You can host a serverless runtime environment on an Amazon Virtual Private Cloud (VPC). The serverless runtime environment creates an elastic network interface (ENI) to connect to your cloud environment.
Before you can create a serverless runtime environment, you must create and configure AWS resources in your VPC to connect to the serverless runtime environment in Informatica's VPC.
Note: Your cloud environment must be on the AWS cloud platform and your VPC must have default tenancy. A serverless runtime environment can't connect to a VPC with dedicated instance tenancy.
Configure your environment
Create and configure AWS resources in your VPC to connect to the serverless runtime environment in Informatica's VPC.
Use the following guidelines to create and configure each resource:
VPC
A VPC contains the data to process in the serverless runtime environment.
Create a VPC in your AWS account. Enable DNS hostnames and DNS resolution for the VPC.
Also, ensure that at least one of the following scenarios apply to you:
- Your VPC's DHCP options set includes AmazonProvidedDNS.
- If you use custom DNS servers in your DHCP options set, ensure that AmazonProvidedDNS is also part of the options set or that the DNS servers can resolve EC2 internal hostnames, for example by internally redirecting those DNS queries to AmazonProvidedDNS.
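If you manage these AWS resources with CloudFormation (an assumption; you can apply the same settings in the AWS console or AWS CLI), a minimal sketch of a VPC definition that meets these requirements might look like the following. The CIDR block is an example value:

Resources:
  DataVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16       # example range; choose a range that fits your network plan
      EnableDnsSupport: true       # DNS resolution
      EnableDnsHostnames: true     # DNS hostnames
      InstanceTenancy: default     # dedicated instance tenancy is not supported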
Security group
A security group controls the traffic flow from the serverless runtime environment.
Create a security group in the VPC. The security group is associated with all ENIs that the serverless runtime environment creates. You specify this security group in the serverless runtime environment properties.
Leave the inbound rules empty to restrict all incoming traffic. The outbound rules can either allow all traffic or limit traffic to all Amazon S3 resources and all source and target systems that the serverless runtime environment accesses.
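As a sketch only, again assuming CloudFormation, a security group with no inbound rules and unrestricted outbound traffic might be defined as follows. The VPC ID is a placeholder:

Resources:
  ServerlessRuntimeSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Security group for the ENIs that the serverless runtime environment creates
      VpcId: vpc-0abc1234def567890         # placeholder VPC ID
      # No SecurityGroupIngress entries, so all incoming traffic is restricted.
      SecurityGroupEgress:
        - IpProtocol: "-1"                 # all protocols
          CidrIp: 0.0.0.0/0                # allow all outbound traffic; narrow this to S3 and your data sources if required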
Private subnet to host the ENI
A private subnet hosts the ENI that the serverless runtime environment uses to connect to your VPC.
Create a private subnet and configure a CIDR range. The CIDR range determines the maximum number of IP addresses and, therefore, how far the serverless runtime environment can scale. Configure the CIDR range to provide at least 25 IP addresses per serverless runtime environment so that the environment can scale effectively when developers run concurrent workloads. For example, a /26 CIDR block provides 64 addresses; AWS reserves 5 addresses in each subnet, leaving 59 usable.
After your organization administrator creates a serverless runtime environment in Administrator, the serverless runtime environment creates an ENI in your private subnet.
Public subnet for internet access
A public subnet provides internet access through a NAT gateway.
Create a public subnet using any availability zone in the region where you created the VPC. The CIDR range must be within the VPC CIDR range. Choose a range based on the number of IP addresses that you want to have within the subnet.
VPC to VPC connectivity
VPC to VPC connectivity is used to access data in a different VPC than the VPC that connects to the serverless runtime environment. For example, a mapping might read data from an Amazon Redshift cluster in a VPC and write data to a different Amazon Redshift cluster in another VPC.
If you process data across VPCs, configure VPC to VPC connectivity. AWS provides several ways to configure VPC to VPC connectivity, such as VPC peering or AWS Transit Gateway. Use AWS PrivateLink wherever it's applicable. For more information, refer to the AWS documentation.
NAT gateway for internet access from the private subnet
A NAT gateway allows outbound traffic to the internet from private instances. All compute instances in the serverless runtime environment that are associated with the ENI are private.
Create a NAT gateway to route outbound traffic from the private subnet to the internet. AWS provides several ways to configure subnet routing rules, such as route tables and NACL. For more information, refer to the AWS documentation.
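As a sketch only, assuming CloudFormation and placeholder VPC and subnet IDs, the NAT gateway and the private subnet's default route might be defined as follows:

Resources:
  NatEip:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc
  NatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      SubnetId: subnet-0aaa1111bbb22222c   # placeholder: the public subnet
      AllocationId: !GetAtt NatEip.AllocationId
  PrivateRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: vpc-0abc1234def567890         # placeholder VPC ID
  PrivateDefaultRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      DestinationCidrBlock: 0.0.0.0/0      # send internet-bound traffic to the NAT gateway
      NatGatewayId: !Ref NatGateway
  PrivateSubnetRouteAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      RouteTableId: !Ref PrivateRouteTable
      SubnetId: subnet-0ddd3333eee44444f   # placeholder: the private subnet that hosts the ENI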
IAM role
An IAM role defines a minimal policy that the serverless runtime environment and advanced cluster worker nodes use to create, attach, detach, and delete the ENI that's associated with the private subnet in your VPC.
The IAM role must be able to access the S3 location for supplementary files as well as the sources and targets you use in mappings. You can use the following template:
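As a rough sketch only, based on the ENI and supplementary file requirements described in this section (the bucket name is a placeholder, and the template that Informatica provides takes precedence), such a policy might look like the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ManageEni",
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:AttachNetworkInterface",
        "ec2:DetachNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DescribeRegions",
        "ec2:DescribeAvailabilityZones",
        "ec2:DescribeVpcs",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups"
      ],
      "Resource": "*"
    },
    {
      "Sid": "SupplementaryFileLocation",
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-supplementary-bucket",
        "arn:aws:s3:::my-supplementary-bucket/*"
      ]
    }
  ]
}

Scope the resources down to your VPC, subnets, and buckets as your security requirements dictate, and add permissions for the sources and targets that your mappings access.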
In the trust relationship, specify the Informatica account number as a trusted entity and create an external ID. To find the Informatica account number, create a serverless runtime environment in Administrator and check the environment properties. You can use the following template:
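As a rough sketch of such a trust relationship, with the Informatica account number and external ID as placeholders (the template that Informatica provides takes precedence):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Informatica account number>:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "<external ID>"
        }
      }
    }
  ]
}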
When you set up your cloud environment, you can add safe IP addresses for IP filtering, set up a system disk, set up a location for JAR files and external libraries, and configure TLS to authenticate REST APIs.
Perform the following VPC configuration tasks as necessary:
•Add trusted IP addresses. If your organization filters based on IP addresses, add the safe Informatica addresses so that they won't get blocked by the firewall. For more information, see Adding trusted Informatica IP addresses.
•Perform additional configuration if you want to use existing EFS or NFS directories as data disks in a serverless runtime environment. For more information, see Using EFS or NFS directories as data disks.
•Create a data disk in your serverless runtime environment if you have files in EFS or NFS directories that you want to use in the environment. For more information, see Configuring a data disk.
•Create a supplementary file location. If your mappings use JAR files and external libraries, set up a location on Amazon S3 to store the files. For more information, see Creating the supplementary file location.
•Configure TLS to authenticate REST APIs. If you use a REST V3 Connector, you can configure TLS to authenticate REST APIs. For more information, see Configuring TLS to authenticate REST APIs.
Adding trusted Informatica IP addresses
If your organization uses trusted IP address ranges, edit the ranges in your organization properties and add the appropriate trusted IP addresses.
US
The following table lists trusted IP addresses for US regions:
Region
Trusted IP addresses
US East (N. Virginia) us-east-1
- 54.160.9.90
- 54.221.247.69
US East (Ohio) us-east-2
- 18.220.76.98
- 3.131.176.232
US West (N. California) us-west-1
- 52.52.220.198
- 13.56.74.27
US West (Oregon) us-west-2
- 44.239.8.148
- 44.242.20.143
APJ
The following table lists trusted IP addresses for APJ regions:
Region
Trusted IP addresses
Asia Pacific (Hong Kong) ap-east-1
- 18.167.71.151
- 18.163.244.73
Asia Pacific (Mumbai) ap-south-1
- 65.1.80.5
- 13.234.141.216
Asia Pacific (Osaka) ap-northeast-3
- Not available
Asia Pacific (Seoul) ap-northeast-2
- 52.79.244.47
- 3.34.56.248
Asia Pacific (Singapore) ap-southeast-1
- 52.76.184.230
- 18.140.193.120
Asia Pacific (Sydney) ap-southeast-2
- 3.24.111.61
- 54.253.179.190
Asia Pacific (Tokyo) ap-northeast-1
- 35.72.149.44
- 13.112.143.134
Canada
The following table lists trusted IP addresses for Canada regions:
Region
Trusted IP addresses
Canada (Central) ca-central-1
- 3.96.182.201
- 3.97.103.68
EMEA
The following table lists trusted IP addresses for EMEA regions:
Region
Trusted IP addresses
Europe (Frankfurt) eu-central-1
- 3.125.185.124
- 3.64.66.226
Europe (Ireland) eu-west-1
- 54.76.54.130
- 54.78.183.88
Europe (London) eu-west-2
- 35.176.60.118
- 18.135.50.152
Europe (Milan) eu-south-1
- 35.152.49.63
- 35.152.45.151
Europe (Paris) eu-west-3
- 15.237.157.126
- 15.237.97.211
Europe (Stockholm) eu-north-1
- 13.49.61.89
- 13.53.141.231
UK
The following table lists trusted IP addresses for UK regions:
Region
Trusted IP addresses
Europe (Frankfurt) eu-central-1
- 18.157.124.91
Europe (Ireland) eu-west-1
- 34.250.251.16
Europe (London) eu-west-2
- 18.170.170.192
Europe (Milan) eu-south-1
- 15.161.184.93
- 15.160.41.209
Europe (Paris) eu-west-3
- 13.37.37.71
Europe (Stockholm) eu-north-1
- 13.53.147.238
Configuring a system disk
The serverless runtime environment can use system disks for improved performance.
Configure a system disk to improve mapping performance in Data Integration.
You can configure system disks in Amazon EFS (Elastic File System) and NFS (Network File System) formats. File system connections in EFS are TLS-enabled by default. File system connections in NFS use NFSv4 (Network File System Version 4).
When you use a system disk, the serverless runtime environment creates a folder with the name <organization ID>/<serverless environment ID> on the system disk. This folder stores job metadata and logs.
Rules and guidelines for the EFS file system
Use the following guidelines when you configure system disks in the Amazon EFS format:
•Set the file system to the ID of the EFS file system.
•Allow the subnet in the serverless runtime environment to access the Amazon EFS file system.
•Configure the EFS security group to allow inbound access from the security group configured in the serverless runtime environment.
•Configure the IAM role in the serverless environment with full access to the EFS file system. You can grant full access in the file system policy or in the IAM role. For example, the following file system policy allows root access to ServerlessRole (SREIICS) for file system fs-12345 and allows SecureTransport only:
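As a rough sketch of such a file system policy, using the role name and file system ID mentioned above and placeholders for the account number and region:

{
  "Version": "2012-10-17",
  "Id": "serverless-efs-policy",
  "Statement": [
    {
      "Sid": "AllowServerlessRoleFullAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<your account number>:role/ServerlessRole"
      },
      "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite",
        "elasticfilesystem:ClientRootAccess"
      ],
      "Resource": "arn:aws:elasticfilesystem:<region>:<your account number>:file-system/fs-12345",
      "Condition": {
        "Bool": {
          "aws:SecureTransport": "true"
        }
      }
    }
  ]
}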
The following table describes the actions in the sample policy:
Action
Description
elasticfilesystem:ClientMount
Provides read-only access to a file system.
elasticfilesystem:ClientWrite
Provides write permissions on a file system.
elasticfilesystem:ClientRootAccess
Provides root access to a file system.
•Create any folder required by an access point before creating the access point itself. For example, if the access point refers to the folder /my-company/dev, then define this folder first before you set up the access point.
Rules and guidelines for the NFS file system
Use the following guidelines when you configure system disks in the NFS format:
•Set the file system to the DNS of the NFS server.
•Configure the subnet in the serverless runtime environment to allow access to the NFS file server.
•Configure the file server security group to allow inbound access from the security group configured in the serverless runtime environment.
Using EFS or NFS directories as data disks
To use existing EFS or NFS directories as data disks in a serverless runtime environment, perform some setup steps so that the serverless runtime environment has permissions to access these directories. When the setup is complete, the serverless runtime environment can read existing files and write new files to these directories.
1. Mount the EFS or NFS directories in an EC2 instance that you have.
2. Log in to the EC2 instance.
3. Locate a user with ID=501. If one doesn't exist, create a new user with this ID.
User ID 501 is the user cldagnt, which the serverless runtime environment uses to access mounted EFS or NFS directories.
4. Assign read and write permissions to the mounted directories for user 501.
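As a sketch only, assuming a typical Linux EC2 instance and an example mount point of /mnt/efs, the user and permission setup might look like this:

# Check whether a user with ID 501 exists; create the cldagnt user with that ID if it doesn't.
getent passwd 501 || sudo useradd -u 501 cldagnt

# Give user 501 read and write access to the mounted directory (example mount point).
sudo chown -R 501 /mnt/efs
sudo chmod -R u+rw /mnt/efs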
Configuring a data disk
Create a data disk in your serverless runtime environment if you have files in EFS or NFS directories that you want to use in the environment without updating all of your mappings.
Once you mount your EFS or NFS locations in a data disk, you have access to the following features:
•Flat file support. You can use flat files from the mounted EFS or NFS locations in your mappings.
•Parameter file support. You can use parameter files stored in the mounted EFS or NFS locations. This simplifies migrating jobs from a Secure Agent group to a serverless runtime environment, since you do not need to modify your mappings.
Tip: If you create data disks, ensure that you've set up the correct user and permissions to use the mounted directories as data disks. For more information, see Using EFS or NFS directories as data disks.
Creating the supplementary file location
If your mappings use JAR files and external libraries for certain transformations and connectors, set up a supplementary file location on Amazon S3 to store the files. The supplementary file location can store files such as the following:
- REST V3 Connector truststore and keystore certificates
- JAR files for the Java transformation
- Installation and resource files for the Python transformation
You can customize the directory structure under the serverless_agent_config folder and specify the relative path to each file in the serverlessUserAgentConfig.yml file.
Configuring TLS to authenticate REST APIs
If you use REST V3 Connector with an API collection or Machine Learning transformation that runs in a serverless runtime environment, you can configure TLS to establish one-way or two-way secure communication to authenticate REST APIs.
Contact Informatica Global Customer Support to request the required custom properties. Make sure that truststore and keystore certificates are in JKS format.
1. Navigate to the supplementary file location on Amazon S3.
2. In the serverless_agent_config folder, create a subfolder called SSL.
3. Add the truststore and keystore certificates to the SSL folder.
For one-way secure communication, add the truststore certificates. For two-way secure communication, add both the truststore and keystore certificates.
4. Copy the code snippet (see the sketch after this procedure) to a text editor and add the relative path to each certificate in the supplementary file location.
5. In the serverless_agent_config folder, open the serverlessUserAgentConfig.yml file.
6. Add the code snippet to the serverlessUserAgentConfig.yml file and save the file.
The serverless runtime environment will copy the certificates from the supplementary file location to its own reference directory so that it can use the certificates at run time.
7. In the REST V3 connection properties, use the following format to specify each truststore and keystore file path in the serverless runtime environment: /home/cldagnt/SystemAgent/serverless/configurations/ssl_store/<certificate name>.jks
Provide the custom properties to your developer. Developers enter the custom properties in mapping tasks that run in the serverless runtime environment.
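As a rough sketch, modeled on the sslStore section of the serverlessUserAgentConfig.yml template shown later in this document, the snippet referenced in step 4 might look like the following, where the certificate file names are placeholders:

agent:
  agentAutoApply:
    general:
      sslStore:
        - fileCopy:
            sourcePath: SSL/<truststore_file_name>.jks
        - fileCopy:
            sourcePath: SSL/<keystore_file_name>.jks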
Configure the serverlessUserAgentConfig.yml file
When you create a supplementary file location, you need to create and configure the serverlessUserAgentConfig.yml file.
To configure which files to copy from the supplementary file location to the serverless runtime environment, specify the file paths within the serverlessUserAgentConfig.yml file.
Note: Escape any spaces or special characters that appear in paths entered in the serverlessUserAgentConfig.yml file.
Populating the serverlessUserAgentConfig.yml File
Use the following template to create your serverlessUserAgentConfig.yml file:
# The Secure Agent is the root element, and configurations are applied to the agent.
# Under the agent, there are three levels:
# 1: apps: Application where you need to apply configurations.
# 2: event: Event relating to the life cycle of application.
#    autoDeploy: Configurations that need the agent app to restart. Configurations are applied and minor versions of the app are upgraded. An upgrade event will detect the difference between the configuration that was last applied and the current request and apply only those configuration changes. Note that Administrator does not show notifications during minor version upgrades.
#    autoApply: Configuration that takes effect immediately, such as copying Swagger files.
# 3: section: Contains configurations based on connectors.

# How do I apply the YML file?
# Create a serverlessUserAgentConfig.yml file with these contents in <supplementary_file_location>/serverless_agent_config.
# The path in the serverlessUserAgentConfig.yml file is relative to <supplementary_file_location>/serverless_agent_config/.

# fileCopy section: Provide the source location of the file that needs to be copied.

version: 1
agent:
  # At the agent level, provide general configurations that are not specific to the application.
  agentAutoApply:
    general: # General section for common configurations across applications and connectors.
      sslStore: # Use this to copy SSL files to the instance machine. You can provide a list of fileCopy.
        - fileCopy:
            sourcePath: SSL/RESTV2_JWTpyn.jks
  # Data Integration Server app
  dataIntegrationServer:
    autoApply: # Apply configurations that don't need to upgrade the minor version or a restart of the app. For example, you can copy files.
      restv2: # Connector section
        swaggers: # List of Swagger files to copy to the instance machine.
          - fileCopy:
              sourcePath: restv2/<swagger_file_name>.json
        keystores: # List of keystore files to copy to the instance machine.
          - fileCopy:
              sourcePath: restv2/key
        truststores: # List of truststore files to copy to the instance machine.
          - fileCopy:
              sourcePath: restv2/key.ext
      wsconsumer:
        wsdls:
          - fileCopy:
              sourcePath: s3/
      jdbc:
        drivers:
          - fileCopy:
              sourcePath: s3/file
    autoDeploy:
      # A change in this event will trigger a minor version upgrade with the new configurations.
      # In this case, the Data Integration Server app will get a minor version upgrade.
      general: # General section for Data Integration Server app autoDeploy event.
        ssls:
          - fileCopy:
              sourcePath: SSL/RESTV2_JWTpyn.jks
            importCerts:
              certName: cname
              alias: IICS
      sap:
        jcos: # List of jco related files to copy.
          - fileCopy:
              sourcePath: sap/jco/libsapjco3.so
          - fileCopy:
              sourcePath: sap/jco/sapjco3.jar
        nwrfcs: # List of nwrfc related files to copy.
          - fileCopy:
              sourcePath: sap/nwrfc/libicudata.so.50
          - fileCopy:
              sourcePath: sap/nwrfc/libicudecnumber.so
        hanas: # List of hana related files to copy.
          - fileCopy:
              sourcePath: sap/hana/libicudata.so.50
      odbc:
        # Specify ODBC configurations.
        # This section can be used to configure multiple drivers.
        drivers: # Specify drivers to copy.
          - fileCopy:
              sourcePath: ODBC/DWdb227.so
          - fileCopy:
              sourcePath: ODBC/DWdb227.so
        dns:
          # Specify DNS entries. These entries will be updated in odbc.ini file.
          # If the file is not present, a new odbc.ini file will be created.
          # Make sure to give a name as a unique entry for the ini file configuration. The file will be read and updated using the name.
          - name: "SQL server" # Section name in ini file unique key.
            entries:
              - key: Driver # Only provide the driver file name without the path.
                value: DWsqls227.so # Because the file is copied, the path to attach during odbc entry is already known.
              - key: Description
                value: "SQL Server 2014 Connection for ODL"
              - key: HostName
                value: INVW16SQL19
              - key: PortNumber
                value: 1433
              - key: Database
                value: adapter_semantic
              - key: QuotedId
                value: No
              - key: AnsiNPW
                value: Yes
For more information about populating connector information in the serverlessUserAgentConfig.yml file, see the help for the appropriate connector.
Copying files for the Elastic Server
In the serverlessUserAgentConfig.yml file, you can specify files to copy from the supplementary file location to the serverless runtime environment. When you run mappings in advanced mode in the serverless runtime environment, the Elastic Server and the advanced cluster can use the files to access and process data.
You can copy the following file types for the Elastic Server:
•JDBC V2 Connector JAR files
•JAR files for the Java transformation
•Installation and resource files for the Python transformation
You can customize file paths by specifying the relative path to the file in the supplementary file location. For example, you might store JDBC V2 Connector JAR files in the following locations:
You can add files for the Elastic Server to the supplementary file location while the serverless runtime environment is running. Files for the Elastic Server include JDBC V2 Connector JAR files, Java transformation JAR files, and Python transformation resource files.
To add a file while the environment is running, complete the following steps:
1. Add the file to the appropriate location in <Supplementary file location>/serverless_agent_config/.
2. Specify the file in the serverlessUserAgentConfig.yml file. For information about the serverlessUserAgentConfig.yml file, see Creating the supplementary file location or the help for the appropriate connector.
It may take up to 10 minutes for the file to synchronize to the serverless runtime environment.
You must redeploy the serverless runtime environment after you perform any of the following tasks:
•Update an existing file.
To update an existing file while the serverless runtime environment is running, you must add the file to the supplementary file location and to the serverlessUserAgentConfig.yml file using a different name.
•Add other file types, such as ODBC shared libraries.
•Add a new folder or directory, such as a Python installation directory for the Python transformation.
•Remove files from the serverless runtime environment.
Proxy servers in a serverless runtime environment
If your organization uses an outgoing proxy server to connect to the internet, you can configure the serverless runtime environment to connect to Informatica Intelligent Cloud Services through the proxy server.
When you configure a proxy server for the serverless runtime environment, you define the required proxy server settings in the serverlessUserAgentConfig.yml file before you can import metadata or design your mappings. Data Integration copies the proxy entries in the file to the serverless runtime environment.
To apply the proxy when you run mappings, set the proxy configurations on the Serverless Environments page in Administrator.
You can configure proxy settings for the serverless runtime environment in certain connectors. To see if the proxy applies in a connector, see the help for the appropriate connector.
Configuring the proxy in the serverlessUserAgentConfig.yml file
To apply proxy server settings when you design mappings and import metadata, add the proxy server details to the serverlessUserAgentConfig.yml file.
Use the following code snippet as a template to provide the values for the proxy server in the serverlessUserAgentConfig.yml file:
agent:
  agentAutoDeploy:
    general:
      proxy:
        proxyHost: <Host_name of proxy server>
        proxyPort: <Port number of the proxy server>
        proxyUser: <User name of the proxy server>
        proxyPassword: <Password to access the proxy server>
        nonProxyHost: <Non-proxy host>
Configuring the proxy in the JVM options
To apply proxy server settings when you run mappings or tasks, configure JVM options in Administrator.
1. On the Serverless Environments page, click the name of the serverless runtime environment.
2. Click Edit.
3. In the Runtime Configuration Properties section, select the Service as Data Integration Server and the Type as DTM.
4. Edit any of the JVMOption fields and specify appropriate values for each parameter based on whether you use an HTTPS or HTTP proxy server.
The following table describes the parameters:
Parameter
Description
-Dhttp.proxySet=
Determines if the serverless runtime environment must use the proxy settings when the outgoing proxy server is HTTP. Select -Dhttp.proxySet=True to use the proxy.
-Dhttps.proxySet=
Determines if the serverless runtime environment must use the proxy settings when the outgoing proxy server is HTTPS. Select -Dhttps.proxySet=True to use the proxy.
-Dhttp.proxyHost=
Host name of the outgoing HTTP proxy server.
-Dhttp.proxyPort=
Port number of the outgoing HTTP proxy server.
-Dhttp.proxyUser=
Authenticated user name for the HTTP proxy server.
-Dhttp.proxyPassword=
Password for the authenticated user.
-Dhttps.proxyHost=
Host name of the outgoing HTTPS proxy server.
-Dhttps.proxyPort=
Port number of the outgoing HTTPS proxy server.
-Dhttps.proxyUser=
Authenticated user name for the HTTPS proxy server.
-Dhttps.proxyPassword=
Password for the authenticated user.
5. Click Save.
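For example, with an HTTPS proxy server, the JVMOption fields might contain values like the following, where the host, port, and credentials are placeholders:

-Dhttps.proxySet=true
-Dhttps.proxyHost=proxy.example.com
-Dhttps.proxyPort=8080
-Dhttps.proxyUser=proxyuser
-Dhttps.proxyPassword=<password>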
Allowing domains in the proxy server
To run a mapping successfully, the proxy server must allow traffic from the AWS endpoints that are required to process the data in the mapping.
Specify the region that contains the VPC that connects to the serverless runtime environment.
Serverless runtime environment in Administrator
Once you've created the environment on AWS, you create a corresponding environment on the Serverless Environments page in Administrator. You can view properties for a serverless runtime environment by expanding the Actions menu for the environment and selecting View.
The Serverless Environments page includes an option to create a serverless runtime environment, a refresh icon, and an Actions menu for each environment.
To create a serverless runtime environment, enter the serverless runtime environment properties. It takes at least five minutes for the serverless runtime environment to become available. Use the Serverless Environments page to track the status of the environment and review any status messages.
You can create a maximum of 10 serverless runtime environments in your organization. If you have a trial license, you can create a maximum of two environments.
Configuring the Basic Configuration properties
The Basic Configuration section of a serverless runtime environment contains general information about the environment, including its Informatica Account Number and its current status.
The following table describes the basic properties:
Property
Description
Name
Name of the serverless runtime environment.
Description
Description of the serverless runtime environment.
Task Type
Type of tasks that run in the serverless runtime environment.
- Select Data Integration to run mappings outside of advanced mode.
- Select Advanced Data Integration to run mappings in advanced mode.
Cloud Platform
Cloud platform to host the serverless runtime environment.
You can use only Amazon Web Services (AWS).
Max Compute Units Per Task
Maximum number of serverless compute units corresponding to machine resources that a task can use.
Task Timeout
Amount of time in minutes to wait for a task to complete before it is terminated. The timeout ensures that serverless compute units are not unproductive when a task hangs.
By default, the timeout is 2880 minutes (48 hours). You can set the timeout to a value that is less than 2880 minutes.
Informatica Account Number
Informatica's account number on the cloud platform where the serverless runtime environment will be created. The account number is populated automatically.
External ID
External ID to associate with the role that you create for the serverless runtime environment. You can use the generated external ID or specify your own external ID.
Configuring the Platform Configuration properties
The Platform Configuration section of a serverless runtime environment contains technical information about the platform, including the region, subnet, and security group.
The following table describes the platform properties:
Property
Description
Configuration Name
Name of the resource configuration.
Configuration Description
Description of the resource configuration.
The description can be up to 256 characters and can contain alphanumeric characters and the following special characters:
._-:/()#,@[]+=&;{}!$"*
Account Number
Your account number on the cloud platform.
Region
Region of your cloud environment. The sources and targets that you use in mappings should either reside in or be accessible from this region.
AZ ID
Identifier for the availability zone. The sources and targets that you use in mappings must either reside in or be accessible from the availability zone.
VPC ID
ID of the Amazon Virtual Private Cloud (VPC). The VPC must be configured with an endpoint to access the sources and targets that you use in mappings.
For example, vpc-2f09a348.
Subnet ID
ID of the subnet within the VPC. The subnet must have an entry point to access the sources and targets that you use in mappings.
For example, subnet-b46032ec.
Security Group ID
ID of the security group that the serverless runtime environment will attach to the ENI. The security group allows access to the sources and targets that you use in tasks.
For example, sg-e1fb8c9a.
Role Name
Name of the IAM role that the serverless runtime environment can assume on your AWS account.
The role must have permissions to create, read, delete, list, detach, and attach an ENI. It also requires read and write permissions on the supplementary file location.
Use the Informatica account number and the external ID when you create a policy for the role.
AWS Tags
AWS tags to label the ENI that is created in your AWS account.
Each tag must be a key-value pair in the format: Key=string,Value=string where Key and Value are case-sensitive.
Use a space to separate tags. For example: Key=Environment,Value=dev Key=CostCenter,Value=analytics
Follow the rules and guidelines for tagging that AWS specifies. For more information, refer to the AWS documentation.
Supplementary File Location
Location on Amazon S3 to store supplementary files, such as JAR files and external libraries for certain transformations and connectors.
Use the format: s3://<bucket name>/<folder name>.
You must put script files in a folder named command_scripts. This folder can have subfolders. At regular intervals, Informatica Intelligent Cloud Services synchronizes the files in the command_scripts directory to the Secure Agent, under the agent install directory at apps/Common_Integration_Components/data/command/serverless/command_scripts. If you update files in Amazon S3, Informatica Intelligent Cloud Services automatically synchronizes them to the Secure Agent.
Configuring the Runtime Configuration properties
The Runtime Configuration properties section of a serverless runtime environment determines how the environment behaves.
Use this section to set variables for the default directories and to reduce the number of tasks that can run at the same time.
Note: Don't change any other variables or properties unless directed by your system administrator or by Informatica Global Customer Support.
Setting variables for default directories
You can set system variables that the serverless runtime environment uses for locations such as the source and target directories and temp files. Review the system defaults and update them as necessary.
Directory names can't contain the following special characters: * ? < > " | ,
Tip: Filter the list to show "Service = Data_Integration_Server" and "Type = PMRDTM_CFG" to find the system variables more easily.
The following table describes the system variables:
System Variable Name
Description
$PMLookupFileDir
Directory for lookup files.
Default is $PMRootDir
$PMBadFileDir
Directory for reject files.
Default is $PMRootDir/error
$PMCacheDir
Directory for index and data cache files.
Default is $PMRootDir/cache
$PMStorageDir
Directory for state of operation files. The Data Integration Service uses these files for recovery if you have the high availability option or if you enable a workflow for recovery. These files store the state of each workflow and session operation.
Default is $PMRootDir
$PMTargetFileDir
Directory for target files.
Default is $PMRootDir
$PMSourceFileDir
Directory for source files.
Default is $PMRootDir
$PMExtProcDir
Directory for external procedures.
Default is $PMRootDir
$PMTempDir
Directory for temporary files.
Default is $PMRootDir/temp
Reducing the number of simultaneous tasks
By default, a serverless runtime environment can run 150 tasks at the same time. To reduce the number of simultaneous tasks, set the maxDTMProcesses property under "Service = Data_Integration_Server" and "Type = Tomcat." The value can be between 1 and 150.
Configuring the System Disk properties
Configuring a system disk in the serverless runtime environment can improve mapping performance in Data Integration.
The following table describes the properties for a data disk:
Property
Description
Type
Data disk type, either EFS or NFS.
File System
For EFS disks, the file system is the file system ID of the EFS disk.
For NFS disks, the file system is the DNS of the file system.
Source Mount
File system path to be mounted in the serverless runtime environment.
Target Mount
File system to be mounted on the Secure Agent.
Access Point
The ID of the Amazon EFS file system access point.
The access point ensures isolation for tenants in a multi-tenant EFS file system.
Once an access point is set up, you can configure the file system policy to allow access only to the access point for the serverless IAM role.
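As a rough sketch, with placeholders for the account number, region, access point ID, and role name, such a file system policy statement might look like the following:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<your account number>:role/<serverless role name>"
      },
      "Action": [
        "elasticfilesystem:ClientMount",
        "elasticfilesystem:ClientWrite",
        "elasticfilesystem:ClientRootAccess"
      ],
      "Condition": {
        "StringEquals": {
          "elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:<region>:<your account number>:access-point/<access point ID>"
        }
      }
    }
  ]
}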
Serverless runtime validation
The validation process validates the AWS resource configuration properties and some network settings on the serverless runtime environment when you perform specific tasks.
The validation process connects to your AWS account using the IAM role to verify and list the resource properties, such as the subnet ID, availability zone ID, and role name. The IAM role establishes trust between your AWS account and the Informatica AWS account so that the serverless runtime environment can create an ENI and securely connect to data sources in your cloud environment. The IAM role must have permission to view the resource. For more information about setting up the IAM role, see Configure your environment.
The following role permissions are required for validation:
•ec2:DescribeRegions
•ec2:DescribeAvailabilityZones
•ec2:DescribeVpcs
•ec2:DescribeSubnets
•ec2:DescribeSecurityGroups
If validation fails for any resource, the serverless runtime environment fails to start. You can download the detailed validation messages using the download option on the Serverless Environments page or the specific serverless runtime environment configuration page. Validation results and messages are available for failed environments only.
In addition to the serverless runtime environment properties, the validation process also checks for the number of IP addresses available on the subnet. The serverless runtime environment creation fails if there are insufficient IP addresses available on the subnet.
Note: The validation process does not validate the Amazon Virtual Private Cloud (VPC) ID if the subnet ID does not exist in your Amazon account.
Serverless runtime environment properties and network settings are validated when you perform the following tasks on a serverless runtime environment:
•Create a new serverless runtime environment.
•Edit a failed serverless runtime environment and save the updates.
•Clone a serverless runtime environment and save the configurations.
•Redeploy a failed serverless runtime environment.
Serverless runtime environment management
After you create a serverless runtime environment in AWS, you can perform management tasks such as editing, redeploying, or cloning the serverless runtime environment.
Editing the serverless runtime environment
The properties that you can edit in a serverless runtime environment vary, depending on the environment's status.
You can edit the following properties, based on the status of the serverless runtime environment:
•Up and Running. You can only update the following fields: Max Compute Units Per Task and Task Timeout. The updated values take effect for subsequent task runs.
•Failed. You can update all the properties. The updated properties take effect once you use the Redeploy action.
If the serverless runtime environment shows any other status, you must delete the serverless runtime environment and create a new one.
To edit a serverless runtime environment, expand the Actions menu for the serverless runtime environment and select Edit.
Redeploying the serverless runtime environment
The redeploy action restarts the serverless runtime environment, either after a change in the environment or if the environment shuts down for a specific reason.
You might redeploy the serverless runtime environment in the following situations:
•You change your organization's licenses.
•The serverless runtime environment shuts down because the organization ran out of serverless compute units. You can add more compute units to your organization and redeploy the serverless runtime environment.
•You update the configuration in your cloud environment. For example, you update files in the supplementary file location, or you update the policy that is attached to the IAM role.
Before you redeploy a serverless runtime environment, in Monitor, be sure that no jobs are running in the runtime environment. Then, in Administrator, expand the Actions menu for the serverless runtime environment and click Redeploy.
Note: Wait until the redeployment completes before you run a mapping. Any jobs running during redeployment will fail.
Cloning the serverless runtime environment
You might clone a serverless runtime environment to create another environment that has a similar configuration. For example, you want to create a similar serverless runtime environment that connects to a different subnet in your cloud environment or uses a different security group.
To clone a serverless runtime environment, expand the Actions menu for the serverless runtime environment and select Clone.
Deleting the serverless runtime environment
Delete a serverless runtime environment when it's no longer required.
Before you delete a serverless runtime environment, perform the following tasks:
•Use Monitor to make sure that the environment is not running any jobs.
•Use the Show Dependencies action to see if the environment is being used by any tasks, mappings, or connections. If dependencies exist, remove them before deleting the environment.
To delete a serverless runtime environment, expand the Actions menu for the serverless runtime environment and select Delete.
Metering serverless compute units
Serverless compute units represent CPUs and memory that a serverless runtime environment can use to run tasks.
When you create a serverless runtime environment, you configure a maximum number of serverless compute units that each task can request from the serverless runtime environment. When you create a mapping task, you can override the maximum number of compute units that the task can request. In Monitor, you can view the number of compute units that the task requested and consumed.
If the task runs longer than the task timeout that you specify, the serverless runtime environment terminates the task.
For information about the meter, see Organization Administration.
Configuring disaster recovery
If a disaster impacts the region or the availability zone that hosts a serverless runtime environment, redirect jobs to a temporary serverless runtime environment in a stable region or availability zone as part of your organization's disaster recovery plan.
Disaster recovery procedure
During a disaster, all virtual machines in the serverless runtime environment shut down and jobs can no longer run in the environment.
To minimize data loss and downtime, complete the following tasks:
1. Create a temporary serverless runtime environment in a stable region or availability zone.
2. Make sure that the connections used in jobs are available in the stable region or availability zone.
3. Clean up data related to incomplete job runs. If data was partially loaded to a target, manually delete the data or update the mapping to truncate the target before writing new rows.
4. Redirect jobs to the temporary environment.
Restoring the primary environment
When the region or availability zone that hosts the primary serverless runtime environment has recovered, you can restore the primary environment.
To restore the primary environment, complete the following tasks:
1. Clean up the ENIs that were created in your AWS account for the primary environment.