Deploy the elastic runtime environment on AWS to make the Kubernetes cluster available to process data.
Step 1. Clean up cluster (optional)
If you need to reinstall the elastic runtime environment after receiving an error, clean up the Kubernetes resources in the AWS environment before you reinstall.
On the jump host, run the following commands:
cd /tmp/src/
./uninstall.sh
rm $HOME/.kube/config
Note: The commands don't remove AWS resources that were provisioned during previous installations.
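Because the commands don't remove AWS resources, you might need to delete leftover resources manually. A minimal sketch, assuming the resource log from the previous installation (the resourceCreationLogFile setting in config.txt) is still present in /tmp/src/:
# List the AWS resources that the previous installation recorded,
# so you can delete them manually in the AWS console if needed.
cat /tmp/src/resource_creation_log.txt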
Step 2. Download the installer on the jump host
Complete the following tasks:
1. Contact your organization's IT team to create a passphrase for the private key on your local machine and to configure the jump host to authenticate using the public key, as in the sketch below.
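The exact procedure is up to your IT team, but the following is a rough sketch of what that setup typically looks like with OpenSSH. The key path and jump host address are placeholders:
# Generate a key pair; ssh-keygen prompts for a passphrase
ssh-keygen -t ed25519 -f ~/.ssh/jump_host_key
# Install the public key on the jump host for key-based authentication
ssh-copy-id -i ~/.ssh/jump_host_key.pub ec2-user@<jump host address>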
Step 3. Update the config.txt file
In the elastic runtime environment directory, open the config.txt file by running the following command:
vi config.txt
The following table describes the values to update in the config.txt file:

IDS_HOST
Domain for the POD for your organization. For example: dm-us.informaticacloud.com. To locate the domain, find your POD in POD Availability and Networking and copy the property from the Login URL field.

USER
Informatica Intelligent Cloud Services user name.

PASS
Informatica Intelligent Cloud Services password.

runtimeEnvironmentId
Elastic runtime environment ID. To find the ID, navigate to the Runtime Environments page in Administrator and copy the elastic runtime environment ID from the URL. For example, in the URL https://usw1.dmr-us.informaticacloud.com/cloudUI/products/administer/main/elastic-agent/KUBERNETES/0141GU25000000000002/overview, the elastic runtime environment ID is 0141GU25000000000002.

SECURITY_GROUP_NAME
Name of the security group that allows access to the elastic runtime environment. If you didn't create a security group and the cluster installer role has permissions for security groups, the cluster installer automatically creates a security group with the required inbound and outbound rules and then continues the installation. If you created a security group and the cluster installer role has permissions to modify it, the cluster installer adds any required inbound rules that are missing. If the cluster installer role doesn't have permissions to modify the security group, cluster installation stops. To continue the installation, edit the security group manually and add the required inbound rules. For more information about the required inbound rules, see Step 1. Create AWS resources.

VPC_NAME
Name of the VPC that you created.
The config.txt file includes the following information:
# Distributed Configuration for Cluster Installer
# =================================================
# Customize the following variables to match your environment.
#
# IDS_HOST
# The host address or URL for the IDS service.
export IDS_HOST="<POD URL like dm-us.informaticacloud.com>"
# USER
# Informatica Intelligent Cloud Services user name for the organization that you want to log in to.
export USER="<IICS user name>"
# PASS
# Informatica Intelligent Cloud Services password.
export PASS="<IICS password>"
# runtimeEnvironmentId
# A unique identifier for your runtime environment.
export runtimeEnvironmentId="<elastic runtime environment ID from Administrator>"
# PROXY_USER PROXY_HOST PROXY_PORT
# Specify these values if your organization uses HTTP proxy for outbound communication.
# You will be prompted to enter Proxy password by the cluster installation script if PROXY_USER is specified.
export PROXY_USER=
export PROXY_HOST=
export PROXY_PORT=
#
# majorVersion
# Major version number for your release. Update as needed.
export majorVersion=202507
# Following variables are provided so that you can customize cluster creation as per your organization policies.
#
# KEY_PAIR_NAME and KUBE_CONFIG_SECRET_NAME
# These values identify the key pair and its secret name used to access your cluster.
# Name of the Key Pair that is used to login to the nodes in the cluster
KEY_PAIR_NAME="idmc-elastic-rte-key-pair"
KUBE_CONFIG_SECRET_NAME="idmc-elastic-rte-kube-config"
#
# ORG_ADMIN_CREDS_SECRET_NAME
# The name of the secret that stores organization administrator credentials.
ORG_ADMIN_CREDS_SECRET_NAME="idmc-elastic-rte-org-creds"
#
# SECURITY_GROUP_NAME
# The security group name defined for access within your environment.
SECURITY_GROUP_NAME="<security group name like sg_ert>"
#
# NODE_NAME_PREFIX
# Prefix for naming the nodes in your cluster.
NODE_NAME_PREFIX="idmc-elastic-rte"
#
# AGENT_APP_LAUNCH_TEMPLATE_NAME and AGENT_APP_ASG_NAME
# Launch template and auto scaling group names for the agent application.
AGENT_APP_LAUNCH_TEMPLATE_NAME="idmc-elastic-rte-agent-app-launch-tmpl"
AGENT_APP_ASG_NAME="idmc-elastic-rte-agent-app-asg"
#
# JOB_NODE_LAUNCH_TEMPLATE_NAME and JOB_NODE_ASG_NAME
# Launch template and auto scaling group names for job nodes.
JOB_NODE_LAUNCH_TEMPLATE_NAME="idmc-elastic-rte-job-node-launch-tmpl"
JOB_NODE_ASG_NAME="idmc-elastic-rte-jon-node-asg"
#
# CONTROL_PLANE_ELB_NAME
# The load balancer name for the control plane in high-availability setups.
# Used only when High Availability (HA) mode is enabled.
CONTROL_PLANE_ELB_NAME="idmc-elastic-rte-control-plane-elb"
#
# efsNameTag
# Tag for EFS (Elastic File System) shared storage to added upon creation
export efsNameTag="idmc-elastic-rte-efs-name"
#
# IS_RUNNING_ON_MASTER
# Set to false if this script is not executed on the master node; otherwise, leave as true.
IS_RUNNING_ON_MASTER=true
#
# VPC_NAME
# Provide the VPC name if the script is not running directly on the master node.
VPC_NAME="<VPC name like vpc_ert>"
#
# resourceCreationLogFile
# Log file name where created resources will be recorded.
export resourceCreationLogFile="resource_creation_log.txt"
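For example, after editing, the required variables might look like the following. All values are illustrative placeholders built from the examples above; substitute the values for your own organization, POD, and AWS environment:
export IDS_HOST="dm-us.informaticacloud.com"
export USER="iics.admin"                             # placeholder user name
export PASS="<IICS password>"
export runtimeEnvironmentId="0141GU25000000000002"   # ID copied from the Administrator URL
SECURITY_GROUP_NAME="sg_ert"
VPC_NAME="vpc_ert"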
Step 4. Run the cluster installation script
Run the following command to start the cluster installation script in the background:
nohup ./create_cluster_nodes_ha.sh &
You can monitor the installation process with the following command:
tail -f nohup.out
Note: If your organization uses a proxy, run the cluster installation script in the foreground, without nohup, so that you can enter the proxy password when the script prompts for it:
./create_cluster_nodes_ha.sh
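If you ran the script in the background, you can check whether it's still running and scan the log for problems with standard shell tools. A rough sketch; the exact messages that the installer writes to nohup.out are release-specific:
# Check whether the installation script is still running
pgrep -f create_cluster_nodes_ha.sh
# Surface any failures reported in the log
grep -iE "error|fail" nohup.out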
Step 5. Mount the EFS file systems on the master node
Mount each EFS file system on the master node. For information about mounting an EFS file system, refer to the AWS documentation.
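For example, the following is a minimal sketch using the amazon-efs-utils mount helper on Amazon Linux. The file system ID and mount point are placeholders, and the package name and mount options can vary by distribution and setup; refer to the AWS documentation for the authoritative steps:
# Install the EFS mount helper (Amazon Linux; other distributions differ)
sudo yum install -y amazon-efs-utils
# Create a placeholder mount point and mount the file system with TLS
sudo mkdir -p /mnt/efs
sudo mount -t efs -o tls fs-0123456789abcdef0:/ /mnt/efs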
Step 6. Verify that the elastic runtime environment is running
To verify that the elastic runtime environment is running in your AWS environment, open the Runtime Environments page in Administrator and expand the elastic runtime environment. Verify that the Data Integration Server is running and that one or more instances are running.
Note: It might take a few minutes for the Data Integration Server to start the instances.
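If you have shell access to the master node, you can also check the cluster directly. A minimal sketch, assuming kubectl is configured on the master node; pod and namespace names are installation-specific:
# All nodes should report a Ready status
kubectl get nodes
# The elastic runtime environment pods should be in the Running state
kubectl get pods --all-namespaces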