Message | Corrective action |
---|---|
Variable <<NAME>> is not defined | Ensure that the missing variable is defined with the appropriate value in the config.txt file. |
<<COUNT>> variable(s) are not defined in config.txt | Ensure that all variables are defined in the config.txt file. |
<<COUNT>> variable(s) are not defined in config.txt when High Availability (HA) mode is on | Ensure that all variables related to high availability are defined in the config.txt file. |
Failed to get <<VPC_NAME>> | Ensure that VPC_NAME is valid. Check that the AWS roles and privileges for the cluster installer are assigned to the master node. |
Failed to get idmc ami id | Check the AWS roles and privileges assigned to the master node. |
Failed to get Subnet <<NAME>> | Check the AWS roles and privileges assigned to the master node. |
Failed to get root volume name for AMI ID <<AMI_ID>> | Check the AWS roles and privileges assigned to the master node. |
Failed to check if instance <<NAME>> exists | Ensure that the AWS Describe Instances permission is present in the instance profile that's attached to the master node. |
Failed to authenticate, please check config.txt | Ensure that the user name and password in the config.txt file are correct. |
Failed to get Subnet of instance <<INSTANCE_ID>> | Ensure that the AWS Describe Instances permission is present in the instance profile that's attached to the master node. |
VPC_NAME should be defined when script is not running on master node | Ensure that both VPC_NAME and IS_RUNNING_ON_MASTER are defined in the config.txt file. |
MASTER_INSTANCE_TYPE is not defined | Ensure that the master instance type is defined on the Environment Configuration tab in Administrator. |
Instance type <<TYPE>> is invalid in region <<NAME>> | Ensure that the region and instance type are defined on the Environment Configuration tab in Administrator. |
Invalid value <<VALUE>> for HA_MODE, Should be true or false | When using the REST API, ensure that the haEnabled field contains a boolean value. |
Only one subnet should be specified in non-HA mode, given <<values>> | When high availability isn't enabled, only one subnet can be specified. Check the subnet names on the Environment Configuration tab in Administrator. |
Exactly 3 subnets should be provided in HA mode, given <<values>> | When high availability is enabled, three subnets must be specified. Check the subnet names on the Environment Configuration tab in Administrator. |
Subnet <<NAME>> does not exist | Ensure that the subnet names on the Environment Configuration tab in Administrator are correct. |
Given Subnets map to only <<COUNT>> Availability Zone(s) | Ensure that the subnet names on the Environment Configuration tab in Administrator aren't the same as the master node's subnet and that the subnets map to distinct Availability Zones. The check after this table shows how to list each subnet's Availability Zone. |
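
For the VPC and subnet messages in the preceding table, you can verify the values from the master node before you rerun the installer. The following is a minimal sketch that assumes the AWS CLI is installed on the master node; the VPC name and subnet IDs are placeholders for the values on the Environment Configuration tab in Administrator.

```bash
# Placeholder values; substitute the VPC name and subnet IDs from your configuration.
VPC_NAME="idmc-cluster-vpc"
SUBNET_IDS="subnet-0aaa11 subnet-0bbb22 subnet-0ccc33"

# Confirms that the VPC is visible to the instance profile attached to the master node.
aws ec2 describe-vpcs \
  --filters "Name=tag:Name,Values=${VPC_NAME}" \
  --query 'Vpcs[].VpcId' --output text

# Lists each subnet with its Availability Zone. In HA mode, the three subnets
# should map to three distinct zones.
aws ec2 describe-subnets \
  --subnet-ids ${SUBNET_IDS} \
  --query 'Subnets[].[SubnetId,AvailabilityZone]' --output table
```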
Message | Corrective action |
---|---|
Proxy connection failed | Ensure that you can reach www.informatica.com using the specified proxy connection details. |
Proxy port [<<PORT>>] is invalid | Check the proxy port number. The port must be a numeric value between 1 and 65535, inclusive. |
<<file>> does not exist | Ensure that the file <<file>> exists and that the file name is one of the values that the installer expects. |
Proxy configuration is not valid. | Check and correct the errors that appeared before this message. |
Failed to generate agent token | The ProxyUtil binary failed to generate the proxy.ini file. Contact Informatica Global Customer Support. |
Proxy password not specified | The proxy password wasn't entered. Don't run the installer in background mode when proxy mode is enabled, because the installer needs to read the password from the keyboard. |
Failed to setup proxy config | Check and correct the errors that appeared before this message. |
Failed to set proxy for containerd | Check and correct the errors that appeared before this message. |
Failed to create proxy ini secret | Check and correct the errors that appeared before this message. |
Failed to setup containerd: proxy password not specified. | Check and correct the errors that appeared before this message. |
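
For the proxy messages in the preceding table, a quick connectivity check from the master node can confirm the proxy host, port, and the reachability of www.informatica.com before you rerun the installer. This is a sketch only; the proxy host and port are placeholder values, and the check assumes that curl is available.

```bash
# Placeholder proxy details; replace with the values you plan to enter in the installer.
PROXY_HOST="proxy.example.com"
PROXY_PORT=8080      # must be a numeric value between 1 and 65535

# Verifies that www.informatica.com is reachable through the proxy.
# Expect an HTTP status code such as 200 or 301 on success.
curl -sS -o /dev/null -w '%{http_code}\n' \
  -x "http://${PROXY_HOST}:${PROXY_PORT}" https://www.informatica.com
```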
Message | Corrective action |
---|---|
Failed to create secret <<NAME>> in AWS secrets manager | Ensure that the AWS Create secret privilege is assigned to the instance profile that's attached to the master node. |
Failed to update secret <<NAME>> in AWS secrets manager | Ensure that the AWS Update secret privilege is assigned to the instance profile that's attached to the master node. |
Failed to read value of secret <<NAME>> | Ensure that the AWS Read secret privilege is assigned to the instance profile that's attached to the master node. |
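
If the Secrets Manager messages in the preceding table appear, you can test the instance profile's access from the master node with the AWS CLI. The secret name below is a placeholder, and the IAM actions named in the comments are the ones these operations typically require; the installer's exact permission set may differ.

```bash
# Placeholder secret name; substitute the name from the error message.
SECRET_NAME="idmc/cluster/example-secret"

# Fails with AccessDeniedException if the instance profile lacks
# secretsmanager:DescribeSecret or secretsmanager:GetSecretValue.
aws secretsmanager describe-secret --secret-id "${SECRET_NAME}"
aws secretsmanager get-secret-value --secret-id "${SECRET_NAME}" \
  --query 'Name' --output text

# Creating or updating the secret additionally requires
# secretsmanager:CreateSecret and secretsmanager:UpdateSecret.
```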
Message | Corrective action |
---|---|
Failed to create security group $SECURITY_GROUP_NAME ingress rule for port <<PORT_NUMBER>> | Ensure that the AWS Create Security Group Rule privilege is assigned to the instance profile that's attached to the master node. |
Failed to get Security Group <<NAME>> | Ensure that the AWS Read Security Group Rule privilege is assigned to the instance profile that's attached to the master node. |
Failed to create security group <<NAME>> | Ensure that the AWS Create Security Group privilege is assigned to the instance profile that's attached to the master node. |
Failed to create sec group rule for ssh | Check and correct the errors that appeared before this message. |
Failed to create sec group rule for DNS | Check and correct the errors that appeared before this message. |
Failed to create sec group rule for BGP | Check and correct the errors that appeared before this message. |
Failed to create sec group rule for Kube API Service | Check and correct the errors that appeared before this message. |
Failed to create sec group rule for NFS/EFS | Check and correct the errors that appeared before this message. |
Failed to create sec group rule for etcd | Check and correct the errors that appeared before this message. |
Failed to create sec group rule for VXLAN | Check and correct the errors that appeared before this message. |
Failed to create sec group rule for Kubelet API | Check and correct the errors that appeared before this message. |
Failed to create sec group rule for Kube Proxy | Check and correct the errors that appeared before this message. |
Failed to create sec group rule for Kube-scheduler | Check and correct the errors that appeared before this message. |
Failed to create sec group rule for Kube-controller-manager | Check and correct the errors that appeared before this message. |
Failed to create sec group rule for Node Port | Check and correct the errors that appeared before this message. |
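
For the security group messages in the preceding table, you can confirm that the instance profile attached to the master node can read security groups. The group name below is a placeholder, and the IAM actions in the comments are the ones these operations typically require.

```bash
# Placeholder security group name; substitute the name from the error message.
SG_NAME="idmc-cluster-sg"

# Fails if the instance profile lacks ec2:DescribeSecurityGroups.
aws ec2 describe-security-groups \
  --filters "Name=group-name,Values=${SG_NAME}" \
  --query 'SecurityGroups[].[GroupId,GroupName]' --output table

# Creating the group and its ingress rules also requires
# ec2:CreateSecurityGroup and ec2:AuthorizeSecurityGroupIngress.
```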
Message | Corrective action |
---|---|
<<NAME>> node creation failed | Ensure that the AWS RunInstances permission is present in the instance profile that's attached to the master node. |
Failed to create instance <<NAME>> | Ensure that the AWS RunInstances permission is present in the instance profile that's attached to the master node. |
Nodes were not ready after <<COUNT>> retries | Check the status of the master node and the startup log from the AWS Management Console. |
Failed to get instance id for <<NAME>> | Ensure that the AWS DescribeInstances permission is present in the instance profile that's attached to the master node. |
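
For the node creation messages in the preceding table, you can confirm that the instance profile attached to the master node can describe instances. The node name below is a placeholder; the IAM actions in the comments are the ones these operations typically require.

```bash
# Placeholder node name tag; substitute the node name from the error message.
NODE_NAME="idmc-worker-node"

# Fails if the instance profile lacks ec2:DescribeInstances.
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=${NODE_NAME}" \
  --query 'Reservations[].Instances[].[InstanceId,State.Name]' --output table

# Creating the node itself also requires ec2:RunInstances on the instance profile.
```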
Message | Corrective action |
---|---|
Failed to create load balancer <<NAME>> | Ensure that the AWS Create ELB permission is present in the instance profile that's attached to the master node. |
Failed to deregister all instances from elb | Ensure that the AWS Deregister Instances permission is present in the instance profile that's attached to the master node. |
Failed to get elb dns | Ensure that the AWS Describe ELB permission is present in the instance profile that's attached to the master node. |
Failed to add master instance <<INSTANCE_ID>> to <<ELB_NAME>> | Ensure that the AWS Register Instances permission is present in the instance profile that's attached to the master node. |
Failed to add first master instance <<INSTANCE_ID>> <<IP>> to <<ELB_NAME>> | Ensure that the AWS Register Instances permission is present in the instance profile that's attached to the master node. |
Failed to register instance <<INSTANCE_ID>> to load balancer <<NAME>> | Ensure that the AWS Register Instances permission is present in the instance profile that's attached to the master node. |
Failed to get instance health of load balancer <<NAME>> | Ensure that the AWS Describe ELB permission is present in the instance profile that's attached to the master node. |
Failed to get load balancer <<NAME>> details | Ensure that the AWS Describe ELB permission is present in the instance profile that's attached to the master node. |
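
For the load balancer messages in the preceding table, you can confirm that the instance profile attached to the master node can describe the load balancer. This sketch assumes a Classic Load Balancer, which the register and deregister instance operations in the messages suggest; the name below is a placeholder, and the IAM actions in the comments are the ones these operations typically require.

```bash
# Placeholder load balancer name; substitute the name from the error message.
ELB_NAME="idmc-cluster-elb"

# Fails if the instance profile lacks elasticloadbalancing:DescribeLoadBalancers.
aws elb describe-load-balancers --load-balancer-names "${ELB_NAME}" \
  --query 'LoadBalancerDescriptions[].DNSName' --output text

# Fails if the profile lacks elasticloadbalancing:DescribeInstanceHealth.
aws elb describe-instance-health --load-balancer-name "${ELB_NAME}"

# Registering and deregistering instances additionally requires
# elasticloadbalancing:RegisterInstancesWithLoadBalancer and
# elasticloadbalancing:DeregisterInstancesFromLoadBalancer.
```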
Message | Corrective action |
---|---|
Failed to create launch template <<NAME>> | Ensure that the AWS Create Launch Template permission is present in the instance profile that's attached to the master node. |
Failed to create ASG <<NAME>> | Ensure that the AWS Create Auto Scaling Group permission is present in the instance profile that's attached to the master node. |
Failed to create ASG after <<COUNT>> attempts | Ensure that the AWS Create Auto Scaling Group permission is present in the instance profile that's attached to the master node. |
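
For the auto scaling messages in the preceding table, you can confirm that the instance profile attached to the master node can describe launch templates and Auto Scaling groups. The names below are placeholders, and the IAM actions in the comments are the ones these operations typically require.

```bash
# Placeholder names; substitute the names from the error message.
ASG_NAME="idmc-worker-asg"
TEMPLATE_NAME="idmc-worker-launch-template"

# Fails if the instance profile lacks autoscaling:DescribeAutoScalingGroups.
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names "${ASG_NAME}"

# Fails if the profile lacks ec2:DescribeLaunchTemplates. Creating them also
# requires ec2:CreateLaunchTemplate and autoscaling:CreateAutoScalingGroup.
aws ec2 describe-launch-templates --launch-template-names "${TEMPLATE_NAME}"
```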
Message | Corrective action |
---|---|
Failed to generate ALMS manifest | Check and correct the errors that appeared before this message. |
Failed to get services for RTE <<ID>>, Response Code: <<CODE>> | Ensure that Secure Agent services are enabled in the elastic runtime environment. |
ALMS is not configured for this RTE <<ID>> | Ensure that the Secure Agent services are enabled in the elastic runtime environment and the images are available in the repository. |
ALMS image version not found for this RTE <<ID>> | Check and correct the errors that appeared before this message. |
Could not get version of ALMS to run | Check and correct the errors that appeared before this message. |
Failed to create manifest for cluster autoscaler | Check and correct the errors that appeared before this message. |
Message | Corrective action |
---|---|
Failed to generate manifests for EFS | Ensure that the correct AWS EFS permissions are present in the instance profile that's attached to the master node. |
Failed to create EFS mount targets | Ensure that the correct AWS EFS permissions are present in the instance profile that's attached to the master node. |
Invalid EFS Id $systemDiskId specified in the cluster config | Ensure that the correct AWS EFS permissions are present in the instance profile that's attached to the master node. |
Failed to create mount target for EFS <<ID>> in subnet <<ID>>, secutiry group = <<ID>> | Ensure that the correct AWS EFS permissions are present in the instance profile that's attached to the master node. |
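
For the EFS messages in the preceding table, you can confirm that the instance profile attached to the master node can describe the file system and its mount targets. The file system ID below is a placeholder, and the IAM actions in the comments are the ones these operations typically require.

```bash
# Placeholder EFS ID; substitute the system disk ID from the cluster configuration.
EFS_ID="fs-0123456789abcdef0"

# Fails if the instance profile lacks elasticfilesystem:DescribeFileSystems.
aws efs describe-file-systems --file-system-id "${EFS_ID}"

# Fails if the profile lacks elasticfilesystem:DescribeMountTargets. Creating
# mount targets also requires elasticfilesystem:CreateMountTarget.
aws efs describe-mount-targets --file-system-id "${EFS_ID}"
```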
Message | Corrective action |
---|---|
Internal error: Usage: <<NAME>> masterIP isRunningOnMaster command | Service error. Contact Informatica Global Customer Support. |
Internal error: Usage: <<NAME>> masterIP isRunningOnMaster fileName | Service error. Contact Informatica Global Customer Support. |
Usage: $0 ELB_NAME instance_id_1 ... instance_id_n | Service error. Contact Informatica Global Customer Support. |
Usage: $0 instance_id AMI_NAME | Service error. Contact Informatica Global Customer Support. |
Usage $0 \"asg_info_1 ... asg_info_n\" node_idle_timeout [toleration] | Service error. Contact Informatica Global Customer Support. |
Usage: $0 ELB_NAME | Service error. Contact Informatica Global Customer Support. |
Usage: $0 proxyHost proxyPort proxyUser | Service error. Contact Informatica Global Customer Support. |
Usage: $0 ec2_instance_type_1 ... ec2_instance_type_n | Service error. Contact Informatica Global Customer Support. |
Usage: $0 PROXY_HOST PROXY_PORT PROXY_USER | Service error. Contact Informatica Global Customer Support. |
Failed to copy file <<NAME>> to <<IP>> | Check and correct the errors that appeared before this message. |
Giving up after <<COUNT>> retries | Check and correct the errors that appeared before this message. |
Failed to get worker info | Check and correct the errors that appeared before this message. |
Failed to deploy cluster components | Check and correct the errors that appeared before this message. |
Failed to process tags | Check the tags in the advanced properties on the Environment Configuration tab in Administrator. |
Failed to update cluster config dateDeployed field for RTE <<ID>>, Response Code: <<CODE>> | Check and correct the errors that appeared before this message. |
Failed to update clusterConfig dateDeployed field. | Check and correct the errors that appeared before this message. |
Failed to get cluster id for RTE <<ID>>, Response Code: <<CODE>> | Check and correct the errors that appeared before this message. |
Failed to get cluster config for RTE <<ID>>, Response Code: <<CODE>> | Check and correct the errors that appeared before this message. |
Failed to get cluster config | Check and correct the errors that appeared before this message. |
Failed to get CspRegionInstanceInfo for Region: <<NAME>>, Response Code: <<CODE>> | Check and correct the errors that appeared before this message. |
Failed to get token from repository <<NAME>> | Ensure that you haven't exceeded your quota of 20 repository tokens. |
Failed to execute `kubectl get namespace \| grep -q idmc-system` E0715 22:33:37.559917 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused" The connection to the server localhost:8080 was refused - did you specify the right host or port? | An error occurred during installation. You can ignore this message. |
Instance count mismatch after deleting an EC2 instance | After an EC2 instance is deleted from AWS, it might continue to have the status Up and Running in Informatica Intelligent Cloud Services, so there's a discrepancy in the reported instance count. |
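
For the `kubectl get namespace` message in the preceding table, the connection refused error on localhost:8080 means that kubectl ran before a kubeconfig was available, which is why the message can be ignored during installation. If you want to repeat the check manually on the master node, the following sketch assumes that the cluster's admin kubeconfig is at the standard kubeadm location; the path is an assumption, so adjust it for your environment.

```bash
# The kubeconfig path is an assumption; point KUBECONFIG at the cluster's admin kubeconfig.
export KUBECONFIG=/etc/kubernetes/admin.conf

# Prints a confirmation if the idmc-system namespace exists.
kubectl get namespace | grep -q idmc-system && echo "idmc-system namespace found"
```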