CreateConnection

Defines a connection and the connection options.
To list connection options for an existing connection, run infacmd isp ListConnectionOptions.
The infacmd isp CreateConnection command uses the following syntax:
CreateConnection

<-DomainName|-dn> domain_name

<-UserName|-un> user_name

<-Password|-pd> password

[<-SecurityDomain|-sdn> security_domain]

[<-ResilienceTimeout|-re> timeout_period_in_seconds]

<-ConnectionName|-cn> connection_name

[<-ConnectionId|-cid> connection_id]

<-ConnectionType|-ct> connection_type

[<-ConnectionUserName|-cun> connection_user_name]

[<-ConnectionPassword|-cpd> connection_password]

[<-VendorId|-vid> vendor_id]

[-o options] (name-value pairs separated by space)
The following table describes infacmd isp CreateConnection options and arguments:
Option
Argument
Description
-DomainName
-dn
domain_name
Required. Name of the Informatica domain. You can set the domain name with the -dn option or the environment variable INFA_DEFAULT_DOMAIN. If you set a domain name with both methods, the -dn option takes precedence.
-UserName
-un
user_name
Required if the domain uses Native or LDAP authentication. User name to connect to the domain. You can set the user name with the -un option or the environment variable INFA_DEFAULT_DOMAIN_USER. If you set a user name with both methods, the -un option takes precedence.
Optional if the domain uses Kerberos authentication. To run the command with single sign-on, do not set the user name. If you set the user name, the command runs without single sign-on.
-Password
-pd
password
Required if you specify the user name. Password for the user name. The password is case sensitive. You can set a password with the -pd option or the environment variable INFA_DEFAULT_DOMAIN_PASSWORD. If you set a password with both methods, the password set with the -pd option takes precedence.
-SecurityDomain
-sdn
security_domain
Required if the domain uses LDAP authentication. Optional if the domain uses native authentication or Kerberos authentication. Name of the security domain to which the domain user belongs. You can set a security domain with the -sdn option or the environment variable INFA_DEFAULT_SECURITY_DOMAIN. If you set a security domain name with both methods, the -sdn option takes precedence. The security domain name is case sensitive.
If the domain uses native or LDAP authentication, the default is Native. If the domain uses Kerberos authentication, the default is the LDAP security domain created during installation. The name of the security domain is the same as the user realm specified during installation.
-ResilienceTimeout
-re
timeout_period_in_seconds
Optional. Amount of time in seconds that infacmd attempts to establish or re-establish a connection to the domain. If you omit this option, infacmd uses the timeout value specified in the INFA_CLIENT_RESILIENCE_TIMEOUT environment variable. If no value is specified in the environment variable, the default of 180 seconds is used.
-ConnectionName
-cn
connection_name
Name of the connection. The name is not case sensitive and must be unique within the domain. It cannot exceed 128 characters, contain spaces, or contain the following special characters:
~ ` ! $ % ^ & * ( ) - + = { [ } ] | \ : ; " ' < , > . ? /
-ConnectionId
-cid
connection_id
String that the Data Integration Service uses to identify the connection. The ID is not case sensitive. It must be 255 characters or less and must be unique in the domain. You cannot change this property after you create the connection. Default value is the connection name.
-ConnectionType
-ct
connection_type
Required. Type of connection. Use one of the following connection types:
  • - ADABAS
  • - ADLSGEN1 (Microsoft Azure Data Lake Storage Gen1)
  • - ADLSGEN2 (Microsoft Azure Data Lake Storage Gen2)
  • - AMAZONKINESIS
  • - AMAZONREDSHIFT
  • - AMAZONS3
  • - AZUREBLOB (Microsoft Azure Blob Storage)
  • - BIGQUERY (Google BigQuery)
  • - BLOCKCHAIN
  • - CASSANDRA
  • - ConfluentKafka
  • - DATABRICKS
  • - DATASIFT
  • - DB2
  • - DB2I
  • - DB2Z
  • - FACEBOOK
  • - GreenplumPT
  • - GOOGLEANALYTICS
  • - GOOGLESTORAGEV2
  • - HADOOP
  • - HBASE
  • - HDFS
  • - HIVE
  • - IBMDB2
  • - IMS
  • - JDBC
  • - JDBCV2
  • - JDEDWARDSENTERPRISEONE
  • - KAFKA
  • - LDAP
  • - LINKEDIN
  • - MAPR-DB
  • - Microsoft Azure SQL Data Warehouse
  • - MSDYNAMICS
  • - NETEZZA
  • - ODATA
  • - ODBC
  • - ORACLE
  • - SALESFORCE
  • - SFMC (Salesforce Marketing Cloud)
  • - SAPAPPLICATIONS
  • - SEQ
  • - SFDC
  • - SNOWFLAKE
  • - SPANNERGOOGLE (Google Cloud Spanner)
  • - SQLSERVER
  • - TABLEAU
  • - TABLEAU V3
  • - TERADATAPARALLELTRANSPORTER
  • - TWITTER
  • - TWITTERSTREAMING
  • - VSAM
  • - WEBCONTENT-KAPOWKATALYST
You can use the infacmd isp ListConnections command to view connection types.
-ConnectionUserName
-cun
connection_user_name
Required. Database user name.
-ConnectionPassword
-cpd
connection_password
Required. Password for the database user name. You can set a password with the -cpd option or the environment variable INFA_DEFAULT_CONNECTION_PASSWORD. If you set the password with both options, the -cpd option takes precedence.
If you are creating an ADABAS, DB2I, DB2Z, IMS, SEQ, or VSAM connection, you can enter a valid PowerExchange passphrase instead of a password. Passphrases for access to databases and data sets on z/OS can be from 9 to 128 characters in length. Passphrases for access to DB2 for i5/OS can be up to 31 characters in length. Passphrases can contain the following characters:
  • - Uppercase and lowercase letters
  • - The numbers 0 to 9
  • - Spaces
  • - The following special characters:
  • ’ - ; # \ , . / ! % & * ( ) _ + { } : @ | < > ?
    Note: The first character is an apostrophe.
Passphrases cannot include single quotation marks ('), double quotation marks ("), or currency symbols.
If a passphrase contains spaces, you must enclose it with double-quotation marks ("), for example, "This is an example passphrase". If a passphrase contains special characters, you must enclose it with triple double-quotation characters ("""), for example, """This passphrase contains special characters ! % & *.""". If a passphrase contains only alphanumeric characters without spaces, you can enter it without delimiters.
Note: On z/OS, a valid RACF passphrase can be up to 100 characters in length. PowerExchange truncates passphrases longer than 100 characters when passing them to RACF for validation.
To use passphrases, ensure that the PowerExchange Listener runs with a security setting of SECURITY=(1,N) or higher in the DBMOVER member. For more information, see "SECURITY Statement" in the PowerExchange Reference Manual.
To use passphrases for IMS connections, ensure that the following additional requirements are met:
  • - You must configure ODBA access to IMS as described in the PowerExchange Navigator User Guide.
  • - You must use IMS data maps that specify IMS ODBA as the access method. Do not use data maps that specify the DL/1 BATCH access method because this access method requires the use of netport jobs, which do not support passphrases.
  • - The IMS database must be online in the IMS control region to use ODBA access to IMS.
-VendorId
-vid
vendor_id
Optional. ID of the external partner who built the adapter.
-Options
-o
options
Required. Enter name-value pairs separated by spaces. The connection options are different for each connection type.
Use single quotes to escape an equal sign or space in a value.
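For example, the following sketch shows the general shape of the command with placeholder values. The -o option names vary by connection type, so run infacmd isp ListConnectionOptions against an existing connection to view them:
infacmd createConnection -dn <domain name> -un <domain user> -pd <domain password> -cn <connection name> -ct <connection type> -o "option_name1=value option_name2='value with spaces or = signs'"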

Adabas Connection Options

Use connection options to define an Adabas connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
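For example, a command like the following sketch (placeholder domain, connection, and node values) creates an Adabas connection using the options described in the table that follows:
infacmd createConnection -dn <domain name> -un <domain user> -pd <domain password> -cn <connection name> -ct ADABAS -cun <database user> -cpd <database password> -o "CodePage=ISO-8859-6 Location=<listener node name> ArraySize=25 WriteMode=CONFIRMWRITEON"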
The following table describes Adabas connection options:
Option
Description
CodePage
Required. Code page to read from or write to the database. Use the ISO code page name, such as ISO-8859-6. The code page name is not case sensitive.
ArraySize
Optional. Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 5000. Default is 25.
Compression
Optional. Compresses the data to decrease the amount of data Informatica applications write over the network. True or false. Default is false.
EncryptionLevel
Optional. Level of encryption. If you specify AES for the EncryptionType option, specify one of the following values to indicate the level of AES encryption:
  • - 1. Use a 128-bit encryption key.
  • - 2. Use a 192-bit encryption key.
  • - 3. Use a 256-bit encryption key.
Default is 1.
Note: If you specify None for encryption type, the Data Integration Service ignores the encryption level value.
EncryptionType
Optional. Controls whether to use encryption. Specify one of the following values:
  • - None
  • - AES
Default is None.
InterpretAsRows
Optional. If true, the pacing size value represents a number of rows. If false, the pacing size represents kilobytes. Default is false.
Location
Location of the PowerExchange Listener node that can connect to the database. The location is defined in the first parameter of the NODE statement in the PowerExchange dbmover.cfg configuration file.
OffLoadProcessing
Optional. Moves bulk data processing from the source machine to the Data Integration Service machine.
Enter one of the following values:
  • - Auto. The Data Integration Service determines whether to use offload processing.
  • - Yes. Use offload processing.
  • - No. Do not use offload processing.
Default is Auto.
PacingSize
Optional. Slows the data transfer rate in order to reduce bottlenecks. The lower the value, the greater the session performance. Minimum value is 0. Enter 0 for optimal performance. Default is 0.
WorkerThread
Optional. Number of threads that the Data Integration Service uses to process bulk data when offload processing is enabled. For optimal performance, this value should not exceed the number of available processors on the Data Integration Service machine. Valid values are 1 through 64. Default is 0, which disables multithreading.
WriteMode
Enter one of the following write modes:
  • - CONFIRMWRITEON. Sends data to the PowerExchange Listener and waits for a success/no success response before sending more data.
  • - CONFIRMWRITEOFF. Sends data to the PowerExchange Listener without waiting for a success/no success response. Use this option when the target table can be reloaded if an error occurs.
  • - ASYNCHRONOUSWITHFAULT. Sends data to the PowerExchange Listener asynchronously with the ability to detect errors.
Default is CONFIRMWRITEON.
EnableConnectionPool
Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. True or false. Default is false.
ConnectionPoolSize
Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15.
ConnectionPoolMaxIdleTime
Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. Default is 120.
ConnectionPoolMinConnections
Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0.

Amazon Kinesis Connection Options

Use connection options to define an Amazon Kinesis connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
For example, to create an Amazon Kinesis connection to Kinesis Streams on UNIX using cross-account IAM role, run the following command:
infacmd createConnection -dn <domain name> -un <domain user> -pd <domain password> -cn <connection name> -cid <connection id> -ct AMAZONKINESIS -o "AWS_ACCESS_KEY_ID=<access key id> AWS_SECRET_ACCESS_KEY=<secret access key> ConnectionTimeOut=10000 Region=<RegionName> ServiceType='Kinesis Streams' RoleArn=<ARN of IAM role> ExternalID=<External ID> AuthenticationType='Cross-account IAM Role'"
To create an Amazon Kinesis connection to Kinesis Firehose on UNIX using AWS credential profile, run the following command:
infacmd createConnection -dn <domain name> -un <domain user> -pd <domain password> -cn <connection name> -cid <connection id> -ct AMAZONKINESIS -o "AWS_ACCESS_KEY_ID=<access key id> AWS_SECRET_ACCESS_KEY=<secret access key> ConnectionTimeOut=10000 Region=<RegionName> ServiceType='Kinesis Firehose' Profilename=<AWS credential profile> AuthenticationType='AWS Credential Profile'"
To enter multiple options, separate options with spaces. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes the Amazon Kinesis connection options for the infacmd isp CreateConnection:
Property
Description
AWS_ACCESS_KEY_ID
The access key ID of the Amazon AWS user account.
AWS_SECRET_ACCESS_KEY
The secret access key for your Amazon AWS user account.
ConnectionTimeOut
Number of milliseconds that the Integration Service waits to establish a connection to the Kinesis Stream or Kinesis Firehose before it times out.
Region
Region where the endpoint for your service is available. You can select one of the following values:
  • - us-east-2. Indicates the US East (Ohio) region.
  • - us-east-1. Indicates the US East (N. Virginia) region.
  • - us-west-1. Indicates the US West (N. California) region.
  • - us-west-2. Indicates the US West (Oregon) region.
  • - ap-northeast-1. Indicates the Asia Pacific (Tokyo) region.
  • - ap-northeast-2. Indicates the Asia Pacific (Seoul) region.
  • - ap-northeast-3. Indicates the Asia Pacific (Osaka-Local) region.
  • - ap-south-1. Indicates the Asia Pacific (Mumbai) region.
  • - ap-southeast-1. Indicates the Asia Pacific (Singapore) region.
  • - ap-southeast-2. Indicates the Asia Pacific (Sydney) region.
  • - ca-central-1. Indicates the Canada (Central) region.
  • - cn-north-1. Indicates the China (Beijing) region.
  • - cn-northwest-1. Indicates the China (Ningxia) region.
  • - eu-central-1. Indicates the EU (Frankfurt) region.
  • - eu-west-1. Indicates the EU (Ireland) region.
  • - eu-west-2. Indicates the EU (London) region.
  • - eu-west-3. Indicates the EU (Paris) region.
  • - sa-east-1. Indicates the South America (São Paulo) region.
ServiceType
The type of Kinesis Service that the connection is associated with.
Select one of the following service types:
  • - Kinesis Firehose. Select this service to write to Kinesis Firehose Delivery Stream.
  • - Kinesis Streams. Select this service to read from Kinesis Streams.
Profilename
Required if you use the AWS credential profile authentication type. An AWS credential profile defined in the credentials file. A mapping accesses the AWS credentials through the profile name at run time.
If you do not provide an AWS credential profile name, the mapping uses the access key ID and secret access key that you specify when you create the connection.
RoleArn
Required if you use the cross-account IAM role authentication type. The Amazon Resource Name specifying the role of an IAM user.
ExternalID
Required if you use the cross-account IAM role authentication type and if the external ID is defined by the AWS account. The external ID for an IAM role is an additional restriction that you can use in an IAM role trust policy to designate who can assume the IAM role.
AuthenticationType
The type of authentication.
Select one of the following values:
  • - AWS Credential Profile
  • - Cross-account IAM Role
The default value is AWS Credential Profile.

Amazon Redshift Connection Options

Use connection options to define an Amazon Redshift connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
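For example, a sketch like the following creates an Amazon Redshift connection. The -o option names shown here are assumptions derived from the property names in the table that follows; verify them for your version with infacmd isp ListConnectionOptions:
infacmd createConnection -dn <domain name> -un <domain user> -pd <domain password> -cn <connection name> -ct AMAZONREDSHIFT -o "Username=<Redshift user> Password=<Redshift password> AccessKeyID=<access key ID> SecretAccessKey=<secret access key> JDBCURL='jdbc:redshift://<host>:<port>/<database>'"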
To enter multiple options, separate options with spaces. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes the mandatory Amazon Redshift connection options for the infacmd isp CreateConnection and UpdateConnection commands:
Property
Description
Username
User name of the Amazon Redshift account.
Password
Password for the Amazon Redshift account.
Access Key ID
Amazon S3 bucket access key ID.
Note: Required if you do not use AWS Identity and Access Management (IAM) authentication.
Secret Access Key
Amazon S3 bucket secret access key.
Note: Required if you do not use AWS Identity and Access Management (IAM) authentication.
Master Symmetric Key
Optional. Provide a 256-bit AES encryption key in the Base64 format when you enable client-side encryption. You can generate a key using a third-party tool.
If you specify a value, ensure that you specify the encryption type as client side encryption in the advanced target properties.
JDBC URL
Amazon Redshift connection URL.
Cluster Region
Optional. The AWS cluster region in which the bucket you want to access resides.
Select a cluster region if you choose to provide a custom JDBC URL that does not contain a cluster region name in the JDBC URL connection property.
If you specify a cluster region in both Cluster Region and JDBC URL connection properties, the Data Integration Service ignores the cluster region that you specify in the JDBC URL connection property.
To use the cluster region name that you specify in the JDBC URL connection property, select None as the cluster region in this property.
Select one of the following cluster regions:
  • - Asia Pacific (Mumbai)
  • - Asia Pacific (Seoul)
  • - Asia Pacific (Singapore)
  • - Asia Pacific (Sydney)
  • - Asia Pacific (Tokyo)
  • - AWS GovCloud (US)
  • - Canada (Central)
  • - China (Beijing)
  • - China (Ningxia)
  • - EU (Ireland)
  • - EU (Frankfurt)
  • - EU (London)
  • - EU (Paris)
  • - South America (Sao Paulo)
  • - US East (Ohio)
  • - US East (N. Virginia)
  • - US West (N. California)
  • - US West (Oregon)
Default is None.
You can only read data from or write data to the cluster regions supported by the AWS SDK used by PowerExchange for Amazon Redshift.
Customer Master Key ID
Optional. Specify the customer master key ID generated by AWS Key Management Service (AWS KMS) or the Amazon Resource Name (ARN) of your custom key for cross-account access. You must generate the customer master key corresponding to the region where the Amazon S3 bucket resides. You can specify any of the following values:
  • - Customer generated customer master key. Enables client-side or server-side encryption.
  • - Default customer master key. Enables client-side or server-side encryption. Only the administrator user of the account can use the default customer master key ID to enable client-side encryption.

Amazon S3 Connection Options

Use connection options to define an Amazon S3 connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
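For example, a sketch like the following creates an Amazon S3 connection. The -o option names shown here are assumptions based on the property names in the table that follows; verify them with infacmd isp ListConnectionOptions:
infacmd createConnection -dn <domain name> -un <domain user> -pd <domain password> -cn <connection name> -ct AMAZONS3 -o "AccessKey=<access key> SecretKey=<secret key> FolderPath='<bucket name>/<folder name>' RegionName='US East (N. Virginia)'"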
To enter multiple options, separate options with spaces. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes the mandatory Amazon S3 connection options for the infacmd isp CreateConnection and UpdateConnection commands:
Property
Description
Name
The name of the connection. The name is not case sensitive and must be unique within the domain. You can change this property after you create the connection. The name cannot exceed 128 characters, contain spaces, or contain the following special characters:
~ ` ! $ % ^ & * ( ) - + = { [ } ] | \ : ; " ' < , > . ? /
ID
String that the Data Integration Service uses to identify the connection. The ID is not case sensitive. It must be 255 characters or less and must be unique in the domain. You cannot change this property after you create the connection. Default value is the connection name.
Description
Optional. The description of the connection. The description cannot exceed 4,000 characters.
Location
The domain where you want to create the connection.
Type
The Amazon S3 connection type.
Access Key
Access key to access the Amazon S3 bucket. Provide the access key value based on the following authentication methods:
  • - Basic authentication: provide the actual access key value.
  • - IAM authentication: do not provide the access key value.
  • - Temporary security credentials via assume role: provide the access key of an IAM user with no permissions to access the Amazon S3 bucket.
Secret Key
Secret access key to access the Amazon S3 bucket.
The secret key is associated with the access key and uniquely identifies the account. Provide the secret key value based on the following authentication methods:
  • - Basic authentication: provide the actual secret key value.
  • - IAM authentication: do not provide the secret key value.
  • - Temporary security credentials via assume role: provide the secret key of an IAM user with no permissions to access the Amazon S3 bucket.
IAM Role ARN
The ARN of the IAM role assumed by the user to use the dynamically generated temporary security credentials.
Enter the value of this property if you want to use the temporary security credentials to access the AWS resources.
If you want to use the temporary security credentials with IAM authentication, do not provide the Access Key and Secret Key connection properties. If you want to use the temporary security credentials without IAM authentication, you must enter the value of the Access Key and Secret Key connection properties.
For more information about how to obtain the ARN of the IAM role, see the AWS documentation.
Folder Path
The complete path to Amazon S3 objects. The path must include the bucket name and any folder name.
Do not use a slash at the end of the folder path. For example, <bucket name>/<my folder name>.
Master Symmetric Key
Optional. Provide a 256-bit AES encryption key in the Base64 format when you enable client-side encryption. You can generate a master symmetric key using a third-party tool.
S3 Account Type
The type of the Amazon S3 account.
Select Amazon S3 Storage or S3 Compatible Storage.
Select the Amazon S3 storage option to use the Amazon S3 services. Select the S3 compatible storage option to specify the endpoint for a third-party storage provider such as Scality RING.
By default, Amazon S3 storage is selected.
REST Endpoint
The S3 storage endpoint.
Specify the S3 storage endpoint in HTTP/HTTPS format when you select the S3 compatible storage option. For example, http://s3.isv.scality.com.
Region Name
Select the AWS region in which the bucket you want to access resides.
Select one of the following regions:
  • - Asia Pacific (Mumbai)
  • - Asia Pacific (Seoul)
  • - Asia Pacific (Singapore)
  • - Asia Pacific (Sydney)
  • - Asia Pacific (Tokyo)
  • - AWS GovCloud (US)
  • - Canada (Central)
  • - China (Beijing)
  • - China (Hong Kong)
  • - China (Ningxia)
  • - EU (Ireland)
  • - EU (Frankfurt)
  • - EU (London)
  • - EU (Paris)
  • - South America (Sao Paulo)
  • - US East (Ohio)
  • - US East (N. Virginia)
  • - US West (N. California)
  • - US West (Oregon)
Default is US East (N. Virginia).
Not applicable for S3 compatible storage.
Customer Master Key ID
Optional. Specify the customer master key ID or alias name generated by AWS Key Management Service (AWS KMS) or the Amazon Resource Name (ARN) of your custom key for cross-account access. You must generate the customer master key for the same region where the Amazon S3 bucket resides.
You can specify any of the following values:
  • - Customer generated customer master key. Enables client-side or server-side encryption.
  • - Default customer master key. Enables client-side or server-side encryption. Only the administrator user of the account can use the default customer master key ID to enable client-side encryption.
Federated SSO IdP
SAML 2.0-enabled identity provider for the federated user single sign-on to use with the AWS account.
PowerExchange for Amazon S3 supports only the ADFS 3.0 identity provider.
Select None if you do not want to use federated user single sign-on.

Federated user single sign-on connection properties

Configure the following properties when you select ADFS 3.0 in Federated SSO IdP:
Property
Description
Federated User Name
User name of the federated user to access the AWS account through the identity provider.
Federated User Password
Password for the federated user to access the AWS account through the identity provider.
IdP SSO URL
Single sign-on URL of the identity provider for AWS.
SAML Identity Provider ARN
ARN of the SAML identity provider that the AWS administrator created to register the identity provider as a trusted provider.
Role ARN
ARN of the IAM role assumed by the federated user.

Blockchain Connection Options

Use connection options to define a blockchain connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
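For example, a command like the following sketch (hypothetical file path and API key) creates a blockchain connection using options from the table that follows:
infacmd createConnection -dn <domain name> -un <domain user> -pd <domain password> -cn <connection name> -ct BLOCKCHAIN -o "swaggerFilePath=/opt/infa/swagger/blockchain_api.json advancedProperties='X-API-KEY=<API key>'"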
To enter multiple options, separate options with spaces. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes blockchain connection options for infacmd isp CreateConnection and UpdateConnection commands:
Property
Description
swaggerFilePath
The absolute path of the swagger file that contains the REST API to communicate with the blockchain. The swagger file must be a JSON file that is stored on the Data Integration Service machine. If the swagger file is in a different file format, such as YAML, convert the file to JSON format.
authType*
Authentication method that the run-time engine uses to connect to the REST server. You can use none, basic, digest, or OAuth.
authUserID*
User name to authenticate to the REST server.
authPassword*
Password for the user name to authenticate to the REST server.
oAuthConsumerKey*
Required for the OAuth authentication type. Client key that is associated with the REST server.
oAuthConsumerSecret*
Required for the OAuth authentication type. Client password to connect to the REST server.
oAuthToken*
Required for the OAuth authentication type. Access token to connect to the REST server.
oAuthTokenSecret*
Required for the OAuth authentication type. Password associated with the OAuth token.
proxyType*
Type of proxy. You can use no proxy, platform proxy, or custom proxy.
proxyDetails*
Proxy configuration using the format <host>:<port>.
trustStoreFilePath*
The absolute path of the truststore file that contains the SSL certificate.
trustStorePassword*
Password for the truststore file.
keyStoreFilePath*
The absolute path of the keystore file that contains the keys and certificates required to establish a two-way secure connection with the REST server.
keyStorePassword*
Password for the keystore file.
advancedProperties
List of advanced properties to access an asset on the blockchain. Specify the advanced properties using name-value pairs that are separated by a semicolon.
You can use the following advanced properties:
  • - X-API-KEY. Required if you authenticate to the REST server using an API key.
The advanced properties that you configure in the connection override the values for the corresponding advanced properties in the blockchain data object. For example, if the connection and the data object both specify a base URL, the value in the connection overrides the value in the data object.
cookies
Required based on how the REST API is implemented. List of cookie properties to specify the cookie information that is passed to the REST server. Specify the properties using name-value pairs that are separated by a semicolon.
The cookie properties that you configure in the connection override the values for the corresponding cookie properties in the blockchain data object.
* The property is ignored. To use the functionality, configure the property as an advanced property and provide a name-value pair based on the property name in the swagger file.
For example, configure the following name-value pair to use basic authorization:
Authorization=Basic <credentials>

Cassandra Connection Options

Use connection options to define the Cassandra connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
For example,
./infacmd.sh createConnection -dn Domain_Adapters_1020_Uni -un Administrator -pd Administrator -cn Cassandra_test2 -ct CASSANDRA -cun cloud2 -cpd cloud2 -o HostName=invrlx7acdb01 DefaultKeyspace=cloud SQLIDENTIFIERCHARACTER='""(quotes)' SSLMODE=disabled AdditionalConnectionProperties='BinaryColumnLength=10000;DecimalColumnScale=19;EnableCaseSensitive=0;EnableNullInsert=1;EnablePaging=0;'
Separate multiple options with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes Cassandra connection options for infacmd isp CreateConnection and UpdateConnection commands:
Property
Description
HostName
Host name or IP address of the Cassandra server.
Port
Cassandra server port number. Default is 9042.
User Name
-cun
User name to access the Cassandra server.
Password
-cpd
Password corresponding to the user name to access the Cassandra server.
DefaultKeyspace
Name of the Cassandra keyspace to use by default.
SQLIDENTIFIERCHARACTER
Type of character that the database uses to enclose delimited identifiers in SQL or CQL queries. The available characters depend on the database type.
Specify None if the database uses regular identifiers. When the Data Integration Service generates SQL or CQL queries, the service does not place delimited characters around any identifiers.
Specify a character if the database uses delimited identifiers. When the Data Integration Service generates SQL or CQL queries, the service encloses delimited identifiers within this character.
SSLMODE
Not applicable for PowerExchange for Cassandra JDBC.
Enter disabled.
AdditionalConnectionProperties
Enter one or more JDBC connection parameters in the following format:
<param1>=<value>;<param2>=<value>;<param3>=<value>
PowerExchange for Cassandra JDBC supports the following JDBC connection parameters:
  • - BinaryColumnLength
  • - DecimalColumnScale
  • - EnableCaseSensitive
  • - EnableNullInsert
  • - EnablePaging
  • - RowsPerPage
  • - StringColumnLength
  • - VTTableNameSeparator

Confluent Kafka Connection Options

Use connection options to define a Confluent Kafka connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
For example, to create a Confluent Kafka connection on UNIX, run the following command:
sh infacmd.sh createConnection -dn <domain name> -un <domain user> -pd <domain password> -cn <connection name> -cid <connection id> -ct ConfluentKafka -o "kfkBrkList='<host1:port1>,<host2:port2>,<host3:port3>' kafkabrokerversion='<version>' schemaregistryurl='<schema registry URL>'"

Databricks Connection Options

Use connection options to define a Databricks connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
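For example, a command like the following sketch (placeholder cluster configuration and staging directory values) creates a Databricks connection using options from the table that follows:
infacmd createConnection -dn <domain name> -un <domain user> -pd <domain password> -cn <connection name> -ct DATABRICKS -o "clusterConfigID=<cluster configuration name> stagingDirectory=/<cluster staging directory>/DATABRICKS"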
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes Databricks connection options for the infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
connectionId
String that the Data Integration Service uses to identify the connection. The ID is not case sensitive. It must be 255 characters or less and must be unique in the domain. You cannot change this property after you create the connection. Default value is the connection name.
connectionType
Required. Type of connection is Databricks.
name
The name of the connection. The name is not case sensitive and must be unique within the domain. You can change this property after you create the connection. The name cannot exceed 128 characters, contain spaces, or contain the following special characters:
~ ` ! $ % ^ & * ( ) - + = { [ } ] | \ : ; " ' < , > . ? /
databricksExecutionParameterList
Advanced properties that are unique to the Databricks Spark engine.
To enter multiple properties, separate each name-value pair with the following text: &:.
Use Informatica advanced properties only at the request of Informatica Global Customer Support.
clusterConfigID
Name of the cluster configuration associated with the Databricks environment.
Required if you do not configure the cloud provisioning configuration.
provisionConnectionId
Name of the cloud provisioning configuration associated with a cloud platform such as Microsoft Azure.
Required if you do not configure the cluster configuration.
stagingDirectory
The directory where the Databricks Spark engine stages run-time files.
If you specify a directory that does not exist, the Data Integration Service creates it at run time.
If you do not provide a directory path, the run-time staging files are written to /<cluster staging directory>/DATABRICKS.

DataSift Connection Options

Use connection options to define a DataSift connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
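For example, a command like the following sketch (placeholder account values) creates a DataSift connection using the options from the table that follows:
infacmd createConnection -dn <domain name> -un <domain user> -pd <domain password> -cn <connection name> -ct DATASIFT -o "userName=<DataSift user name> apiKey=<API key>"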
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes DataSift connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
userName
DataSift username for the DataSift user account.
apiKey
API key. The Developer API key is displayed in the Dashboard or Settings page in the DataSift account.

DB2 for i5/OS Connection Options

Use DB2I connection options to define the DB2 for i5/OS connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
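For example, a command like the following sketch (placeholder values) creates a DB2 for i5/OS connection using options from the table that follows:
infacmd createConnection -dn <domain name> -un <domain user> -pd <domain password> -cn <connection name> -ct DB2I -cun <database user> -cpd <database password> -o "DatabaseName=<database instance name> CodePage=<code page> Location=<listener node name> IsolationLevel=CS"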
Separate multiple options with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes DB2 for i5/OS connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
DatabaseName
Database instance name.
EnvironmentSQL
Optional. SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database.
Note: Enclose special characters in double quotes.
CodePage
Required. Code page used to read from a source database or write to a target database or file.
ArraySize
Optional. Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 5000. Default is 25.
Compression
Optional. Compresses the data to decrease the amount of data to write over the network. Default is false.
EncryptionLevel
Optional. Level of encryption. If you specify AES for the EncryptionType option, specify one of the following values to indicate the level of AES encryption:
  • - 1. Use a 128-bit encryption key.
  • - 2. Use a 192-bit encryption key.
  • - 3. Use a 256-bit encryption key.
Default is 1.
Note: If you specify None for encryption type, the Data Integration Service ignores the encryption level value.
EncryptionType
Optional. Controls whether to use encryption. Specify one of the following values:
  • - None
  • - AES
Default is None.
InterpretAsRows
Optional. If true, the pacing size value represents a number of rows. If false, the pacing size represents kilobytes. Default is false.
Location
Location of the PowerExchange Listener node that can connect to the database. The location is defined in the first parameter of the NODE statement in the PowerExchange dbmover.cfg configuration file.
PacingSize
Optional. Amount of data the source system can pass to the PowerExchange Listener. Configure the pacing size if an external application, a database, or the Data Integration Service node is a bottleneck. The lower the value, the faster the performance.
Minimum value is 0. Enter 0 for maximum performance. Default is 0.
RejectFile
Optional. Enter the reject file name and path. Reject files contain rows that were not written to the database.
WriteMode
Enter one of the following write modes:
  • - CONFIRMWRITEON. Sends data to the PowerExchange Listener and waits for a success/no success response before sending more data.
  • - CONFIRMWRITEOFF. Sends data to the PowerExchange Listener without waiting for a success/no success response. Use this option when the target table can be reloaded if an error occurs.
  • - ASYNCHRONOUSWITHFAULT. Sends data to the PowerExchange Listener asynchronously with the ability to detect errors.
Default is CONFIRMWRITEON.
DatabaseFileOverrides
Specifies the i5/OS database file override. The format is:
from_file/to_library/to_file/to_member
Where:
  • - from_file is the file to be overridden
  • - to_library is the new library to use
  • - to_file is the file in the new library to use
  • - to_member is optional and is the member in the new library and file to use. *FIRST is used if nothing is specified.
You can specify up to 8 unique file overrides on a single connection. A single override applies to a single source or target. When you specify more than one file override, enclose the string of file overrides in double quotes and include a space between each file override.
IsolationLevel
Commit scope of the transaction. Select one of the following values:
  • - None
  • - CS. Cursor stability.
  • - RR. Repeatable Read.
  • - CHG. Change.
  • - ALL
Default is CS.
LibraryList
List of libraries that PowerExchange searches to qualify the table name for Select, Insert, Delete, or Update statements. PowerExchange searches the list if the table name is unqualified.
Separate libraries with commas.
EnableConnectionPool
Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. True or false. Default is true.
ConnectionPoolSize
Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances.
ConnectionPoolMaxIdleTime
Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances.
ConnectionPoolMinConnections
Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0.

DB2 for z/OS Connection Options

Use DB2Z connection options to define the DB2 for z/OS connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
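For example, a command like the following sketch (placeholder values) creates a DB2 for z/OS connection using options from the table that follows:
infacmd createConnection -dn <domain name> -un <domain user> -pd <domain password> -cn <connection name> -ct DB2Z -cun <database user> -cpd <database password> -o "DataAccessConnectString=<database name> CodePage=<code page> Location=<listener node name>"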
Separate multiple options with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes DB2Z connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
DataAccessConnectString
Connection string used to access data from the database, in the format <database name>.
EnvironmentSQL
Optional. SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database.
Note: Enclose special characters in double quotes.
CodePage
Required. Code page used to read from a source database or write to a target database or file.
ArraySize
Optional. Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 5000. Default is 25.
Compression
Optional. Compresses the data to decrease the amount of data to write over the network. Default is false.
CorrelationID
Optional. Label to apply to a DB2 task or query to allow DB2 for z/OS to account for the resource. Enter up to 8 bytes of alphanumeric characters.
EncryptionLevel
Optional. Level of encryption. If you specify AES for the EncryptionType option, specify one of the following values to indicate the level of AES encryption:
  • - 1. Use a 128-bit encryption key.
  • - 2. Use a 192-bit encryption key.
  • - 3. Use a 256-bit encryption key.
Default is 1.
Note: If you specify None for encryption type, the Data Integration Service ignores the encryption level value.
EncryptionType
Optional. Controls whether to use encryption. Specify one of the following values:
  • - None
  • - AES
Default is None.
InterpretAsRows
Optional. Represent pacing size as a number of rows. If false, the pacing size represents kilobytes. Default is false.
Location
Location of the PowerExchange Listener node that can connect to the database. The node is defined in the PowerExchange dbmover.cfg configuration file.
OffloadProcessing
Optional. Moves bulk data processing from the DB2 source to the Data Integration Service machine.
Enter one of the following values:
  • - Auto. The Data Integration Service determines whether to use offload processing.
  • - Yes. Use offload processing.
  • - No. Do not use offload processing.
Default is Auto.
PacingSize
Optional. Amount of data the source system can pass to the PowerExchange Listener. Configure the pacing size if an external application, a database, or the Data Integration Service node is a bottleneck. The lower the value, the faster the performance.
Minimum value is 0. Enter 0 for maximum performance. Default is 0.
RejectFile
Optional. Enter the reject file name and path. Reject files contain rows that were not written to the database.
WorkerThread
Optional. Number of threads that the Data Integration Service uses to process bulk data when offload processing is enabled. For optimal performance, this value should not exceed the number of available processors on the Data Integration Service machine. Valid values are 1 through 64. Default is 0, which disables multithreading.
WriteMode
Enter one of the following write modes:
  • - CONFIRMWRITEON. Sends data to the PowerExchange Listener and waits for a success/no success response before sending more data.
  • - CONFIRMWRITEOFF. Sends data to the PowerExchange Listener without waiting for a success/no success response. Use this option when the target table can be reloaded if an error occurs.
  • - ASYNCHRONOUSWITHFAULT. Sends data to the PowerExchange Listener asynchronously with the ability to detect errors.
Default is CONFIRMWRITEON.
EnableConnectionPool
Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. True or false. Default is true.
ConnectionPoolSize
Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances.
ConnectionPoolMaxIdleTime
Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances.
ConnectionPoolMinConnections
Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0.

Facebook Connection Options

Use connection options to define a Facebook connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
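For example, a command like the following sketch (placeholder OAuth values) creates a Facebook connection using the options from the table that follows:
infacmd createConnection -dn <domain name> -un <domain user> -pd <domain password> -cn <connection name> -ct FACEBOOK -o "ConsumerKey=<App ID> ConsumerSecret=<App Secret> AccessToken=<access token> Scope=<permissions>"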
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes Facebook connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
ConsumerKey
The App ID that you get when you create the application in Facebook. Facebook uses the key to identify the application.
ConsumerSecret
The App Secret that you get when you create the application in Facebook. Facebook uses the secret to establish ownership of the consumer key.
AccessToken
Access token that the OAuth Utility returns. Facebook uses this token instead of the user credentials to access the protected resources.
AccessSecret
Access secret is not required for a Facebook connection.
Scope
Permissions for the application. Enter the permissions you used to configure OAuth.

Greenplum Connection Options

Use connection options to define a Greenplum connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
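For example, a command like the following sketch (placeholder values) creates a Greenplum connection using options from the table that follows:
infacmd createConnection -dn <domain name> -un <domain user> -pd <domain password> -cn <connection name> -ct GreenplumPT -o "UserName=<database user> Password=<database password> driverName=com.pivotal.jdbc.GreenplumDriver connectionString='jdbc:pivotal:greenplum://<hostname>:<port>;DatabaseName=<database name>' hostName=<hostname> portNumber=5432 databaseName=<database name> enableSSL=false"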
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes the Greenplum connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
UserName
Required. User name with permissions to access the Greenplum database.
Password
Required. Password to connect to the Greenplum database.
driverName
Required. Name of the Greenplum JDBC driver.
For example: com.pivotal.jdbc.GreenplumDriver
For more information about the driver, see the Greenplum documentation.
connectionString
Required. Greenplum JDBC connection URL.
For example: jdbc:pivotal:greenplum://<hostname>:<port>;DatabaseName=<database_name>
For more information about the connection URL, see the Greenplum documentation.
hostName
Required. Host name or IP address of the Greenplum server.
portNumber
Optional. Greenplum server port number.
If you enter 0, the gpload utility reads from the environment variable $PGPORT.
Default is 5432.
databaseName
Required. Name of the database that you want to connect to.
enableSSL
Required. Set this option to true to establish secure communication between the gpload utility and the Greenplum server over SSL.
SSLCertificatePath
Required if you enable SSL. Path where the SSL certificates for the Greenplum server are stored.

Google Analytics Connection Options

Use connection options to define the Google Analytics connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
Separate multiple options with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
For example,
./infacmd.sh createconnection -dn Domain_Google -un Administrator -pd Administrator -cn GA_cmd -ct GOOGLEANALYTICS -o "SERVICEACCOUNTID=serviceaccount@api-project-12345.iam.gserviceaccount.com SERVICEACCOUNTKEY='---BEGIN PRIVATE KEY---\nabcd1234322dsa\n---END PRIVATE KEY----\n' PROJECTID=api-project-12333667"
The following table describes Google Analytics connection options for infacmd isp CreateConnection and UpdateConnection commands:
Property
Description
SERVICEACCOUNTID
Required. Specifies the client_email value present in the JSON file that you download after you create a service account.
SERVICEACCOUNTKEY
Required. Specifies the private_key value present in the JSON file that you download after you create a service account.

Google BigQuery Connection Options

Use connection options to define the Google BigQuery connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
Separate multiple options with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
For example,
./infacmd.sh createconnection -dn Domain_Adapters_1041_Uni -un Administrator -pd Administrator -cn GBQ_BDM -ct BIGQUERY -o "CLIENTEMAIL='ics-test@api-project-80697026669.iam.gserviceaccount.com' PRIVATEKEY='-----BEGIN PRIVATE KEY-----\nMIIgfdzhgy74587igu787tio9QEFAASCBKgwggSkAgEAAoIBAQCy+2Dbh\n-----END PRIVATE KEY-----\n' PROJECTID=api-project-86699686669 CONNECTORTYPE=Simple SCHEMALOCATION='gs://0_europe-west6_region' STORAGEPATH='gs://0_europe-west6_region' DATASETNAMEFORCUSTOMQUERY='europe_west6' REGIONID='europe-west6'"
The following table describes Google BigQuery connection options for infacmd isp CreateConnection and UpdateConnection commands:
Property
Description
CLIENTEMAIL
Required. Specifies the client_email value present in the JSON file that you download after you create a service account in Google BigQuery.
PRIVATEKEY
Required. Specifies the private_key value present in the JSON file that you download after you create a service account in Google BigQuery.
CONNECTORTYPE (Connection Mode)
Required. The connection mode that you want to use to read data from or write data to Google BigQuery.
Enter one of the following connection modes:
  • - Simple. Flattens each field within the Record data type field as a separate field in the mapping.
  • - Hybrid. Displays all the top-level fields in the Google BigQuery table including Record data type fields. PowerExchange for Google BigQuery displays the top-level Record data type field as a single field of the String data type in the mapping.
  • - Complex. Displays all the columns in the Google BigQuery table as a single field of the String data type in the mapping.
Default is Simple.
SCHEMALOCATION (Schema Definition File Path)
Required. Specifies a directory on the client machine where PowerExchange for Google BigQuery must create a JSON file with the sample schema of the Google BigQuery table. The JSON file name is the same as the Google BigQuery table name.
Alternatively, you can specify a storage path in Google Cloud Storage where PowerExchange for Google BigQuery must create a JSON file with the sample schema of the Google BigQuery table. You can download the JSON file from the specified storage path in Google Cloud Storage to a local machine.
PROJECTID
Required. Specifies the project_id value present in the JSON file that you download after you create a service account in Google BigQuery.
If you have created multiple projects with the same service account, enter the ID of the project that contains the dataset that you want to connect to.
STORAGEPATH
Required when you read or write large volumes of data.
Path in Google Cloud Storage where PowerExchange for Google BigQuery creates a local stage file to store the data temporarily.
You can either enter the bucket name or the bucket name and folder name.
For example, enter gs://<bucket_name> or gs://<bucket_name>/<folder_name>
REGIONID
The region name where the Google BigQuery dataset resides.
For example, if you want to connect to a Google BigQuery dataset that resides in the Las Vegas region, specify us-west4 as the Region ID.
Note: In the Storage Path connection property, ensure that you specify a bucket name or the bucket name and folder name that resides in the same region as the dataset in Google BigQuery.
For more information about the regions supported by Google BigQuery, see the following Google BigQuery documentation: https://cloud.google.com/bigquery/docs/locations

Google Cloud Spanner Connection Options

Use connection options to define the Google Cloud Spanner connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
Separate multiple options with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
For example,
./infacmd.sh createconnection -dn Domain_Google -un Administrator -pd Administrator -cn Spanner_cmd -ct SPANNERGOOGLE -o "CLIENTEMAIL=serviceaccount@api-project-12345.iam.gserviceaccount.com PRIVATEKEY='---BEGIN PRIVATE KEY---\nabcd1234322dsa\n---END PRIVATE KEY----\n' INSTANCEID=spanner-testing PROJECTID=api-project-12333667"
The following table describes Google Cloud Spanner connection options for infacmd isp CreateConnection and UpdateConnection commands:
Property
Description
CLIENTEMAIL
Required. Specifies the client_email value present in the JSON file that you download after you create a service account in Google Cloud Spanner.
PRIVATEKEY
Required. Specifies the private_key value present in the JSON file that you download after you create a service account in Google Cloud Spanner.
PROJECTID
Required. Specifies the project_id value present in the JSON file that you download after you create a service account in Google Cloud Spanner.
If you have created multiple projects with the same service account, enter the ID of the project that contains the dataset that you want to connect to.
INSTANCEID
Required. Name of the instance that you created in Google Cloud Spanner.

Google Cloud Storage Connection Options

Use connection options to define the Google Cloud Storage connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
Separate multiple options with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
For example,
./infacmd.sh createconnection -dn Domain_Google -un Administrator -pd Administrator -cn GCS_cmd -ct GOOGLESTORAGEV2 -o "CLIENTEMAIL=serviceaccount@api-project-12345.iam.gserviceaccount.com PRIVATEKEY='---BEGIN PRIVATE KEY---\nabcd1234322dsa\n---END PRIVATE KEY----\n' PROJECTID=api-project-12333667"
The following table describes Google Cloud Storage connection options for infacmd isp CreateConnection and UpdateConnection commands:
Property
Description
CLIENTEMAIL
Required. Specifies the client_email value present in the JSON file that you download after you create a service account.
PRIVATEKEY
Required. Specifies the private_key value present in the JSON file that you download after you create a service account.
PROJECTID
Required. Specifies the project_id value present in the JSON file that you download after you create a service account.
If you have created multiple projects with the same service account, enter the ID of the project that contains the bucket that you want to connect to.

Hadoop Connection Options

Use connection options to define a Hadoop connection.
Enter connection options in the following format:
... -o option_name='value' option_name='value' ...
To enter multiple options, separate them with a space.
To enter advanced properties, use the following format:
... -o engine_nameAdvancedProperties="'advanced.property.name=value'"
For example:
... -o blazeAdvancedProperties="'infrgrid.orchestrator.svc.sunset.time=3'"
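For example, a command like the following sketch (placeholder cluster configuration ID and user names) creates a Hadoop connection using options from the table that follows:
infacmd createConnection -dn <domain name> -un <domain user> -pd <domain password> -cn <connection name> -ct HADOOP -o "clusterConfigId=<cluster configuration ID> blazeStagingDirectory=/blaze/workdir impersonationUserName=<impersonation user>"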
The following table describes the Hadoop connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
connectionId
String that the Data Integration Service uses to identify the connection. The ID is not case sensitive. It must be 255 characters or less and must be unique in the domain. You cannot change this property after you create the connection. Default value is the connection name.
connectionType
Required. Type of connection is Hadoop.
name
The name of the connection. The name is not case sensitive and must be unique within the domain. You can change this property after you create the connection. The name cannot exceed 128 characters, contain spaces, or contain the following special characters:
~ ` ! $ % ^ & * ( ) - + = { [ } ] | \ : ; " ' < , > . ? /
blazeJobMonitorURL
The host name and port number for the Blaze Job Monitor.
Use the following format:
<hostname>:<port>
Where
  • - <hostname> is the host name or IP address of the Blaze Job Monitor server.
  • - <port> is the port on which the Blaze Job Monitor listens for remote procedure calls (RPC).
For example, enter: myhostname:9080
blazeYarnQueueName
The YARN scheduler queue name used by the Blaze engine that specifies available resources on a cluster. The name is case sensitive.
blazeAdvancedProperties
Advanced properties that are unique to the Blaze engine.
To enter multiple properties, separate each name-value pair with the following text: &:.
Use Informatica custom properties only at the request of Informatica Global Customer Support.
blazeMaxPort
The maximum value for the port number range for the Blaze engine.
Default value is 12600.
blazeMinPort
The minimum value for the port number range for the Blaze engine.
Default value is 12300.
blazeUserName
The owner of the Blaze service and Blaze service logs.
When the Hadoop cluster uses Kerberos authentication, the default user is the Data Integration Service SPN user. When the Hadoop cluster does not use Kerberos authentication and the Blaze user is not configured, the default user is the Data Integration Service user.
blazeStagingDirectory
The HDFS file path of the directory that the Blaze engine uses to store temporary files. Verify that the directory exists. The YARN user, Blaze engine user, and mapping impersonation user must have write permission on this directory.
Default is /blaze/workdir. If you clear this property, the staging files are written to the Hadoop staging directory /tmp/blaze_<user name>.
clusterConfigId
The cluster configuration ID associated with the Hadoop cluster. You must enter a configuration ID to set up a Hadoop connection.
hiveStagingDatabaseName
Namespace for Hive staging tables. Use the name default for tables that do not have a specified database name.
engineType
Execution engine to run HiveServer2 tasks on the Spark engine. Default is MRv2. You can choose MRv2 or Tez according to the engine type that the Hadoop distribution uses:
  • - Amazon EMR - Tez
  • - Azure HDI - Tez
  • - Cloudera CDH - MRv2
  • - Cloudera CDP - Tez
  • - Hortonworks HDP - Tez
  • - MapR - MRv2
environmentSQL
SQL commands to set the Hadoop environment. The Data Integration Service executes the environment SQL at the beginning of each Hive script generated in a Hive execution plan.
The following rules and guidelines apply to the usage of environment SQL:
  • - Use the environment SQL to specify Hive queries.
  • - Use the environment SQL to set the classpath for Hive user-defined functions and then use environment SQL or PreSQL to specify the Hive user-defined functions. You cannot use PreSQL in the data object properties to specify the classpath. If you use Hive user-defined functions, you must copy the .jar files to the following directory: <Informatica installation directory>/services/shared/hadoop/<Hadoop distribution name>/extras/hive-auxjars
  • - You can use environment SQL to define Hadoop or Hive parameters that you want to use in the PreSQL commands or in custom queries.
hadoopExecEnvExecutionParameterList
Custom properties that are unique to the Hadoop connection.
You can specify multiple properties.
Use the following format:
<property1>=<value>
To specify multiple properties, use &: as the property separator.
If more than one Hadoop connection is associated with the same cluster configuration, you can override configuration set property values.
Use Informatica custom properties only at the request of Informatica Global Customer Support.
hadoopRejDir
The remote directory where the Data Integration Service moves reject files when you run mappings.
Enable the reject directory using rejDirOnHadoop.
impersonationUserName
Required if the Hadoop cluster uses Kerberos authentication. Hadoop impersonation user. The user name that the Data Integration Service impersonates to run mappings in the Hadoop environment.
The Data Integration Service runs mappings based on the user that is configured. Refer to the following order to determine which user the Data Integration Service uses to run mappings:
  1. Operating system profile user. The mapping runs with the operating system profile user if the profile user is configured. If there is no operating system profile user, the mapping runs with the Hadoop impersonation user.
  2. Hadoop impersonation user. The mapping runs with the Hadoop impersonation user if the operating system profile user is not configured. If the Hadoop impersonation user is not configured, the Data Integration Service runs mappings with the Data Integration Service user.
  3. Data Integration Service user. The mapping runs with the Data Integration Service user if the operating system profile user and the Hadoop impersonation user are not configured.
hiveWarehouseDirectoryOnHDFS
Optional. The absolute HDFS file path of the default database for the warehouse that is local to the cluster.
If you do not configure the Hive warehouse directory, the Hive engine first tries to write to the directory specified in the cluster configuration property hive.metastore.warehouse.dir. If the cluster configuration does not have the property, the Hive engine writes to the default directory /user/hive/warehouse.
metastoreDatabaseDriver
Driver class name for the JDBC data store. For example, the following class name specifies a MySQL driver:
com.mysql.jdbc.Driver
You can get the value for the Metastore Database Driver from hive-site.xml. The Metastore Database Driver appears as the following property in hive-site.xml:
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
metastoreDatabasePassword
The password for the metastore user name.
You can get the value for the Metastore Database Password from hive-site.xml. The Metastore Database Password appears as the following property in hive-site.xml:
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>password</value>
</property>
metastoreDatabaseURI
The JDBC connection URI used to access the data store in a local metastore setup. Use the following connection URI:
jdbc:<datastore type>://<node name>:<port>/<database name>
where
  • - <node name> is the host name or IP address of the data store.
  • - <data store type> is the type of the data store.
  • - <port> is the port on which the data store listens for remote procedure calls (RPC).
  • - <database name> is the name of the database.
For example, the following URI specifies a local metastore that uses MySQL as a data store:
jdbc:mysql://hostname23:3306/metastore
You can get the value for the Metastore Database URI from hive-site.xml. The Metastore Database URI appears as the following property in hive-site.xml:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://MYHOST/metastore</value>
</property>
metastoreDatabaseUserName
The metastore database user name.
You can get the value for the Metastore Database User Name from hive-site.xml. The Metastore Database User Name appears as the following property in hive-site.xml:
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>hiveuser</value>
</property>
metastoreMode
Controls whether to connect to a remote metastore or a local metastore. By default, local is selected. For a local metastore, you must specify the Metastore Database URI, Metastore Database Driver, Username, and Password. For a remote metastore, you must specify only the Remote Metastore URI.
You can get the value for the Metastore Execution Mode from hive-site.xml. The Metastore Execution Mode appears as the following property in hive-site.xml:
<property>
<name>hive.metastore.local</name>
<value>true</value>
</property>
Note: The hive.metastore.local property is deprecated in hive-site.xml for Hive server versions 0.9 and above. If the hive.metastore.local property does not exist but the hive.metastore.uris property exists, and you know that the Hive server has started, you can set the connection to a remote metastore.
remoteMetastoreURI
The metastore URI used to access metadata in a remote metastore setup. For a remote metastore, you must specify the Thrift server details.
Use the following connection URI:
thrift://<hostname>:<port>
Where
  • - <hostname> is the host name or IP address of the Thrift metastore server.
  • - <port> is the port on which the Thrift server is listening.
For example, enter: thrift://myhostname:9083/
You can get the value for the Remote Metastore URI from hive-site.xml. The Remote Metastore URI appears as the following property in hive-site.xml:
<property>
<name>hive.metastore.uris</name>
<value>thrift://<n.n.n.n>:9083</value>
<description> IP address or fully-qualified domain name and port of the metastore host</description>
</property>
rejDirOnHadoop
Enables hadoopRejDir. Used to specify a location to move reject files when you run mappings.
If enabled, the Data Integration Service moves reject files to the HDFS location listed in hadoopRejDir.
By default, the Data Integration Service stores reject files based on the RejectDir system parameter.
sparkEventLogDir
Optional. The HDFS file path of the directory that the Spark engine uses to log events.
sparkAdvancedProperties
Advanced properties that are unique to the Spark engine.
To enter multiple properties, separate each name-value pair with the following text: &:.
Use Informatica custom properties only at the request of Informatica Global Customer Support.
sparkStagingDirectory
The HDFS file path of the directory that the Spark engine uses to store temporary files for running jobs. The YARN user, Data Integration Service user, and mapping impersonation user must have write permission on this directory.
By default, the temporary files are written to the Hadoop staging directory /tmp/spark_<user name>.
sparkYarnQueueName
The YARN scheduler queue name used by the Spark engine that specifies available resources on a cluster. The name is case sensitive.
stgDataCompressionCodecClass
Codec class name that enables data compression and improves performance on temporary staging tables. The codec class name corresponds to the codec type.
stgDataCompressionCodecType
Hadoop compression library for a compression codec class name.
You can choose None, Zlib, Gzip, Snappy, Bz2, LZO, or Custom.
Default is None.
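For example, to point a Hadoop connection at a remote Hive metastore and set environment SQL, you might append options such as the following sketch; the URI and the SQL statement are illustrative placeholders:
... -o metastoreMode=remote remoteMetastoreURI='thrift://myhostname:9083' environmentSQL='SET hive.exec.dynamic.partition=true;'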

HBase Connection Options

Use connection options to define an HBase connection. You can use an HBase connection to connect to an HBase table or a MapR-DB table.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
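For example, the following command is a sketch of an HBase connection for a MapR-DB table. The connection type value HBASE, the domain and user names, and the option values are illustrative placeholders:
./infacmd.sh createConnection -dn Domain_Example -un Administrator -pd Administrator -cn HBase_conn -ct HBASE -o "clusterConfigId=cc_example DATABASETYPE='MapR-DB' maprdbpath='/user/customers'"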
The following table describes the HBase connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
DATABASETYPE
Required when you create an HBase connection for a MapR-DB table. Set the value to MapR-DB. Default is HBase.
clusterConfigId
The cluster configuration ID associated with the Hadoop cluster. You must enter a configuration ID to set up an HBase connection.
maprdbpath
Required if you create an HBase connection to connect to a MapR-DB table.
Set the value to the database path that contains the MapR-DB table that you want to connect to. Enter a valid MapR cluster path. Enclose the value in single quotes.
When you create an HBase data object for MapR-DB, you can browse only tables that exist in the path that you specify in this option. You cannot access tables that are available in sub-directories in the specified path.
For example, if you specify the maprdbpath as /user/customers/, you can access the tables in the customers directory. However, if the customers directory contains a sub-directory named regions, you cannot access the tables in the following directory:
/user/customers/regions

HDFS Connection Options

Use connection options to define an HDFS connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
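For example, the following command is a sketch of an HDFS connection. The connection type value HADOOPFILESYSTEM, the domain and user names, and the option values are illustrative placeholders:
./infacmd.sh createConnection -dn Domain_Example -un Administrator -pd Administrator -cn HDFS_conn -ct HADOOPFILESYSTEM -o "userName=hdfsuser nameNodeURI='hdfs://namenode01:8020' clusterConfigId=cc_example"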
The following table describes the HDFS connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
userName
User name to access HDFS.
nameNodeURI
The URI to access the storage system.
You can find the value for fs.defaultFS in the core-site.xml configuration set of the cluster configuration.
clusterConfigId
The cluster configuration ID associated with the Hadoop cluster. You must enter a configuration ID to set up an HDFS connection.

Hive Connection Options

Use connection options to define a Hive connection.
Enter connection options in the following format:
... -o option_name='value' option_name='value' ...
To enter multiple options, separate them with a space.
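For example, the following command is a sketch of a Hive connection. The domain and user names and the option values are illustrative placeholders; the connection string format is described under connectString later in this section:
./infacmd.sh createConnection -dn Domain_Example -un Administrator -pd Administrator -cn Hive_conn -ct HIVE -o "clusterConfigId=cc_example connectString='jdbc:hive2://myhostname:10000/default'"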
The following table describes Hive connection options for infacmd isp CreateConnection and UpdateConnection commands that you configure when you want to use the Hive connection:
Option
Description
connectionType
Required. Type of connection is HIVE.
name
The name of the connection. The name is not case sensitive and must be unique within the domain. You can change this property after you create the connection. The name cannot exceed 128 characters, contain spaces, or contain the following special characters:
~ ` ! $ % ^ & * ( ) - + = { [ } ] | \ : ; " ' < , > . ? /
environmentSQL
SQL commands to set the Hadoop environment. In native environment type, the Data Integration Service executes the environment SQL each time it creates a connection to Hive metastore. If the Hive connection is used to run mappings in the Hadoop cluster, the Data Integration Service executes the environment SQL at the beginning of each Hive session.
The following rules and guidelines apply to the usage of environment SQL in both the connection modes:
  • - Use the environment SQL to specify Hive queries.
  • - Use the environment SQL to set the classpath for Hive user-defined functions and then use either environment SQL or PreSQL to specify the Hive user-defined functions. You cannot use PreSQL in the data object properties to specify the classpath. If you use Hive user-defined functions, you must copy the .jar files to the following directory:
  • <Informatica installation directory>/services/shared/hadoop/<Hadoop distribution name>/extras/hive-auxjars
  • - You can also use environment SQL to define Hadoop or Hive parameters that you intend to use in the PreSQL commands or in custom queries.
If the Hive connection is used to run mappings in the Hadoop cluster, only the environment SQL of the Hive connection is executed. The different environment SQL commands for the connections of the Hive source or target are not executed, even if the Hive sources and targets are on different clusters.
quoteChar
The type of character used to identify special characters and reserved SQL keywords, such as WHERE. The Data Integration Service places the selected character around special characters and reserved SQL keywords. The Data Integration Service also uses this character for the Support mixed-case identifiers property.
clusterConfigId
The cluster configuration ID associated with the Hadoop cluster. You must enter a configuration ID to set up a Hive connection.

Properties to Access Hive as Source or Target

The following table describes the mandatory options for infacmd isp CreateConnection and UpdateConnection commands that you configure when you want to use the Hive connection to access Hive data:
Property
Description
hiveJdbcDriverClassName
Name of the JDBC driver class.
metadataConnString
The JDBC connection URI used to access the metadata from the Hadoop server.
The connection string uses the following format:
jdbc:hive://<hostname>:<port>/<db>
Where
  • - hostname is the host name or IP address of the machine on which the Hive server is running.
  • - port is the port on which the Hive server is listening.
  • - db is the database to which you want to connect. If you do not provide the database details, the Data Integration Service uses the default database details.
To connect to HiveServer 2, use the connection string format that Apache Hive implements for that specific Hadoop Distribution. For more information about Apache Hive connection string formats, see the Apache Hive documentation.
If the Hadoop cluster uses SSL or TLS authentication, you must add ssl=true to the JDBC connection URI. For example: jdbc:hive2://<hostname>:<port>/<db>;ssl=true
If you use a self-signed certificate for SSL or TLS authentication, ensure that the certificate file is available on the client machine and the Data Integration Service machine. For more information, see the Informatica Big Data Management Cluster Integration Guide.
bypassHiveJDBCServer
JDBC driver mode. Enable this option to use the embedded JDBC driver (embedded mode).
To use the JDBC embedded mode, perform the following tasks:
  • - Verify that Hive client and Informatica Services are installed on the same machine.
  • - Configure the Hive connection properties to run mappings in the Hadoop cluster.
If you choose the non-embedded mode, you must configure the Data Access Connection String.
The JDBC embedded mode is preferred to the non-embedded mode.
sqlAuthorized
When you select the option to observe fine-grained SQL authentication in a Hive source, the mapping observes row and column-level restrictions on data access. If you do not select the option, the Blaze run-time engine ignores the restrictions, and results include restricted data.
Applicable to Hadoop clusters where Sentry or Ranger security modes are enabled.
connectString
The connection string used to access data from the Hadoop data store. The non-embedded JDBC mode connection string must be in the following format:
jdbc:hive://<hostname>:<port>/<db>
Where
  • - hostname is the host name or IP address of the machine on which the Hive server is running.
  • - port is the port on which the Hive server is listening. Default is 10000.
  • - db is the database to which you want to connect. If you do not provide the database details, the Data Integration Service uses the default database details.
To connect to HiveServer 2, use the connection string format that Apache Hive implements for that specific Hadoop Distribution. For more information about Apache Hive connection string formats, see the Apache Hive documentation.
If the Hadoop cluster uses SSL or TLS authentication, you must add ssl=true to the JDBC connection URI. For example: jdbc:hive2://<hostname>:<port>/<db>;ssl=true
If you use a self-signed certificate for SSL or TLS authentication, ensure that the certificate file is available on the client machine and the Data Integration Service machine. For more information, see the Informatica Big Data Management Cluster Integration Guide.

Properties to Run Mappings in the Hadoop Cluster

The following table describes the mandatory options for infacmd isp CreateConnection and UpdateConnection commands that you configure when you want to use the Hive connection to run Informatica mappings in the Hadoop cluster:
Property
Description
databaseName
Namespace for tables. Use the name default for tables that do not have a specified database name.
customProperties
Configures or overrides Hive or Hadoop cluster properties in the hive-site.xml configuration set on the machine on which the Data Integration Service runs. You can specify multiple properties.
Select Edit to specify the name and value for the property. The property appears in the following format:
<property1>=<value>
When you specify multiple properties, &: appears as the property separator.
The maximum length for the format is 1 MB.
If you enter a required property for a Hive connection, it overrides the property that you configure in the Advanced Hive/Hadoop Properties.
The Data Integration Service adds or sets these properties for each map-reduce job. You can verify these properties in the JobConf of each mapper and reducer job. Access the JobConf of each job from the Jobtracker URL under each map-reduce job.
The Data Integration Service writes messages for these properties to the Data Integration Service logs. The Data Integration Service must have the log tracing level set to log each row or have the log tracing level set to verbose initialization tracing.
For example, specify the following properties to control and limit the number of reducers to run a mapping job:
mapred.reduce.tasks=2&:hive.exec.reducers.max=10
stgDataCompressionCodecClass
Codec class name that enables data compression and improves performance on temporary staging tables. The codec class name corresponds to the codec type.
stgDataCompressionCodecType
Hadoop compression library for a compression codec class name.
You can choose None, Zlib, Gzip, Snappy, Bz2, LZO, or Custom.
Default is None.

IBM DB2 Connection Options

Use connection options to define the IBM DB2 connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
Separate multiple options with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
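For example, the following command is a sketch of an IBM DB2 connection. The connection type value DB2, the domain and user names, and the option values, including the code page, are illustrative placeholders:
./infacmd.sh createConnection -dn Domain_Example -un Administrator -pd Administrator -cn DB2_conn -ct DB2 -o "DataAccessConnectString=mydb MetadataAccessConnectString='jdbc:informatica:db2://host01:50000;DatabaseName=mydb' CodePage=UTF-8"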
The following table describes IBM DB2 connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
PassThruEnabled
Optional. Enables pass-through security for the connection. When you enable pass-through security for a connection, the domain uses the client user name and password to log into the corresponding database, instead of the credentials defined in the connection object.
MetadataAccessConnectString
Required. JDBC connection URL used to access metadata from the database.
jdbc:informatica:db2://<host name>:<port>;DatabaseName=<database name>
When you import a table from the Developer tool or Analyst tool, by default, all tables are displayed under the default schema name. To view tables under a specific schema instead of the default schema, you can specify the schema name from which you want to import the table. Include the ischemaname parameter in the URL to specify the schema name. For example, use the following syntax to import a table from a specific schema:
jdbc:informatica:db2://<host name>:<port>;DatabaseName=<database name>;ischemaname=<schema_name>
To search for a table in multiple schemas and import it, you can specify multiple schema names in the ischemaname parameter. The schema name is case sensitive. You cannot use special characters when you specify multiple schema names. Use the pipe (|) character to separate multiple schema names. For example, use the following syntax to search for a table in three schemas and import it:
jdbc:informatica:db2://<host name>:<port>;DatabaseName=<database name>;ischemaname=<schema_name1>|<schema_name2>|<schema_name3>
AdvancedJDBCSecurityOptions
Optional. Database parameters for metadata access to a secure database. Informatica treats the value of the AdvancedJDBCSecurityOptions field as sensitive data and encrypts the parameter string.
To connect to a secure database, include the following parameters:
  • - EncryptionMethod. Required. Indicates whether data is encrypted when transmitted over the network. This parameter must be set to SSL.
  • - ValidateServerCertificate. Optional. Indicates whether Informatica validates the certificate that is sent by the database server.
  • If this parameter is set to True, Informatica validates the certificate that is sent by the database server. If you specify the HostNameInCertificate parameter, Informatica also validates the host name in the certificate.
    If this parameter is set to false, Informatica does not validate the certificate that is sent by the database server. Informatica ignores any truststore information that you specify.
  • - HostNameInCertificate. Optional. Host name of the machine that hosts the secure database. If you specify a host name, Informatica validates the host name included in the connection string against the host name in the SSL certificate.
  • - TrustStore. Required. Path and file name of the truststore file that contains the SSL certificate for the database.
  • - TrustStorePassword. Required. Password for the truststore file for the secure database.
Note: For a complete list of the secure JDBC parameters, see the DataDirect JDBC documentation.
Informatica appends the secure JDBC parameters to the connection string. If you include the secure JDBC parameters directly in the connection string, do not enter any parameters in the AdvancedJDBCSecurityOptions field.
DataAccessConnectString
Connection string used to access data from the database.
Enter the connection string in the following format:
<database name>
CodePage
Required. Code page used to read from a source database or write to a target database.
EnvironmentSQL
Optional. SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database.
For example, ALTER SESSION SET CURRENT_SCHEMA=INFA_USR;
Note: Enclose special characters in double quotes.
TransactionSQL
Optional. SQL commands to execute before each transaction. The Data Integration Service executes the transaction SQL at the beginning of each transaction.
For example, SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Note: Enclose special characters in double quotes.
Tablespace
Optional. The tablespace name of the database.
QuoteChar
Optional. The character that the connection uses to quote identifiers.
The type of character used to identify special characters and reserved SQL keywords, such as WHERE. The Data Integration Service places the selected character around special characters and reserved SQL keywords. The Data Integration Service also uses this character for the EnableQuotes property. Default is 0.
EnableQuotes
Optional. Determines whether to enable quoted identifiers for this connection.
When enabled, the Data Integration Service places identifier characters around table, view, schema, synonym, and column names when generating and executing SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Valid values are True or False. Default is True.
EnableConnectionPool
Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. Valid values are True or False. Default is True.
ConnectionPoolSize
Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15.
ConnectionPoolMaxIdleTime
Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idletime when it does not exceed the minimum number of idle connection instances. Default is 120.
ConnectionPoolMinConnections
Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0.

IMS Connection Options

Use connection options to define an IMS connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
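For example, the following command is a sketch of an IMS connection. The connection type value IMS, the domain and user names, the node name, and the code page are illustrative placeholders:
./infacmd.sh createConnection -dn Domain_Example -un Administrator -pd Administrator -cn IMS_conn -ct IMS -o "CodePage=ISO-8859-6 Location=node1 WriteMode=CONFIRMWRITEON"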
The following table describes IMS connection options:
Option
Description
CodePage
Required. Code page used to read from or write to the database. Use the ISO code page name, such as ISO-8859-6. The code page name is not case sensitive.
ArraySize
Optional. Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 5000. Default is 25.
Compression
Optional. Compresses the data to decrease the amount of data Informatica applications write over the network. True or false. Default is false.
EncryptionLevel
Optional. Level of encryption. If you specify AES for the EncryptionType option, specify one of the following values to indicate the level of AES encryption:
  • - 1. Use a 128-bit encryption key.
  • - 2. Use a 192-bit encryption key.
  • - 3. Use a 256-bit encryption key.
Default is 1.
Note: If you specify None for encryption type, the Data Integration Service ignores the encryption level value.
EncryptionType
Optional. Controls whether to use encryption. Specify one of the following values:
  • - None
  • - AES
Default is None.
InterpretAsRows
Optional. If true, the pacing size value represents a number of rows. If false, the pacing size represents kilobytes. Default is false.
Location
Location of the PowerExchange Listener node that can connect to the database. The location is defined in the first parameter of the NODE statement in the PowerExchange dbmover.cfg configuration file.
OffLoadProcessing
Optional. Moves bulk data processing from the source machine to the Data Integration Service machine.
Enter one of the following values:
  • - Auto. The Data Integration Service determines whether to use offload processing.
  • - Yes. Use offload processing.
  • - No. Do not use offload processing.
Default is Auto.
PacingSize
Optional. Slows the data transfer rate to reduce bottlenecks. The lower the value, the faster the session performance. Minimum value is 0. Enter 0 for optimal performance. Default is 0.
WorkerThread
Optional. Number of threads that the Data Integration Service uses to process bulk data when offload processing is enabled. For optimal performance, this value should not exceed the number of available processors on the Data Integration Service machine. Valid values are 1 through 64. Default is 0, which disables multithreading.
WriteMode
Enter one of the following write modes:
  • - CONFIRMWRITEON. Sends data to the PowerExchange Listener and waits for a success/no success response before sending more data.
  • - CONFIRMWRITEOFF. Sends data to the PowerExchange Listener without waiting for a success/no success response. Use this option when the target table can be reloaded if an error occurs.
  • - ASYNCHRONOUSWITHFAULTTOLERANCE. Sends data to the PowerExchange Listener asynchronously with the ability to detect errors.
Default is CONFIRMWRITEON.
EnableConnectionPool
Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. True or false. Default is false.
ConnectionPoolSize
Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15.
ConnectionPoolMaxIdleTime
Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idletime when it does not exceed the minimum number of idle connection instances. Default is 120.
ConnectionPoolMinConnections
Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0.

JDBC Connection Options

Use connection options to define a JDBC connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate options with spaces. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
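For example, the following command is a sketch of a JDBC connection to Oracle that uses the DataDirect driver class and URL formats from the table below. The connection type value JDBC, the domain and user names, and the host, port, and SID are illustrative placeholders:
./infacmd.sh createConnection -dn Domain_Example -un Administrator -pd Administrator -cn JDBC_conn -ct JDBC -o "JDBCDriverClassName=com.informatica.jdbc.oracle.OracleDriver MetadataConnString='jdbc:informatica:oracle://host01:1521;SID=orcl'"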
The following table describes JDBC connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
JDBCDriverClassName
The Java class that you use to connect to the database.
The following list provides the driver class name that you can enter for the applicable database type:
  • - DataDirect JDBC driver class name for Oracle:
  • com.informatica.jdbc.oracle.OracleDriver
  • - DataDirect JDBC driver class name for IBM DB2:
  • com.informatica.jdbc.db2.DB2Driver
  • - DataDirect JDBC driver class name for Microsoft SQL Server:
  • com.informatica.jdbc.sqlserver.SQLServerDriver
  • - DataDirect JDBC driver class name for Sybase ASE:
  • com.informatica.jdbc.sybase.SybaseDriver
  • - DataDirect JDBC driver class name for Informix:
  • com.informatica.jdbc.informix.InformixDriver
  • - DataDirect JDBC driver class name for MySQL:
  • com.informatica.jdbc.mysql.MySQLDriver
For more information about which driver class to use with specific databases, see the vendor documentation.
MetadataConnString
The URL that you use to connect to the database.
The following list provides the connection string that you can enter for the applicable database type:
  • - DataDirect JDBC driver for Oracle:
  • jdbc:informatica:oracle://<hostname>:<port>;SID=<sid>
  • - DataDirect JDBC driver for IBM DB2:
  • jdbc:informatica:db2://<hostname>:<port>;DatabaseName=<database name>
  • - DataDirect JDBC driver for Microsoft SQL Server:
  • jdbc:informatica:sqlserver://<host>:<port>;DatabaseName=<database name>
  • - DataDirect JDBC driver for Sybase ASE:
  • jdbc:informatica:sybase://<host>:<port>;DatabaseName=<database name>
  • - DataDirect JDBC driver for Informix:
  • jdbc:informatica:informix://<host>:<port>;informixServer=<informix server name>;databaseName=<dbName>
  • - DataDirect JDBC driver for MySQL:
  • jdbc:informatica:mysql://<host>:<port>;DatabaseName=<database name>
For more information about the connection string to use for specific databases, see the vendor documentation for the URL syntax.
EnvironmentSQL
Optional. SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database.
For example, ALTER SESSION SET CURRENT_SCHEMA=INFA_USR;
Note: Enclose special characters in double quotation marks.
TransactionSQL
Optional. SQL commands to execute before each transaction. The Data Integration Service executes the transaction SQL at the beginning of each transaction.
For example, SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Note: Enclose special characters in double quotes.
QuoteChar
Optional. The character that the connection uses to quote identifiers.
The type of character used to identify special characters and reserved SQL keywords, such as WHERE. The Data Integration Service places the selected character around special characters and reserved SQL keywords. The Data Integration Service also uses this character for the EnableQuotes property. Default is DOUBLE_QUOTE.
EnableQuotes
Optional. Determines whether to enable quoted identifiers for this connection.
When enabled, the Data Integration Service places identifier characters around table, view, schema, synonym, and column names when generating and executing SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Valid values are True or False. Default is True.
hadoopConnector
Required if you want to enable Sqoop connectivity for the data object that uses the JDBC connection. The Data Integration Service runs the mapping in the Hadoop run-time environment through Sqoop.
You can configure Sqoop connectivity for relational data objects, customized data objects, and logical data objects that are based on a JDBC-compliant database.
Set the value to SQOOP_146 to enable Sqoop connectivity.
hadoopConnectorArgs
Optional. Enter the arguments that Sqoop must use to connect to the database. Enclose the Sqoop arguments within single quotes. Separate multiple arguments with a space.
For example, hadoopConnectorArgs='--<Sqoop argument 1> --<Sqoop argument 2>'
To read data from or write data to Teradata through Teradata Connector for Hadoop (TDCH) specialized connectors for Sqoop, define the TDCH connection factory class in the hadoopConnectorArgs argument. The connection factory class varies based on the TDCH Sqoop Connector that you want to use.
  • - To use Cloudera Connector Powered by Teradata, configure the hadoopConnectorArgs argument as follows:
  • hadoopConnectorArgs='-Dsqoop.connection.factories=com.cloudera.connector.teradata.TeradataManagerFactory'
  • - To use Hortonworks Connector for Teradata (powered by the Teradata Connector for Hadoop), configure the hadoopConnectorArgs argument as follows:
  • hadoopConnectorArgs='-Dsqoop.connection.factories=org.apache.sqoop.teradata.TeradataManagerFactory'
If you do not enter Sqoop arguments, the Data Integration Service constructs the Sqoop command based on the JDBC connection properties.

JDBC V2 Connection Options

Use connection options to define a JDBC V2 connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
For example,
./infacmd.sh createConnection -dn Domain_irl63ppd06 -un Administrator -pd SAM123 -cn PostgreSQL -cid PostgreSQL -ct JDBC_V2 -cun adaptersX1 -cpd adaptersX1 -o "connectionstring='jdbc:postgresql://aurorapostgres-appsdk.c5wj9sntucrg.ap-south-1.rds.amazonaws.com:5432/JDBCV2' jdbcdriverclassname='org.postgresql.Driver' schemaname='public' subtype='PostgreSQL' supportmixedcaseidentifier='true' quoteChar='(quotes)'"
To enter multiple options, separate options with spaces. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes JDBC V2 connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
username
The database user name.
User name with permissions to access the Azure SQL Database, PostgreSQL, or another relational database.
password
The password for the database user name.
schemaname
The schema name to connect to in the database.
jdbcdriverclassname
Name of the JDBC driver class.
The following list provides the driver class name that you can enter for the applicable database type:
  • - JDBC driver class name for Azure SQL Database:
  • com.microsoft.sqlserver.jdbc.SQLServerDriver
  • - JDBC driver class name for Aurora PostgreSQL:
  • org.postgresql.Driver
For more information about which driver class to use with specific databases, see the vendor documentation.
connectionstring
Connection string to connect to the database.
Use the following connection string:
jdbc:<subprotocol>:<subname>
The following list provides sample connection strings that you can enter for the applicable database type:
  • - Connection string for Azure SQL Database JDBC driver:
  • jdbc:sqlserver://<host>:<port>;databaseName=<database name>
  • - Connection string for Aurora PostgreSQL JDBC driver:
  • jdbc:postgresql://<host>:<port>[/dbname]
For more information about the connection string to use with specific drivers, see the vendor documentation.
subtype
The database type to which you want to connect.
You can select from the following database types to connect:
  • - Azure SQL Database. Connects to Azure SQL Database.
  • - PostgreSQL. Connects to Aurora PostgreSQL database.
  • - Others. Connects to any database that supports the Type 4 JDBC driver.
supportmixedcaseidentifier
Enable if the database uses case-sensitive identifiers. When enabled, the Data Integration Service encloses all identifiers within the character selected for the SQL Identifier Character property.
For example, PostgreSQL database supports mixed-cased characters. You must enable this property to connect to the PostgreSQL database.
When the SQL Identifier Character property is set to none, the Support Mixed-case Identifiers property is disabled.
quoteChar
Type of character that the database uses to enclose delimited identifiers in SQL queries. The available characters depend on the database type.
Select (None) if the database uses regular identifiers. When the Data Integration Service generates SQL queries, the service does not place delimited characters around any identifiers.
Select a character if the database uses delimited identifiers. When the Data Integration Service generates SQL queries, the service encloses delimited identifiers within this character.

JD Edwards EnterpriseOne Connection Options

Use connection options to define a JD Edwards EnterpriseOne connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
For example,
infacmd.bat createConnection -dn DomainName -un Domain_UserName -pd Domain_Pwd -cn conName -cid
conID -ct JDEE1 -o userName=JDEE1_DB_UserName password=JDEE1_DB_Pwd enterpriseServer=JDE_ServerName
enterprisePort=JDE_DB_Port environment=JDE_Environment role=role JDBCUserName=JDEE1_DB_UserName
JDBCPassword=JDEE1_DB_Pwd JDBCCONNECTIONSTRING='DB connection string' JDBCDriverClassName='jdbc driver classname'
To enter multiple options, separate them with a space. To enter a value that contains a space or other nonalphanumeric character, enclose the value in quotation marks.
The following table describes the mandatory JD Edwards EnterpriseOne connection options for the infacmd isp CreateConnection and UpdateConnection commands:
Property
Description
userName
JD Edwards EnterpriseOne user name.
password
Password for the JD Edwards EnterpriseOne user name. The password is case sensitive.
enterpriseServer
The host name of the JD Edwards EnterpriseOne server that you want to access.
enterprisePort
The port number to access the JD Edwards EnterpriseOne server.
environment
Name of the JD Edwards EnterpriseOne environment you want to connect to.
role
Role of the JD Edwards EnterpriseOne user.

Kafka Connection Options

Use connection options to define a Kafka connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
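For example, the following command is a sketch of a Kafka connection. The domain and user names and the broker and ZooKeeper addresses are illustrative placeholders:
./infacmd.sh createConnection -dn Domain_Example -un Administrator -pd Administrator -cn Kafka_conn -ct KAFKA -o "kfkBrkList='broker01:9092,broker02:9092' zkHostPortList='zkhost01:2181' kafkaBrokerVersion='0.10.1.x-2.0.0'"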
The following table describes Kafka connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
connectionId
String that the Data Integration Service uses to identify the connection. The ID is not case sensitive. It must be 255 characters or less and must be unique in the domain. You cannot change this property after you create the connection. Default value is the connection name.
connectionType
Required. Type of connection is KAFKA.
name
Required. The name of the connection. The name is not case sensitive and must be unique within the domain. You can change this property after you create the connection. The name cannot exceed 128 characters, contain spaces, or contain the following special characters: ~ ` ! $ % ^ & * ( ) - + = { [ } ] | \ : ; " ' < , > . ? /
connRetryTimeout
Number of seconds the Integration Service attempts to reconnect to the Kafka broker. If the source or target is not available for the time you specify, the mapping execution stops to avoid any data loss.
kafkaBrokerVersion
The version of the Kafka messaging broker. You can enter one of the following values:
  • - 0.10.1.x-2.0.0
kfkBrkList
The IP address and port combinations of the Kafka messaging system broker list. The IP address and port combination has the following format:
<IP Address>:<port>
You can enter multiple comma-separated IP address and port combinations.
zkHostPortList
The IP address and port combination of Apache ZooKeeper, which maintains the configuration of the Kafka messaging broker. The IP address and port combination has the following format:
<IP Address>:<port>
You can enter multiple comma-separated IP address and port combinations.

Kudu Connection Options

Use connection options to define a Kudu connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes the Kudu connection options for infacmd isp CreateConnection and UpdateConnection commands:
Property
Description
Name
The name of the connection. The name is not case sensitive and must be unique within the domain. You can change this property after you create the connection. The name cannot exceed 128 characters, contain spaces, or contain the following special characters: ~ ` ! $ % ^ & * ( ) - + = { [ } ] | \ : ; " ' < , > . ? /
ID
String that the Data Integration Service uses to identify the connection. The ID is not case sensitive. It must be 255 characters or less and must be unique in the domain. You cannot change this property after you create the connection.
Default value is the connection name.
Description
The description of the connection. The description cannot exceed 4,000 characters.
Location
The domain where you want to create the connection.
Type
The connection type. Select Kudu.

LDAP Connection Options

Use connection options to define an LDAP connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
For example,
infacmd.sh createConnection -dn DomainName -un Domain_UserName -pd Domain_Pwd -cn conname -cid conname -ct ldap -o
hostName=hostIPAddress port=port_number userName=ldapUserName password=LDAPPWD
To enter multiple options, separate them with a space. To enter a value that contains a space or other nonalphanumeric character, enclose the value in quotation marks.
The following table describes the mandatory LDAP connection options for the infacmd isp CreateConnection and UpdateConnection commands:
Property
Description
hostName
The host name of the LDAP directory server that you want to access.
port
The port number to access the LDAP directory server.
userName
LDAP user name.
password
Password for the LDAP user name. The password is case sensitive.

LinkedIn Connection Options

Use connection options to define a LinkedIn connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
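For example, the following command is a sketch of a LinkedIn connection. The connection type value LINKEDIN, the domain and user names, and the key, secret, and token values are illustrative placeholders:
./infacmd.sh createConnection -dn Domain_Example -un Administrator -pd Administrator -cn LinkedIn_conn -ct LINKEDIN -o "ConsumerKey=my_api_key ConsumerSecret=my_api_secret AccessToken=my_access_token AccessSecret=my_access_secret"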
The following table describes LinkedIn connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
ConsumerKey
The API key that you get when you create the application in LinkedIn. LinkedIn uses the key to identify the application.
ConsumerSecret
The Secret key that you get when you create the application in LinkedIn. LinkedIn uses the secret to establish ownership of the consumer key.
AccessToken
Access token that the OAuth Utility returns. The LinkedIn application uses this token instead of the user credentials to access the protected resources.
AccessSecret
Access secret that the OAuth Utility returns. The secret establishes ownership of a token.

MapR-DB Connection Options

Use connection options to define an HBase connection for MapR-DB.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or non-alphanumeric character, enclose the value in quotation marks.
The following table describes the HBase connection options for MapR-DB for the infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
DATABASETYPE
Required. Set the value to MapR-DB and enclose the value in single quotes.
clusterConfigId
The cluster configuration ID associated with the Hadoop cluster. You must enter a configuration ID to set up an HBase connection for MapR-DB.
maprdbpath
Required. Set the value to the database path that contains the MapR-DB table that you want to connect to. Enter a valid MapR cluster path. Enclose the value in single quotes.
When you create an HBase data object for MapR-DB, you can browse only tables that exist in the path that you specify in this option. You cannot access tables that are available in sub-directories in the specified path.
For example, if you specify the maprdbpath as /user/customers/, you can access the tables in the customers directory. However, if the customers directory contains a sub-directory named regions, you cannot access the tables in the following directory:
/user/customers/regions

Microsoft Azure Blob Storage Connection Options

Use connection options to define a Microsoft Azure Blob Storage Connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or non-alphanumeric character, enclose the value in quotation marks.
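For example, the following command is a sketch of a Microsoft Azure Blob Storage connection that uses Shared Key Authorization. The connection type value AZUREBLOB, the domain and user names, and the account values are illustrative placeholders; note that sharedaccesssignature is still defined as an empty value per the option description below:
./infacmd.sh createConnection -dn Domain_Example -un Administrator -pd Administrator -cn Blob_conn -ct AZUREBLOB -o "accountName=myaccount accountKey=my_access_key containerName=mycontainer sharedaccesssignature=' '"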
The following table describes the Microsoft Azure Blob Storage Connection options for the infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
accountName
Name of the Microsoft Azure Blob Storage account.
authenticationtype
Authorization type. You can select any of the following authorization mechanisms:
  • - Shared Key Authorization
  • - Shared Access Signatures
accountKey
Microsoft Azure Blob Storage access key.
sharedaccesssignature
Shared Access Signatures.
Note: Even if you do not want to use shared access permission to create a connection, define the option in the command line as follows:
sharedaccesssignature=' '
containerName
The root container or sub-folders with the absolute path.
endpointSuffix
Type of Microsoft Azure end-points. You can specify any of the following end-points:
  • - core.windows.net: Default
  • - core.usgovcloudapi.net: To select the US government Microsoft Azure end-points
  • - core.chinacloudapi.cn: Not applicable

Microsoft Azure Data Lake Storage Gen1 Connection Options

Use connection options to define a Microsoft Azure Data Lake Storage Gen1 Connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or non-alphanumeric character, enclose the value in quotation marks.
The following table describes the Microsoft Azure Data Lake Storage Gen1 Connection options for the infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
ADLSAccountName
Microsoft Azure Data Lake Storage Gen1 account name or the service name.
ClientId
The ID of your application to complete the OAuth Authentication in the Active Directory.
ClientSecret
The client secret key to complete the OAuth Authentication in the Active Directory.
Directory
Path of an existing directory under the given file system. The default is the root directory.
AuthEndpoint
The OAuth 2.0 token endpoint from where authentication based on the Client ID and Client Secret is completed.
For more information about creating a client ID and client secret, contact the Azure administrator or see Microsoft Azure Data Lake Storage Gen1 documentation.

Microsoft Azure Data Lake Storage Gen2 Connection Options

Use connection options to define a Microsoft Azure Data Lake Storage Gen2 Connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or non-alphanumeric character, enclose the value in quotation marks.
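For example, the following command is a sketch of a Microsoft Azure Data Lake Storage Gen2 connection. The connection type value ADLSGEN2, the domain and user names, and the account, ID, and secret values are illustrative placeholders:
./infacmd.sh createConnection -dn Domain_Example -un Administrator -pd Administrator -cn ADLSGen2_conn -ct ADLSGEN2 -o "accountName=myaccount clientID=my_client_id clientSecret=my_client_secret tenantID=my_tenant_id fileSystemName=myfilesystem directoryPath=/data"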
The following table describes the Microsoft Azure Data Lake Storage Gen2 Connection options for the infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
accountName
Microsoft Azure Data Lake Storage Gen2 account name or the service name.
clientID
The ID of your application to complete the OAuth Authentication in the Active Directory.
clientSecret
Client secret key to complete the OAuth Authentication in the Active Directory.
tenantID
Directory ID of the Azure Active Directory.
fileSystemName
Name of an existing file system in Microsoft Azure Data Lake Storage Gen2.
directoryPath
Path of an existing directory under the given file system. The default is the root directory.
For more information about creating a client ID, client secret, tenant ID, and file system name, contact the Azure administrator or see Microsoft Azure Data Lake Storage Gen2 documentation.

Microsoft Azure SQL Data Warehouse Connection Options

Use connection options to define a Microsoft Azure SQL Data Warehouse Connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or non-alphanumeric character, enclose the value in quotation marks.
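For example, the following command is a sketch of a Microsoft Azure SQL Data Warehouse connection. The connection type value AZURESQLDW, the domain and user names, and the JDBC and storage values are illustrative placeholders:
./infacmd.sh createConnection -dn Domain_Example -un Administrator -pd Administrator -cn SQLDW_conn -ct AZURESQLDW -o "JdbcUrl='jdbc:sqlserver://myserver.database.windows.net:1433;database=mydb' JdbcUsername=myuser JdbcPassword=mypassword SchemaName=dbo BlobAccountName=mystorage BlobAccountKey=my_storage_key"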
The following table describes the Microsoft Azure SQL Data Warehouse Connection options for the infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
JdbcUrl
Microsoft Azure SQL Data Warehouse JDBC connection string. For example, you can enter the following connection string:
jdbc:sqlserver://<Server>.database.windows.net:1433;database=<Database>
JdbcUsername
User name to connect to the Microsoft Azure SQL Data Warehouse account.
JdbcPassword
Password to connect to the Microsoft Azure SQL Data Warehouse account.
SchemaName
Name of the schema in Microsoft Azure SQL Data Warehouse.
BlobAccountName
Name of the Microsoft Azure Storage account to stage the files.
BlobAccountKey
Microsoft Azure Storage access key to stage the files.
EndPointSuffix
Type of Microsoft Azure end-points. You can specify any of the following end-points:
  • - core.windows.net: Default
  • - core.usgovcloudapi.net: To select the US government Microsoft Azure end-points
  • - core.chinacloudapi.cn: Not applicable
VNetRule
Enable to connect to a Microsoft Azure SQL Data Warehouse endpoint residing in a virtual network (VNet).

Microsoft SQL Server Connection Options

Use connection options to define the Microsoft SQL Server connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
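For example, the following command is a sketch of a Microsoft SQL Server connection. The connection type value SQLSERVER, the domain and user names, and the option values, including the code page, are illustrative placeholders:
./infacmd.sh createConnection -dn Domain_Example -un Administrator -pd Administrator -cn MSSQL_conn -ct SQLSERVER -o "DataAccessConnectString=sqlhost01@mydb MetadataAccessConnectString='jdbc:informatica:sqlserver://sqlhost01:1433;DatabaseName=mydb' CodePage=MS1252 ProviderType=0 UseDSN=false"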
The following table describes Microsoft SQL Server connection options for the infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
UseTrustedConnection
Optional. The Integration Service uses Windows authentication to access the Microsoft SQL Server database. The user name that starts the Integration Service must be a valid Windows user with access to the Microsoft SQL Server database. True or false. Default is false.
PassThruEnabled
Optional. Enables pass-through security for the connection. When you enable pass-through security for a connection, the domain uses the client user name and password to log into the corresponding database, instead of the credentials defined in the connection object.
MetadataAccessConnectString
JDBC connection URL to access metadata from the database.
Use the following connection URL:
jdbc:informatica:sqlserver://<host name>:<port>;DatabaseName=<database name>
To test the connection with NTLM authentication, include the following parameters in the connection string:
  • - AuthenticationMethod. The NTLM authentication version to use.
  • Note: UNIX supports NTLMv1 and NTLMv2 but not NTLM.
  • - Domain. The domain that the SQL server belongs to.
The following example shows the connection string for an SQL server that uses NTLMv2 authentication in an NT domain named Informatica.com:
jdbc:informatica:sqlserver://host01:1433;DatabaseName=SQL1;AuthenticationMethod=ntlm2java;Domain=Informatica.com
If you connect with NTLM authentication, you can enable the Use trusted connection option in the MS SQL Server connection properties. If you connect with NTLMv1 or NTLMv2 authentication, you must provide the user name and password in the connection properties.
AdvancedJDBCSecurityOptions
Optional. Database parameters for metadata access to a secure database. Informatica treats the value of the AdvancedJDBCSecurityOptions field as sensitive data and encrypts the parameter string.
To connect to a secure database, include the following parameters:
  • - EncryptionMethod. Required. Indicates whether data is encrypted when transmitted over the network. This parameter must be set to SSL.
  • - ValidateServerCertificate. Optional. Indicates whether Informatica validates the certificate that is sent by the database server.
  • If this parameter is set to True, Informatica validates the certificate that is sent by the database server. If you specify the HostNameInCertificate parameter, Informatica also validates the host name in the certificate.
    If this parameter is set to false, Informatica does not validate the certificate that is sent by the database server. Informatica ignores any truststore information that you specify.
  • - HostNameInCertificate. Optional. Host name of the machine that hosts the secure database. If you specify a host name, Informatica validates the host name included in the connection string against the host name in the SSL certificate.
  • - TrustStore. Required. Path and file name of the truststore file that contains the SSL certificate for the database.
  • - TrustStorePassword. Required. Password for the truststore file for the secure database.
Note: For a complete list of the secure JDBC parameters, see the DataDirect JDBC documentation.
Informatica appends the secure JDBC parameters to the connection string. If you include the secure JDBC parameters directly in the connection string, do not enter any parameters in the AdvancedJDBCSecurityOptions field.
DataAccessConnectString
Required. Connection string used to access data from the database.
Enter the connection string in the following format:
<server name>@<database name>
DomainName
Optional. The name of the domain where Microsoft SQL Server is running.
PacketSize
Optional. Increase the network packet size to allow larger packets of data to cross the network at one time.
CodePage
Required. Code page used to read from or write to the database. Use the ISO code page name, such as ISO-8859-6. The code page name is not case sensitive.
UseDSN
Required. Determines whether the Data Integration Service must use the Data Source Name for the connection.
If you set the option value to true, the Data Integration Service retrieves the database name and server name from the DSN.
If you set the option value to false, you must enter the database name and server name.
ProviderType
Required. The connection provider that you want to use to connect to the Microsoft SQL Server database.
You can define one of the following values:
  • - 0. Set the value to 0 if you want to use the ODBC provider type. Default is 0.
  • - 1. Set the value to 1 if you want to use the OLEDB provider type.
OwnerName
Optional. The table owner name.
SchemaName
Optional. The name of the schema in the database. You must specify the schema name for the Profiling Warehouse if the schema name is different from the database user name. You must specify the schema name for the data object cache database if the schema name is different from the database user name and if you configure user-managed cache tables.
EnvironmentSQL
Optional. SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database.
For example, ALTER SESSION SET CURRENT_SCHEMA=INFA_USR;
Note: Enclose special characters in double quotes.
TransactionSQL
Optional. SQL commands to execute before each transaction. The Data Integration Service executes the transaction SQL at the beginning of each transaction.
For example, SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Note: Enclose special characters in double quotes.
QuoteChar
Optional. The type of character used to identify special characters and reserved SQL keywords, such as WHERE. The Data Integration Service places the selected character around special characters and reserved SQL keywords. Default is 0.
EnableQuotes
Optional. Determines whether to enable quotes for this connection.
When enabled, the Data Integration Service places identifier characters around table, view, schema, synonym, and column names when generating and executing SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Valid values are True or False. Default is True.
EnableConnectionPool
Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. Valid values are True or False. Default is True.
ConnectionPoolSize
Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15.
ConnectionPoolMaxIdleTime
Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. Default is 120.
ConnectionPoolMinConnections
Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0.

Microsoft Dynamics CRM Connection Options

Use connection options to define a Microsoft Dynamics CRM connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
For example,
./infacmd.sh createconnection -dn Domain_Adapters_1020_Uni -un Administrator -pd Administrator -cn msd_cmdline_AD -cid msd_cmdline_edit -ct MSDYNAMICS -o
"AuthenticationType=Passport DiscoveryServiceURL=https://disco.crm8.dynamics.com/XRMServices/2011/Discovery.svc Username=skmanja@InformaticaLLC.onmicrosoft.com
Password=AwesomeDay103 OrganizationName=org00faf3b6 Domain=<dummy value> SECURITYTOKENSERVICE=<dummy value>"
Separate multiple options with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes Microsoft Dynamics CRM connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
AuthenticationType
Required. Authentication type for the connection. Provide one of the following authentication types:
  • - Passport. Often used for online deployment and online deployment combined with Internet-facing deployment of Microsoft Dynamics CRM.
  • - Claims-based. Often used for on-premise and Internet-facing deployment of Microsoft Dynamics CRM.
  • - Active directory. Often used for on-premise deployment of Microsoft Dynamics CRM.
DiscoveryServiceURL
Required. URL of the Microsoft Dynamics CRM service.
Use the following format: <http/https>://<Application server name>:<port>/XRMServices/2011/Discovery.svc
To find the Discovery Service URL, log in to the Microsoft Live instance and click Settings > Customization > Developer Resources.
Domain
Required. Domain to which the user belongs. You must provide the complete domain name. For example, msd.sampledomain.com.
Configure domain for active directory and claims-based authentication.
Note: If you select Passport authentication type, you must provide a dummy value for Domain.
ConfigFilesForMetadata
Configuration directory for the client.
Default directory is: <INFA_HOME>/clients/DeveloperClient/msdcrm/conf
OrganizationName
Required. Microsoft Dynamics CRM organization name. Organization names are case sensitive.
For Microsoft Live authentication, use the Microsoft Live Organization Unique Name.
To find the Organization Unique Name, log in to the Microsoft Live instance and click Settings > Customization > Developer Resources.
Password
Required. Password to authenticate the user.
ConfigFilesForData
Configuration directory for the server.
If the server file is located in a different directory, specify the directory path.
SecurityTokenService
Required. Microsoft Dynamics CRM security token service URL. For example, https://sts1.<company>.com.
Configure for claims-based authentication.
Note: If you select Passport or Active Directory authentication type, you must provide a dummy value for SecurityTokenService.
Username
Required. User ID registered with Microsoft Dynamics CRM.
UseMetadataConfigForDataAccess
Select this option if the configuration file and server file are in the same directory.
If the server file is in a different directory, uncheck this option and specify the directory path in the Data Access field. Provide one of the following values:
  • - true for checked
  • - false for unchecked
KeyStoreFileName
Name of the keystore file that contains the keys and certificates required for secure communication.
If you want to use the Java cacerts file, clear this field.
KeyStorePassword
Password for the infa_keystore.jks file.
If you want to use the Java cacerts file, clear this field.
TrustStoreFileName
Set the INFA_TRUSTSTORE environment variable to the directory that contains the truststore file infa_truststore.jks. If the file is not available at the specified path, the Data Integration Service checks for the certificate in the Java cacerts file.
If you want to use the Java cacerts file, clear this field.
TrustStorePassword
Password for the infa_truststore.jks file.
If you want to use the Java cacerts file, clear this field.

Netezza Connection Options

Use connection options to define a Netezza connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
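For example, the following command is a sketch with placeholder values; the NETEZZA connection type string and the port number are assumptions:
./infacmd.sh createconnection -dn Domain_Name -un Administrator -pd Password -cn Netezza_CLI -ct NETEZZA -o "connectionString=NZ_DSN jdbcUrl=jdbc:netezza://nzhost:5480/NZDB username=nzuser password=nzpwd timeout=300"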
The following table describes the Netezza connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
connectionString
Required. Name of the ODBC data source that you create to connect to the Netezza database.
jdbcUrl
Required. JDBC URL that the Developer tool must use when it connects to the Netezza database. Use the following format:
jdbc:netezza://<hostname>:<port>/<database name>
username
Required. User name with the appropriate permissions to access the Netezza database.
password
Required. Password for the database user name.
timeout
Required. Number of seconds that the Developer tool waits for a response from the Netezza database before it closes the connection.

OData Connection Options

Use connection options to define an OData connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
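For example, the command below is a sketch with placeholder values; the ODATA connection type string and the service root URL are assumptions:
./infacmd.sh createconnection -dn Domain_Name -un Administrator -pd Password -cn OData_CLI -ct ODATA -o "URL=https://odata.example.com/service.svc securityType=SSL trustStoreFileName=infa_truststore.jks trustStorePassword=TrustStorePwd keyStoreFileName=infa_keystore.jks keyStorePassword=KeyStorePwd"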
The following table describes the OData connection options for infacmd isp CreateConnection and UpdateConnection commands:
Property
Description
URL
Required. OData service root URL that exposes the data that you want to read.
securityType
Optional. Security protocol that the Developer tool must use to establish a secure connection with the OData server.
Enter one of the following values:
  • - None
  • - SSL
  • - TLS
trustStoreFileName
Required if you enter a security type.
Name of the truststore file that contains the public certificate for the OData server.
trustStorePassword
Required if you enter a security type.
Password for the truststore file that contains the public certificate for the OData server.
keyStoreFileName
Required if you enter a security type.
Name of the keystore file that contains the private key for the OData server.
keyStorePassword
Required if you enter a security type.
Password for the keystore file that contains the private key for the OData server.

ODBC Connection Options

Use connection options to define the ODBC connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
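For example, the following command is a sketch with placeholder values; the ODBC connection type string is an assumption, and ODBC_DSN stands in for an ODBC data source name:
./infacmd.sh createconnection -dn Domain_Name -un Administrator -pd Password -cn ODBC_CLI -ct ODBC -o "DataAccessConnectString=ODBC_DSN CodePage=UTF-8 EnableQuotes=False"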
The following table describes ODBC connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
PassThruEnabled
Optional. Enables pass-through security for the connection. When you enable pass-through security for a connection, the domain uses the client user name and password to log into the corresponding database, instead of the credentials defined in the connection object.
DataAccessConnectString
Connection string used to access data from the database.
Enter the connection string in the following format:
<database name>
CodePage
Required. Code page used to read from a source database or write to a target database or file.
EnvironmentSQL
Optional. SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database.
For example, ALTER SESSION SET CURRENT_SCHEMA=INFA_USR;
Note: Enclose special characters in double quotes.
TransactionSQL
Optional. SQL commands to execute before each transaction. The Data Integration Service executes the transaction SQL at the beginning of each transaction.
For example, SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Note: Enclose special characters in double quotes.
QuoteChar
Optional. The type of character used to identify special characters and reserved SQL keywords, such as WHERE. The Data Integration Service places the selected character around special characters and reserved SQL keywords. Default is 4.
ODBC Provider
Optional. The type of database to which the Data Integration Service connects using ODBC. For pushdown optimization, specify the database type to enable the Data Integration Service to generate native database SQL. The options are as follows:
  • - Other
  • - Sybase
  • - Microsoft_SQL_Server
  • - Teradata
  • - Netezza
  • - Greenplum
Default is Other.
EnableQuotes
Optional. Determines whether to enable quotes for this connection.
When enabled, the Data Integration Service places identifier characters around table, view, schema, synonym, and column names when generating and executing SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Valid values are True or False. Default is False.
EnableConnectionPool
Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. Valid values are True or False. Default is True.
ConnectionPoolSize
Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15.
ConnectionPoolMaxIdleTime
Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. Default is 120.
ConnectionPoolMinConnections
Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0.

Oracle Connection Options

Use connection options to define the Oracle connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
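For example, the following command is a sketch with placeholder values; the ORACLE connection type string is an assumption, and ORCL stands in for a TNSNAMES entry:
./infacmd.sh createconnection -dn Domain_Name -un Administrator -pd Password -cn Oracle_CLI -ct ORACLE -o "DataAccessConnectString=ORCL CodePage=UTF-8 MetadataAccessConnectString=jdbc:informatica:oracle://orahost:1521;SID=ORCL"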
The following table describes Oracle connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
PassThruEnabled
Optional. Enables pass-through security for the connection. When you enable pass-through security for a connection, the domain uses the client user name and password to log into the corresponding database, instead of the credentials defined in the connection object.
MetadataAccessConnectString
JDBC connection URL to access metadata from the database.
Use the following connection URL:
jdbc:informatica:oracle://<host name>:<port>;SID=<database name>
AdvancedJDBCSecurityOptions
Optional. Database parameters for metadata access to a secure database. Informatica treats the value of the AdvancedJDBCSecurityOptions field as sensitive data and encrypts the parameter string.
To connect to a secure database, include the following parameters:
  • - EncryptionMethod. Required. Indicates whether data is encrypted when transmitted over the network. This parameter must be set to SSL.
  • - ValidateServerCertificate. Optional. Indicates whether Informatica validates the certificate that is sent by the database server.
  • If this parameter is set to true, Informatica validates the certificate that is sent by the database server. If you specify the HostNameInCertificate parameter, Informatica also validates the host name in the certificate.
    If this parameter is set to false, Informatica does not validate the certificate that is sent by the database server. Informatica ignores any truststore information that you specify.
  • - HostNameInCertificate. Optional. Host name of the machine that hosts the secure database. If you specify a host name, Informatica validates the host name included in the connection string against the host name in the SSL certificate.
  • - TrustStore. Required. Path and file name of the truststore file that contains the SSL certificate for the database.
  • - TrustStorePassword. Required. Password for the truststore file for the secure database.
  • - KeyStore. Required. Path and file name of the keystore file.
  • - KeyStorePassword. Password for the keystore file for the secure database.
Note: For a complete list of the secure JDBC parameters, see the DataDirect JDBC documentation.
Informatica appends the secure JDBC parameters to the connection string. If you include the secure JDBC parameters directly in the connection string, do not enter any parameters in the AdvancedJDBCSecurityOptions field.
DataAccessConnectString
Connection string used to access data from the database.
Enter the database name from the TNSNAMES entry in the following format:
<database name>
CodePage
Required. Code page used to read from a source database or write to a target database or file.
EnvironmentSQL
Optional. SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database.
For example, ALTER SESSION SET CURRENT_SCHEMA=INFA_USR;
Note: Enclose special characters in double quotes.
TransactionSQL
Optional. SQL commands to execute before each transaction. The Data Integration Service executes the transaction SQL at the beginning of each transaction.
For example, SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
Note: Enclose special characters in double quotes.
EnableParallelMode
Optional. Enables parallel processing when loading data into a table in bulk mode. Used for Oracle. True or false. Default is false.
QuoteChar
Optional. The type of character used to identify special characters and reserved SQL keywords, such as WHERE. The Data Integration Service places the selected character around special characters and reserved SQL keywords. Default is 0.
EnableQuotes
Optional. Determines whether to enable quotes for this connection.
When enabled, the Data Integration Service places identifier characters around table, view, schema, synonym, and column names when generating and executing SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Valid values are True or False. Default is True.
EnableConnectionPool
Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. Valid values are True or False. Default is True.
ConnectionPoolSize
Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15.
ConnectionPoolMaxIdleTime
Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. Default is 120.
ConnectionPoolMinConnections
Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0.

Salesforce Connection Options

Use connection options to define a Salesforce connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
Example for Salesforce connection using infacmd

infacmd createConnection -dn DomainName -un Domain_UserName -pd Domain_Pwd -cn Connection_Name -cid Connection_ID -ct SALESFORCE -o userName=salesforceUserName password=salesforcePWD SERVICE_URL=https://login.salesforce.com/services/Soap/u/42.0
Example for OAuth Salesforce connection using pmcmd

pmcmd createConnection -s Salesforce -n ConnectionName -u -p -l CodePage -k ConnectionType=OAuth RefreshToken=salesforceRefreshToken ConsumerKey=salesforceConsumerKey ConsumerSecret=salesforceConsumerSecret Service_URL=https://login.salesforce.com/services/Soap/u/42.0
Example for Standard Salesforce connection using pmcmd

pmcmd createConnection -s Salesforce -n ConnectionName -u salesforceUserName -p salesforcePWD -l CodePage -k ConnectionType=Standard Service_URL=https://login.salesforce.com/services/Soap/u/42.0

To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes Salesforce connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
Username
Salesforce username.
Password
Password for the Salesforce user name. The password is case sensitive.
To access Salesforce outside the trusted network of your organization, you must append a security token to your password to log in to the API or a desktop client.
To receive or reset your security token, log in to Salesforce and click Setup > My Personal Information > Reset My Security Token.
Refresh Token
For OAuth Salesforce connection. The Refresh Token of Salesforce generated using the Consumer Key and Consumer Secret.
Consumer Key
For OAuth Salesforce connection. The Consumer Key obtained from Salesforce, required to generate the Refresh Token. For more information about how to generate the Consumer Key, see the Salesforce documentation.
Consumer Secret
For OAuth Salesforce connection. The Consumer Secret obtained from Salesforce, required to generate the Refresh Token. For more information about how to generate the Consumer Secret, see the Salesforce documentation.
Connection Type
Select the Standard or OAuth Salesforce connection.
Service URL
URL of the Salesforce service that you want to access. In a test or development environment, you might want to access the Salesforce Sandbox testing environment. For more information about the Salesforce Sandbox, see the Salesforce documentation.

Salesforce Marketing Cloud Connection Options

Use connection options to define a Salesforce Marketing Cloud connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
Example for infacmd createConnection command:
./infacmd.sh createConnection -dn DomainName -un Domain_UserName -pd Domain_Pwd -cn Connection_Name -cid Connection_ID -ct SFMC -o salesforce_marketing_cloud_url=https://webservice.s7.exacttarget.com/etframework.wsdl userName=SFMCUserName password=SFMCpwd clientid=SFMCclientid clientsecret=SFMCclientsecret enable_logging=true UTC_Offset=UTC+05:30 Batch_Size=1
Example for infacmd updateConnection command:
./infacmd.sh updateConnection -dn DomainName -un Domain_UserName -pd Domain_Pwd -cn Connection_Name -o salesforce_marketing_cloud_url=https://mc6tbszr9y72l86wknwg5w3c3k7q.soap.marketingcloudapis.com/etframework.wsdl userName=SFMCUserName password=SFMCpwd clientid=SFMCclientid clientsecret=SFMCclientsecret enable_logging=true UTC_Offset=UTC+05:30 Batch_Size=1
Example for infacmd removeConnection command:
./infacmd.sh removeConnection -dn DomainName -un Domain_UserName -pd Domain_Pwd -cn Connection_Name
The following table describes Salesforce Marketing Cloud connection options for the infacmd.sh createConnection, updateConnection, and removeConnection commands:
Connection property
Description
Domain Name
Informatica domain where you want to create the connection.
Domain User Name
User name of the domain.
Domain Password
Password for the domain.
Connection Name
Name of the Salesforce Marketing Cloud connection.
Connection ID
The Data Integration Service uses the ID to identify the connection.
Salesforce Marketing Cloud Url
The URL that the Data Integration Service uses to connect to the Salesforce Marketing Cloud WSDL.
The following URL is an example for OAuth 1.0 URL:
https://webservice.s7.exacttarget.com/etframework.wsdl
The following URL is an example for OAuth 2.0 URL:
https://<SUBDOMAIN>.soap.marketingcloudapis.com/etframework.wsdl
Informatica recommends that you upgrade to OAuth 2.0 before Salesforce Marketing Cloud drops support for OAuth 1.0.
Username
User name of the Salesforce Marketing Cloud account.
Password
Password for the Salesforce Marketing Cloud account.
ClientId
The client ID of Salesforce Marketing Cloud required to generate a valid access token.
ClientSecret
The client secret of Salesforce Marketing Cloud required to generate a valid access token.
Enable Logging
When you enable logging, you can view the session log for the tasks.
UTC Offset
The Secure Agent uses the UTC Offset connection property to read data from and write data to Salesforce Marketing Cloud in the specified UTC offset time zone.
Batch Size
Number of rows that the Secure Agent writes in a batch to the target.
When you insert or update data and specify the contact key, the data associated with the specified contact ID is inserted or updated in a batch to Salesforce Marketing Cloud. When you upsert data to Salesforce Marketing Cloud, do not specify the contact key.

SAPAPPLICATIONS Connection Options

Use connection options to define the SAPAPPLICATIONS connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
Separate multiple options with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
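For example, the following command is a sketch with placeholder values; the SAPAPPLICATIONS connection type string follows the section name and is an assumption:
./infacmd.sh createconnection -dn Domain_Name -un Administrator -pd Password -cn SAP_CLI -ct SAPAPPLICATIONS -o "UserName=sapuser Password=sappwd HostName=saphost ClientNumber=800 SystemNumber=00 Language=EN"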
The following table describes SAPAPPLICATIONS connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
UserName
Required. SAP system user name.
Password
Required. Password for the user name.
HostName
Required. Host name of the SAP application.
ClientNumber
Required. SAP client number.
SystemNumber
Required. SAP system number.
Language
Optional. SAP Logon language.

Sequential Connection Options

Use SEQ connection options to define a connection to a sequential z/OS data set.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
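For example, the following command is a sketch with placeholder values; the SEQ connection type string is an assumption, and node1 stands in for a NODE statement name from the PowerExchange dbmover.cfg file:
./infacmd.sh createconnection -dn Domain_Name -un Administrator -pd Password -cn SEQ_CLI -ct SEQ -o "CodePage=ISO-8859-6 Location=node1 WriteMode=CONFIRMWRITEON"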
The following table describes SEQ connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
CodePage
Required. Code to read from or write to the sequential file. Use the ISO code page name, such as ISO-8859-6. The code page name is not case sensitive.
ArraySize
Optional. Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 5000. Default is 25.
Compression
Optional. Compresses the data to decrease the amount of data that Informatica applications write over the network. True or false. Default is false.
EncryptionLevel
Optional. Level of encryption. If you specify AES for the EncryptionType option, specify one of the following values to indicate the level of AES encryption:
  • - 1. Use a 128-bit encryption key.
  • - 2. Use a 192-bit encryption key.
  • - 3. Use a 256-bit encryption key.
Default is 1.
Note: If you specify None for encryption type, the Data Integration Service ignores the encryption level value.
EncryptionType
Optional. Controls whether to use encryption. Specify one of the following values:
  • - None
  • - AES
Default is None.
InterpretAsRows
Optional. If true, the pacing size value represents a number of rows. If false, the pacing size represents kilobytes. Default is false.
Location
Location of the PowerExchange Listener node that can connect to the data source. The location is defined in the first parameter of the NODE statement in the PowerExchange dbmover.cfg configuration file.
OffLoadProcessing
Optional. Moves bulk data processing from the data source machine to the Data Integration Service machine.
Enter one of the following values:
  • - Auto. The Data Integration Service determines whether to use offload processing.
  • - Yes. Use offload processing.
  • - No. Do not use offload processing.
Default is Auto.
PacingSize
Optional. Slows the data transfer rate in order to reduce bottlenecks. The lower the value, the greater the session performance. Minimum value is 0. Enter 0 for optimal performance. Default is 0.
WorkerThread
Optional. Number of threads that the Data Integration Service uses to process bulk data when offload processing is enabled. For optimal performance, this value should not exceed the number of available processors on the Data Integration Service machine. Valid values are 1 through 64. Default is 0, which disables multithreading.
WriteMode
Enter one of the following write modes:
  • - CONFIRMWRITEON. Sends data to the Data Integration Service and waits for a success/no success response before sending more data.
  • - CONFIRMWRITEOFF. Sends data to the Data Integration Service without waiting for a success/no success response. Use this option when the target table can be reloaded if an error occurs.
  • - ASYNCHRONOUSWITHFAULTT. Sends data to the Data Integration Service asynchronously with the ability to detect errors.
Default is CONFIRMWRITEON.
EnableConnectionPool
Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. True or false. Default is false.
ConnectionPoolSize
Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15.
ConnectionPoolMaxIdleTime
Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. Default is 120.
ConnectionPoolMinConnections
Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0.

Snowflake Connection Options

Use connection options to define a Snowflake connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
For example,
./infacmd.sh createconnection -dn Domain_Snowflake -un Administrator -pd Administrator -cn Snowflake_CLI -ct SNOWFLAKE -o "user=INFAADPQA password=passwd account=informatica role=ROLE_PC_AUTO warehouse=QAAUTO_WH"
To enter multiple options, separate them with a space. To enter a value that contains a space or other nonalphanumeric character, enclose the value in quotation marks.
The following table describes the mandatory Snowflake connection options for the infacmd isp CreateConnection and UpdateConnection commands:
Property
Description
connectionId
String that the Data Integration Service uses to identify the connection.
connectionType
The connection type. Set the value to SNOWFLAKE.
name
The name of the connection.
account
The name of the Snowflake account.
additionalparam
Enter one or more JDBC connection parameters in the following format:
<param1>=<value>&<param2>=<value>&<param3>=<value>....
For example:
user=jon&warehouse=mywh&db=mydb&schema=public
password
The password to connect to the Snowflake account.
role
The Snowflake role assigned to the user.
user
The user name to connect to the Snowflake account.
warehouse
The Snowflake warehouse name.

Tableau Connection Options

Use connection options to define a Tableau connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
For example,
./infacmd.sh createconnection -dn Domain -un Username -pd Password -cn "Connection name" -ct TABLEAU -o "connectionURL= contentURL= password= tableauProduct='Tableau Server' username=infaadmin site='' tabcmdInstallLocation='' tableauServer=true"
To enter multiple options, separate them with a space. To enter a value that contains a space or other nonalphanumeric character, enclose the value in quotation marks.
The following table describes the mandatory Tableau connection options for the infacmd isp CreateConnection and UpdateConnection commands:
Connection Property
Description
Tableau Product
The name of the Tableau product to which you want to connect.
You can choose one of the following Tableau products to publish the TDE or TWBX file:
  • - Tableau Desktop. Creates a TDE file in the Data Integration Service machine. You can then manually import the TDE file to Tableau Desktop.
  • - Tableau Server. Publishes the generated TDE or TWBX file to Tableau Server.
  • - Tableau Online. Publishes the generated TDE or TWBX file to Tableau Online.
Connection URL
URL of Tableau Server or Tableau Online to which you want to publish the TDE or TWBX file. The URL has the following format: http://<Host name of Tableau Server or Tableau Online>:<port>
User Name
User name of the Tableau Server or Tableau Online account.
Password
Password for the Tableau Server or Tableau Online account.
Content URL
The name of the site on Tableau Server or Tableau Online where you want to publish the TDE or TWBX file.
Contact the Tableau administrator to provide the site name.

Tableau V3 Connection Options

Use connection options to define a Tableau V3 connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
For example,
./infacmd.sh createConnection -dn Domain -un Username -pd Password -cn "Connection name" -ct TABLEAUV3 -o "connectionURL= site= password= tableauProduct='Tableau Server' username="
To enter multiple options, separate options with spaces. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
The following table describes the mandatory Tableau V3 connection options for the infacmd isp CreateConnection and UpdateConnection commands:
Connection Property
Description
Tableau Product
The name of the Tableau product to which you want to connect.
You can choose one of the following Tableau products to publish the .hyper or TWBX file:
  • - Tableau Desktop. Creates a .hyper file in the Data Integration Service machine. You can then manually import the .hyper file to Tableau Desktop.
  • - Tableau Server. Publishes the generated .hyper or TWBX file to Tableau Server.
  • - Tableau Online. Publishes the generated .hyper or TWBX file to Tableau Online.
Connection URL
The URL of Tableau Server or Tableau Online to which you want to publish the .hyper or TWBX file.
Enter the URL in the following format: http://<Host name of Tableau Server or Tableau Online>:<port>
User Name
The user name of the Tableau Server or Tableau Online account.
Password
The password for the Tableau Server or Tableau Online account.
Site ID
The ID of the site on Tableau Server or Tableau Online where you want to publish the .hyper or TWBX file.
Note: Contact the Tableau administrator to provide the site ID.
Schema File Path
The path to a sample .hyper file from where the Data Integration Service imports the Tableau metadata.
Enter one of the following options for the schema file path:
  • - Absolute path to the .hyper file.
  • - Directory path for the .hyper files.
  • - Empty directory path.
The path you specify for the schema file becomes the default path for the target .hyper file. If you do not specify a file path, the Data Integration Service uses the following default file path for the target .hyper file:
<Data Integration Service installation directory>/apps/Data_Integration_Server/<latest version>/bin/rtdm

Teradata Parallel Transporter Connection Options

Use connection options to define a Teradata PT connection.
Enter connection options in the following format:
... -o option_name='value' option_name='value' ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
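For example, the following command is a sketch with placeholder values; the TERADATAPT connection type string is an assumption, and the driver class and JDBC URL follow the Teradata JDBC driver conventions:
./infacmd.sh createconnection -dn Domain_Name -un Administrator -pd Password -cn TPT_CLI -ct TERADATAPT -o "UserName='tduser' Password='tdpwd' DriverName='com.teradata.jdbc.TeraDriver' ConnectionString='jdbc:teradata://tdhost/database=TDDB' TDPID='tdhost' databaseName='TDDB'"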
The following table describes Teradata PT connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
UserName
Required. Teradata database user name with the appropriate write permissions to access the database.
Password
Required. Password for the Teradata database user name.
DriverName
Required. Name of the Teradata JDBC driver.
ConnectionString
Required. JDBC URL to fetch metadata.
TDPID
Required. Name or IP address of the Teradata database machine.
databaseName
Required. Teradata database name.
If you do not enter a database name, Teradata PT API uses the default login database name.
DataCodePage
Optional. Code page associated with the database.
When you run a mapping that loads to a Teradata target, the code page of the Teradata PT connection must be the same as the code page of the Teradata target.
Default is UTF-8.
Tenacity
Optional. Number of hours that Teradata PT API continues trying to log on when the maximum number of operations run on the Teradata database.
Must be a positive, non-zero integer. Default is 4.
MaxSessions
Optional. Maximum number of sessions that Teradata PT API establishes with the Teradata database.
Must be a positive, non-zero integer. Default is 4.
MinSessions
Optional. Minimum number of Teradata PT API sessions required for the Teradata PT API job to continue.
Must be a positive integer between 1 and the Max Sessions value. Default is 1.
Sleep
Optional. Number of minutes that Teradata PT API pauses before it retries to log on when the maximum number of operations run on the Teradata database.
Must be a positive, non-zero integer. Default is 6.
useMetadataJdbcUrl
Optional.
Set this option to true to indicate that the Teradata Connector for Hadoop (TDCH) must use the JDBC URL that you specified in the connection string.
Set this option to false to specify a different JDBC URL that TDCH must use when it runs the mapping.
tdchJdbcUrl
Required.
JDBC URL that TDCH must use when it runs the mapping.
dataEncryption
Required.
Enables full security encryption of SQL requests, responses, and data on Windows.
To enable data encryption on Unix, add the command UseDataEncryption=Yes to the DSN in the odbc.ini file.
authenticationType
Required. Authenticates the user.
Enter one of the following values for the authentication type:
  • - Native. Authenticates your user name and password against the Teradata database specified in the connection.
  • - LDAP. Authenticates user credentials against the external LDAP directory service.
Default is Native.
hadoopConnector
Required if you want to enable Sqoop connectivity for the data object that uses the JDBC connection. The Data Integration Service runs the mapping in the Hadoop run-time environment through Sqoop.
You can configure Sqoop connectivity for relational data objects, customized data objects, and logical data objects that are based on a JDBC-compliant database.
Set the value to SQOOP_146 to enable Sqoop connectivity.
hadoopConnectorArgs
Optional. Enter the arguments that Sqoop must use to connect to the database. Enclose the Sqoop arguments within single quotes. Separate multiple arguments with a space.
For example, hadoopConnectorArgs='--<Sqoop argument 1> --<Sqoop argument 2>'
To read data from or write data to Teradata through Teradata Connector for Hadoop (TDCH) specialized connectors for Sqoop, define the TDCH connection factory class in the hadoopConnectorArgs argument. The connection factory class varies based on the TDCH Sqoop Connector that you want to use.
  • - To use Cloudera Connector Powered by Teradata, configure the hadoopConnectorArgs argument as follows:
    hadoopConnectorArgs='-Dsqoop.connection.factories=com.cloudera.connector.teradata.TeradataManagerFactory'
  • - To use Hortonworks Connector for Teradata (powered by the Teradata Connector for Hadoop), configure the hadoopConnectorArgs argument as follows:
    hadoopConnectorArgs='-Dsqoop.connection.factories=org.apache.sqoop.teradata.TeradataManagerFactory'
If you do not enter Sqoop arguments, the Data Integration Service constructs the Sqoop command based on the JDBC connection properties.

Twitter Connection Options

Use connection options to define a Twitter connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
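For example, the command below is a sketch with placeholder OAuth values; the TWITTER connection type string is an assumption:
./infacmd.sh createconnection -dn Domain_Name -un Administrator -pd Password -cn Twitter_CLI -ct TWITTER -o "ConsumerKey=myConsumerKey ConsumerSecret=myConsumerSecret AccessToken=myAccessToken AccessSecret=myAccessSecret"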
The following table describes Twitter connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
ConsumerKey
The consumer key that you get when you create the application in Twitter. Twitter uses the key to identify the application.
ConsumerSecret
The consumer secret that you get when you create the Twitter application. Twitter uses the secret to establish ownership of the consumer key.
AccessToken
Access token that the OAuth Utility returns. Twitter uses this token instead of the user credentials to access the protected resources.
AccessSecret
Access secret that the OAuth Utility returns. The secret establishes ownership of a token.

Twitter Streaming Connection Options

Use connection options to define a Twitter Streaming connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
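For example, the following command is a sketch with placeholder values; the TWITTERSTREAMING connection type string is an assumption:
./infacmd.sh createconnection -dn Domain_Name -un Administrator -pd Password -cn TwitterStream_CLI -ct TWITTERSTREAMING -o "HoseType=Filter UserName=twitterUser Password=twitterPwd"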
The following table describes Twitter Streaming connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
HoseType
Streaming API methods. You can specify the following methods:
  • - Filter. The Twitter statuses/filter method returns public statuses that match the search criteria.
  • - Sample. The Twitter statuses/sample method returns a random sample of all public statuses.
UserName
Twitter user screen name.
Password
Twitter password.

VSAM Connection Options

Use connection options to define a VSAM connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
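For example, the following command is a sketch with placeholder values; the VSAM connection type string is an assumption, and node1 stands in for a PowerExchange Listener node name:
./infacmd.sh createconnection -dn Domain_Name -un Administrator -pd Password -cn VSAM_CLI -ct VSAM -o "CodePage=ISO-8859-6 Location=node1 WriteMode=CONFIRMWRITEON"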
The following table describes VSAM connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
CodePage
Required. Code to read from or write to the VSAM file. Use the ISO code page name, such as ISO-8859-6. The code page name is not case sensitive.
ArraySize
Optional. Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 5000. Default is 25.
Compression
Optional. Compresses the data to decrease the amount of data Informatica applications write over the network. True or false. Default is false.
EncryptionLevel
Optional. Level of encryption. If you specify AES for the EncryptionType option, specify one of the following values to indicate the level of AES encryption:
  • - 1. Use a 128-bit encryption key.
  • - 2. Use a 192-bit encryption key.
  • - 3. Use a 256-bit encryption key.
Default is 1.
Note: If you specify None for encryption type, the Data Integration Service ignores the encryption level value.
EncryptionType
Optional. Controls whether to use encryption. Specify one of the following values:
  • - None
  • - AES
Default is None.
InterpretAsRows
Optional. If true, the pacing size value represents a number of rows. If false, the pacing size represents kilobytes. Default is false.
Location
Location of the PowerExchange listener node that can connect to VSAM. The node is defined in the PowerExchange dbmover.cfg configuration file.
OffLoadProcessing
Optional. Moves bulk data processing from the VSAM source to the Data Integration Service machine.
Enter one of the following values:
  • - Auto. The Data Integration Service determines whether to use offload processing.
  • - Yes. Use offload processing.
  • - No. Do not use offload processing.
Default is Auto.
PacingSize
Optional. Slows the data transfer rate in order to reduce bottlenecks. The lower the value, the greater the session performance. Minimum value is 0. Enter 0 for optimal performance. Default is 0.
WorkerThread
Optional. Number of threads that the Data Integration Service uses to process bulk data when offload processing is enabled. For optimal performance, this value should not exceed the number of available processors on the Data Integration Service machine. Valid values are 1 through 64. Default is 0, which disables multithreading.
WriteMode
Enter one of the following write modes:
  • - CONFIRMWRITEON. Sends data to the Data Integration Service and waits for a success/no success response before sending more data.
  • - CONFIRMWRITEOFF. Sends data to the Data Integration Service without waiting for a success/no success response. Use this option when the target table can be reloaded if an error occurs.
  • - ASYNCHRONOUSWITHFAULTT. Sends data to the Data Integration Service asynchronously with the ability to detect errors.
Default is CONFIRMWRITEON.
EnableConnectionPool
Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. True or false. Default is false.
ConnectionPoolSize
Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15.
ConnectionPoolMaxIdleTime
Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. Default is 120.
ConnectionPoolMinConnections
Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0.

Web Content-Kapow Katalyst Connection Options

Use connection options to define a Web Content-Kapow Katalyst connection.
Enter connection options in the following format:
... -o option_name=value option_name=value ...
To enter multiple options, separate them with a space. To enter a value that contains a space or other non-alphanumeric character, enclose the value in quotation marks.
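For example, the command below is a sketch that reuses the default values described in the table; the KAPOW connection type string is an assumption:
./infacmd.sh createconnection -dn Domain_Name -un Administrator -pd Password -cn Kapow_CLI -ct KAPOW -o "ManagementConsoleURL=http://localhost:50080 RQLServicePort=50000 Username=kapowUser Password=kapowPwd"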
The following table describes Web Content-Kapow Katalyst connection options for infacmd isp CreateConnection and UpdateConnection commands:
Option
Description
ManagementConsoleURL
URL of the Local Management Console where the robot is uploaded.
The URL must start with http or https. For example, http://localhost:50080.
RQLServicePort
The port number where the socket service listens for the RQL service.
Enter a value from 1 through 65535. Default is 50000.
Username
User name required to access the Local Management Console.
Password
Password to access the Local Management Console.