Option | Argument | Description |
---|---|---|
-DomainName -dn | domain_name | Required. Name of the Informatica domain. You can set the domain name with the -dn option or the environment variable INFA_DEFAULT_DOMAIN. If you set a domain name with both methods, the -dn option takes precedence. |
-UserName -un | user_name | Required if the domain uses Native or LDAP authentication. User name to connect to the domain. You can set the user name with the -un option or the environment variable INFA_DEFAULT_DOMAIN_USER. If you set a user name with both methods, the -un option takes precedence. Optional if the domain uses Kerberos authentication. To run the command with single sign-on, do not set the user name. If you set the user name, the command runs without single sign-on. |
-Password -pd | password | Required if you specify the user name. Password for the user name. The password is case sensitive. You can set a password with the -pd option or the environment variable INFA_DEFAULT_DOMAIN_PASSWORD. If you set a password with both methods, the password set with the -pd option takes precedence. |
-SecurityDomain -sdn | security_domain | Required if the domain uses LDAP authentication. Optional if the domain uses native authentication or Kerberos authentication. Name of the security domain to which the domain user belongs. You can set a security domain with the -sdn option or the environment variable INFA_DEFAULT_SECURITY_DOMAIN. If you set a security domain name with both methods, the -sdn option takes precedence. The security domain name is case sensitive. If the domain uses native or LDAP authentication, the default is Native. If the domain uses Kerberos authentication, the default is the LDAP security domain created during installation. The name of the security domain is the same as the user realm specified during installation. |
-ResilienceTimeout -re | timeout_period_in_seconds | Optional. Amount of time in seconds that infacmd attempts to establish or re-establish a connection to the domain. If you omit this option, infacmd uses the timeout value specified in the INFA_CLIENT_RESILIENCE_TIMEOUT environment variable. If no value is specified in the environment variable, the default of 180 seconds is used. |
-ConnectionName -cn | connection_name | Name of the connection. The name is not case sensitive and must be unique within the domain. It cannot exceed 128 characters, contain spaces, or contain the following special characters: ~ ` ! $ % ^ & * ( ) - + = { [ } ] | \ : ; " ' < , > . ? / |
-ConnectionId -cid | connection_id | String that the Data Integration Service uses to identify the connection. The ID is not case sensitive. It must be 255 characters or less and must be unique in the domain. You cannot change this property after you create the connection. Default value is the connection name. |
-ConnectionType -ct | connection_type | Required. Type of connection. Use one of the following connection types:
You can use the infacmd isp ListConnections command to view connection types. |
-ConnectionUserName -cun | connection_user_name | Required. Database user name. |
-ConnectionPassword -cpd | connection_password | Required. Password for the database user name. You can set a password with the -cpd option or the environment variable INFA_DEFAULT_CONNECTION_PASSWORD. If you set the password with both methods, the -cpd option takes precedence. If you are creating an ADABAS, DB2I, DB2Z, IMS, SEQ, or VSAM connection, you can enter a valid PowerExchange passphrase instead of a password. Passphrases for access to databases and data sets on z/OS can be from 9 to 128 characters in length. Passphrases for access to DB2 for i5/OS can be up to 31 characters in length. Passphrases can contain the following characters:
' - ; # \ , . / ! % & * ( ) _ + { } : @ | < > ? Note: The first character is an apostrophe. Passphrases cannot include single quotation marks ('), double quotation marks ("), or currency symbols. If a passphrase contains spaces, you must enclose it with double quotation marks ("), for example, "This is an example passphrase". If a passphrase contains special characters, you must enclose it with triple double-quotation characters ("""), for example, """This passphrase contains special characters ! % & *.""". If a passphrase contains only alphanumeric characters without spaces, you can enter it without delimiters. Note: On z/OS, a valid RACF passphrase can be up to 100 characters in length. PowerExchange truncates passphrases longer than 100 characters when passing them to RACF for validation. To use passphrases, ensure that the PowerExchange Listener runs with a security setting of SECURITY=(1,N) or higher in the DBMOVER member. For more information, see "SECURITY Statement" in the PowerExchange Reference Manual. To use passphrases for IMS connections, ensure that the following additional requirements are met:
|
-VendorId -vid | vendor_id | Optional. ID of the external partner who built the adapter. |
-Options -o | options | Required. Enter name-value pairs separated by spaces. The connection options are different for each connection type. |
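Putting the general options together, a CreateConnection invocation takes the following shape. This is a minimal sketch with placeholder values only; substitute the -ct value for your connection type and take the -o name-value pairs from the tables that follow:

```
# Hypothetical domain, user, and connection names; replace every placeholder.
infacmd.sh isp CreateConnection -dn Domain_Example -un Administrator -pd MyPassword \
 -cn my_connection -ct <connection_type> \
 -o "<option1>=<value1> <option2>=<value2>"
```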
Option | Description |
---|---|
CodePage | Required. Code page used to read from or write to the database. Use the ISO code page name, such as ISO-8859-6. The code page name is not case sensitive. |
ArraySize | Optional. Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 5000. Default is 25. |
Compression | Optional. Compresses the data to decrease the amount of data Informatica applications write over the network. True or false. Default is false. |
EncryptionLevel | Optional. Level of encryption. If you specify AES for the EncryptionType option, specify one of the following values to indicate the level of AES encryption:
Default is 1. Note: If you specify None for encryption type, the Data Integration Service ignores the encryption level value. |
EncryptionType | Optional. Controls whether to use encryption. Specify one of the following values:
Default is None. |
InterpretAsRows | Optional. If true, the pacing size value represents a number of rows. If false, the pacing size represents kilobytes. Default is false. |
Location | Location of the PowerExchange Listener node that can connect to the database. The location is defined in the first parameter of the NODE statement in the PowerExchange dbmover.cfg configuration file. |
OffLoadProcessing | Optional. Moves bulk data processing from the source machine to the Data Integration Service machine. Enter one of the following values:
Default is Auto. |
PacingSize | Optional. Slows the data transfer rate in order to reduce bottlenecks. The lower the value, the greater the session performance. Minimum value is 0. Enter 0 for optimal performance. Default is 0. |
WorkerThread | Optional. Number of threads that the Data Integration Service uses to process bulk data when offload processing is enabled. For optimal performance, this value should not exceed the number of available processors on the Data Integration Service machine. Valid values are 1 through 64. Default is 0, which disables multithreading. |
WriteMode | Enter one of the following write modes:
Default is CONFIRMWRITEON. |
EnableConnectionPool | Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. True or false. Default is false. |
ConnectionPoolSize | Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15. |
ConnectionPoolMaxIdleTime | Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. Default is 120. |
ConnectionPoolMinConnections | Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0. |
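For illustration, a minimal sketch that creates an ADABAS connection with a few of the options above. The domain, node, and credential values are hypothetical; the ADABAS type code follows the connection types named in the passphrase note earlier in this section:

```
# Hypothetical values throughout; Location must match a NODE entry in dbmover.cfg.
infacmd.sh isp CreateConnection -dn Domain_Example -un Administrator -pd MyPassword \
 -cn adabas_conn -ct ADABAS -cun pwx_user -cpd pwx_password \
 -o "CodePage=ISO-8859-6 Location=node1 ArraySize=25 WriteMode=CONFIRMWRITEON"
```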
Property | Description |
---|---|
Username | User name of the Amazon Redshift account. |
Password | Password for the Amazon Redshift account. |
ClusterNodeType | Node type of the Amazon Redshift cluster. You can select the following options:
For more information about nodes in the cluster, see the Amazon Redshift documentation. |
NumberOfNodesinCluster | Number of nodes in the Amazon Redshift cluster. For more information about nodes in the cluster, see the Amazon Redshift documentation. |
JDBC URL | Amazon Redshift connection URL. |
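A sketch of an Amazon Redshift connection that supplies the properties above. All values are hypothetical and the -ct code is a placeholder; because the JDBC URL property name contains a space, verify the quoting that your shell and infacmd version expect:

```
# Hypothetical cluster values; <Redshift_connection_type> is a placeholder to verify.
infacmd.sh isp CreateConnection -dn Domain_Example -un Administrator -pd MyPassword \
 -cn redshift_conn -ct <Redshift_connection_type> \
 -o "Username=rs_user Password=rs_password ClusterNodeType=ds2.xlarge NumberOfNodesinCluster=2 'JDBC URL=jdbc:redshift://redshift-host:5439/dev'"
```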
Property | Description |
---|---|
FolderPath | The complete path to Amazon S3 objects. The path must include the bucket name and any folder name. Do not use a slash at the end of the folder path. For example, <bucket name>/<my folder name> Note: If you do not specify a region name when you create the Amazon S3 connection using the command line interface, US East (N. Virginia) is used as the default value of the RegionName property. |
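A sketch of an Amazon S3 connection. The bucket and folder names are hypothetical, and the -ct code is a placeholder; because RegionName is omitted here, US East (N. Virginia) applies by default, as noted above:

```
# Hypothetical bucket path; no trailing slash, per the FolderPath rule above.
infacmd.sh isp CreateConnection -dn Domain_Example -un Administrator -pd MyPassword \
 -cn s3_conn -ct <S3_connection_type> \
 -o "FolderPath=my-bucket/my-folder"
```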
Option | Description |
---|---|
userName | DataSift username for the DataSift user account. |
apiKey | API key. The Developer API key is displayed in the Dashboard or Settings page in the DataSift account. |
Option | Description |
---|---|
DatabaseName | Database instance name. |
EnvironmentSQL | Optional. SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database. Note: Enclose special characters in double quotes. |
CodePage | Required. Code page used to read from a source database or write to a target database or file. |
ArraySize | Optional. Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 5000. Default is 25. |
Compression | Optional. Compresses the data to decrease the amount of data to write over the network. Default is false. |
EncryptionLevel | Optional. Level of encryption. If you specify AES for the EncryptionType option, specify one of the following values to indicate the level of AES encryption:
Default is 1. Note: If you specify None for encryption type, the Data Integration Service ignores the encryption level value. |
EncryptionType | Optional. Controls whether to use encryption. Specify one of the following values:
Default is None. |
InterpretAsRows | Optional. If true, the pacing size value represents a number of rows. If false, the pacing size represents kilobytes. Default is false. |
Location | Location of the PowerExchange Listener node that can connect to the database. The location is defined in the first parameter of the NODE statement in the PowerExchange dbmover.cfg configuration file. |
PacingSize | Optional. Amount of data the source system can pass to the PowerExchange Listener. Configure the pacing size if an external application, a database, or the Data Integration Service node is a bottleneck. The lower the value, the faster the performance. Minimum value is 0. Enter 0 for maximum performance. Default is 0. |
RejectFile | Optional. Enter the reject file name and path. Reject files contain rows that were not written to the database. |
WriteMode | Enter one of the following write modes:
Default is CONFIRMWRITEON. |
DatabaseFileOverrides | Specifies the i5/OS database file override. The format is: from_file/to_library/to_file/to_member, where from_file is the file to be overridden, to_library is the new library to use, to_file is the file in the new library to use, and to_member is optional and specifies the member in the new library and file to use.
You can specify up to 8 unique file overrides on a single connection. A single override applies to a single source or target. When you specify more than one file override, enclose the string of file overrides in double quotes and include a space between each file override. |
IsolationLevel | Commit scope of the transaction. Select one of the following values:
Default is CS. |
LibraryList | List of libraries that PowerExchange searches to qualify the table name for Select, Insert, Delete, or Update statements. PowerExchange searches the list if the table name is unqualified. Separate libraries with commas. |
EnableConnectionPool | Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. True or false. Default is true. |
ConnectionPoolSize | Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. |
ConnectionPoolMaxIdleTime | Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. |
ConnectionPoolMinConnections | Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0. |
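A minimal sketch for a DB2 for i5/OS connection, with hypothetical values and the DB2I type code from the passphrase note earlier in this section:

```
# Hypothetical database, node, and library names.
infacmd.sh isp CreateConnection -dn Domain_Example -un Administrator -pd MyPassword \
 -cn db2i_conn -ct DB2I -cun pwx_user -cpd pwx_password \
 -o "DatabaseName=MYDB Location=node1 CodePage=ISO-8859-6 IsolationLevel=CS LibraryList=LIB1,LIB2"
```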
Option | Description |
---|---|
DataAccessConnectString | Connection string used to access data from the database. Enter the connection string in the following format: <database name> |
EnvironmentSQL | Optional. SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database. Note: Enclose special characters in double quotes. |
CodePage | Required. Code page used to read from a source database or write to a target database or file. |
ArraySize | Optional. Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 5000. Default is 25. |
Compression | Optional. Compresses the data to decrease the amount of data to write over the network. Default is false. |
CorrelationID | Optional. Label to apply to a DB2 task or query to allow DB2 for z/OS to account for the resource. Enter up to 8 bytes of alphanumeric characters. |
EncryptionLevel | Optional. Level of encryption. If you specify AES for the EncryptionType option, specify one of the following values to indicate the level of AES encryption:
Default is 1. Note: If you specify None for encryption type, the Data Integration Service ignores the encryption level value. |
EncryptionType | Optional. Controls whether to use encryption. Specify one of the following values:
Default is None. |
InterpretAsRows | Optional. If true, the pacing size value represents a number of rows. If false, the pacing size represents kilobytes. Default is false. |
Location | Location of the PowerExchange Listener node that can connect to the database. The node is defined in the PowerExchange dbmover.cfg configuration file. |
OffloadProcessing | Optional. Moves bulk data processing from the VSAM source to the Data Integration Service machine. Enter one of the following values:
Default is Auto. |
PacingSize | Optional. Amount of data the source system can pass to the PowerExchange Listener. Configure the pacing size if an external application, a database, or the Data Integration Service node is a bottleneck. The lower the value, the faster the performance. Minimum value is 0. Enter 0 for maximum performance. Default is 0. |
RejectFile | Optional. Enter the reject file name and path. Reject files contain rows that were not written to the database. |
WorkerThread | Optional. Number of threads that the Data Integration Service uses to process bulk data when offload processing is enabled. For optimal performance, this value should not exceed the number of available processors on the Data Integration Service machine. Valid values are 1 through 64. Default is 0, which disables multithreading. |
WriteMode | Enter one of the following write modes:
Default is CONFIRMWRITEON. |
EnableConnectionPool | Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. True or false. Default is true. |
ConnectionPoolSize | Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. |
ConnectionPoolMaxIdleTime | Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. |
ConnectionPoolMinConnections | Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0. |
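Similarly, a sketch for a DB2 for z/OS connection, again with hypothetical values and the DB2Z type code from the passphrase note:

```
# Hypothetical values; CorrelationID is limited to 8 bytes of alphanumeric characters.
infacmd.sh isp CreateConnection -dn Domain_Example -un Administrator -pd MyPassword \
 -cn db2z_conn -ct DB2Z -cun pwx_user -cpd pwx_password \
 -o "DataAccessConnectString=MYDB Location=zos_node CodePage=ISO-8859-6 CorrelationID=INFATASK"
```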
Option | Description |
---|---|
ConsumerKey | The App ID that you get when you create the application in Facebook. Facebook uses the key to identify the application. |
ConsumerSecret | The App Secret that you get when you create the application in Facebook. Facebook uses the secret to establish ownership of the consumer key. |
AccessToken | Access token that the OAuth Utility returns. Facebook uses this token instead of the user credentials to access the protected resources. |
AccessSecret | Access secret is not required for a Facebook connection. |
Scope | Permissions for the application. Enter the permissions you used to configure OAuth. |
Option | Description |
---|---|
UserName | Required. User name with permissions to access the Greenplum database. |
Password | Required. Password to connect to the Greenplum database. |
driverName | Required. Name of the Greenplum JDBC driver. For example: com.pivotal.jdbc.GreenplumDriver For more information about the driver, see the Greenplum documentation. |
connectionString | Required. Greenplum JDBC connection URL. For example: jdbc:pivotal:greenplum://<hostname>:<port>;DatabaseName=<database_name> For more information about the connection URL, see the Greenplum documentation. |
hostName | Required. Host name or IP address of the Greenplum server. |
portNumber | Optional. Greenplum server port number. If you enter 0, the gpload utility reads from the environment variable $PGPORT. Default is 5432. |
databaseName | Required. Name of the database that you want to connect to. |
enableSSL | Required. Set this option to true to establish secure communication between the gpload utility and the Greenplum server over SSL. |
SSLCertificatePath | Required if you enable SSL. Path where the SSL certificates for the Greenplum server are stored. |
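A sketch of a Greenplum connection that supplies the required options above. The host, database, and credential values are hypothetical, and the -ct code is a placeholder:

```
# Hypothetical host and database; SSL is disabled, so no SSLCertificatePath is needed.
infacmd.sh isp CreateConnection -dn Domain_Example -un Administrator -pd MyPassword \
 -cn gp_conn -ct <Greenplum_connection_type> \
 -o "UserName=gp_user Password=gp_password driverName=com.pivotal.jdbc.GreenplumDriver connectionString=jdbc:pivotal:greenplum://gp-host:5432;DatabaseName=gpdb hostName=gp-host portNumber=5432 databaseName=gpdb enableSSL=false"
```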
Option | Description |
---|---|
connectionId | String that the Data Integration Service uses to identify the connection. The ID is not case sensitive. It must be 255 characters or less and must be unique in the domain. You cannot change this property after you create the connection. Default value is the connection name. |
connectionType | Required. Type of connection is Hadoop. |
name | The name of the connection. The name is not case sensitive and must be unique within the domain. You can change this property after you create the connection. The name cannot exceed 128 characters, contain spaces, or contain the following special characters: ~ ` ! $ % ^ & * ( ) - + = { [ } ] | \ : ; " ' < , > . ? / |
blazeJobMonitorURL | The host name and port number for the Blaze Job Monitor. Use the following format: <hostname>:<port>, where <hostname> is the name or IP address of the Blaze Job Monitor host and <port> is the port on which the Blaze Job Monitor listens.
For example, enter: myhostname:9080 |
blazeYarnQueueName | The YARN scheduler queue name used by the Blaze engine that specifies available resources on a cluster. The name is case sensitive. |
blazeExecutionParameterList | Custom properties that are unique to the Blaze engine. To enter multiple properties, separate each name-value pair with the following text: &:. Use Informatica custom properties only at the request of Informatica Global Customer Support. |
blazeMaxPort | The maximum value for the port number range for the Blaze engine. Default value is 12600. |
blazeMinPort | The minimum value for the port number range for the Blaze engine. Default value is 12300. |
blazeUserName | The owner of the Blaze service and Blaze service logs. When the Hadoop cluster uses Kerberos authentication, the default user is the Data Integration Service SPN user. When the Hadoop cluster does not use Kerberos authentication and the Blaze user is not configured, the default user is the Data Integration Service user. |
blazeStagingDirectory | The HDFS file path of the directory that the Blaze engine uses to store temporary files. Verify that the directory exists. The YARN user, Blaze engine user, and mapping impersonation user must have write permission on this directory. Default is /blaze/workdir. If you clear this property, the staging files are written to the Hadoop staging directory /tmp/blaze_<user name>. |
clusterConfigId | The cluster configuration ID associated with the Hadoop cluster. You must enter a configuration ID to set up a Hadoop connection. |
hiveStagingDatabaseName | Namespace for Hive staging tables. Use the name default for tables that do not have a specified database name. |
engineType | The engine that the Hadoop environment uses to run a mapping on the Hadoop cluster. You can choose MRv2 or Tez. You can select Tez if it is configured for the Hadoop cluster. Default is MRv2. |
environmentSQL | SQL commands to set the Hadoop environment. The Data Integration Service executes the environment SQL at the beginning of each Hive script generated in a Hive execution plan. The following rules and guidelines apply to the usage of environment SQL:
|
hadoopExecEnvExecutionParameterList | Custom properties that are unique to the Hadoop connection. You can specify multiple properties. Use the following format: <property1>=<value> To specify multiple properties use &: as the property separator. If more than one Hadoop connection is associated with the same cluster configuration, you can override configuration set property values. Use Informatica custom properties only at the request of Informatica Global Customer Support. |
hadoopRejDir | The remote directory where the Data Integration Service moves reject files when you run mappings. Enable the reject directory using rejDirOnHadoop. |
impersonationUserName | Required if the Hadoop cluster uses Kerberos authentication. Hadoop impersonation user. The user name that the Data Integration Service impersonates to run mappings in the Hadoop environment. The Data Integration Service runs mappings based on the user that is configured. Refer to the following order to determine which user the Data Integration Service uses to run mappings:
|
hiveWarehouseDirectoryOnHDFS | Optional. The absolute HDFS file path of the default database for the warehouse that is local to the cluster. If you do not configure the Hive warehouse directory, the Hive engine first tries to write to the directory specified in the cluster configuration property hive.metastore.warehouse.dir. If the cluster configuration does not have the property, the Hive engine writes to the default directory /user/hive/warehouse. |
metastoreDatabaseDriver | Driver class name for the JDBC data store. For example, the following class name specifies a MySQL driver: com.mysql.jdbc.Driver You can get the value for the Metastore Database Driver from hive-site.xml. The Metastore Database Driver appears as the following property in hive-site.xml: <property> <name>javax.jdo.option.ConnectionDriverName</name> <value>com.mysql.jdbc.Driver</value> </property> |
metastoreDatabasePassword | The password for the metastore user name. You can get the value for the Metastore Database Password from hive-site.xml. The Metastore Database Password appears as the following property in hive-site.xml: <property> <name>javax.jdo.option.ConnectionPassword</name> <value>password</value> </property> |
metastoreDatabaseURI | The JDBC connection URI used to access the data store in a local metastore setup. Use the following connection URI: jdbc:<datastore type>://<node name>:<port>/<database name>, where <datastore type> is the type of the data store, <node name> is the host name or IP address of the data store, <port> is the port on which the data store listens for requests, and <database name> is the name of the database.
For example, the following URI specifies a local metastore that uses MySQL as a data store: jdbc:mysql://hostname23:3306/metastore You can get the value for the Metastore Database URI from hive-site.xml. The Metastore Database URI appears as the following property in hive-site.xml: <property> <name>javax.jdo.option.ConnectionURL</name> <value>jdbc:mysql://MYHOST/metastore</value> </property> |
metastoreDatabaseUserName | The metastore database user name. You can get the value for the Metastore Database User Name from hive-site.xml. The Metastore Database User Name appears as the following property in hive-site.xml: <property> <name>javax.jdo.option.ConnectionUserName</name> <value>hiveuser</value> </property> |
metastoreMode | Controls whether to connect to a remote metastore or a local metastore. By default, local is selected. For a local metastore, you must specify the Metastore Database URI, Metastore Database Driver, Username, and Password. For a remote metastore, you must specify only the Remote Metastore URI. You can get the value for the Metastore Execution Mode from hive-site.xml. The Metastore Execution Mode appears as the following property in hive-site.xml: <property> <name>hive.metastore.local</name> <value>true</value> </property> Note: The hive.metastore.local property is deprecated in hive-site.xml for Hive server versions 0.9 and above. If the hive.metastore.local property does not exist but the hive.metastore.uris property exists, and you know that the Hive server has started, you can set the connection to a remote metastore. |
remoteMetastoreURI | The metastore URI used to access metadata in a remote metastore setup. For a remote metastore, you must specify the Thrift server details. Use the following connection URI: thrift://<hostname>:<port>, where <hostname> is the name or IP address of the Thrift metastore server and <port> is the port on which the Thrift server listens.
For example, enter: thrift://myhostname:9083/ You can get the value for the Remote Metastore URI from hive-site.xml. The Remote Metastore URI appears as the following property in hive-site.xml: <property> <name>hive.metastore.uris</name> <value>thrift://<n.n.n.n>:9083</value> <description> IP address or fully-qualified domain name and port of the metastore host</description> </property> |
rejDirOnHadoop | Enables hadoopRejDir. Used to specify a location to move reject files when you run mappings. If enabled, the Data Integration Service moves mapping files to the HDFS location listed in hadoopRejDir. By default, the Data Integration Service stores the mapping files based on the RejectDir system parameter. |
sparkEventLogDir | Optional. The HDFS file path of the directory that the Spark engine uses to log events. |
sparkExecutionParameterList | An optional list of configuration parameters to apply to the Spark engine. You can change the default Spark configuration properties values, such as spark.executor.memory or spark.driver.cores. Use the following format: <property1>=<value> To enter multiple properties, separate each name-value pair with the following text: &: |
sparkStagingDirectory | The HDFS file path of the directory that the Spark engine uses to store temporary files for running jobs. The YARN user, Data Integration Service user, and mapping impersonation user must have write permission on this directory. By default, the temporary files are written to the Hadoop staging directory /tmp/spark_<user name>. |
sparkYarnQueueName | The YARN scheduler queue name used by the Spark engine that specifies available resources on a cluster. The name is case sensitive. |
stgDataCompressionCodecClass | Codec class name that enables data compression and improves performance on temporary staging tables. The codec class name corresponds to the codec type. |
stgDataCompressionCodecType | Hadoop compression library for a compression codec class name. You can choose None, Zlib, Gzip, Snappy, Bz2, LZO, or Custom. Default is None. |
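A sketch of a Hadoop connection. The cluster configuration ID and directory paths are hypothetical; the Hadoop connection type is stated in the table above:

```
# Hypothetical cluster configuration ID and HDFS paths.
infacmd.sh isp CreateConnection -dn Domain_Example -un Administrator -pd MyPassword \
 -cn hadoop_conn -ct Hadoop \
 -o "clusterConfigId=cc_example engineType=Tez blazeStagingDirectory=/blaze/workdir sparkStagingDirectory=/tmp/spark_staging rejDirOnHadoop=true hadoopRejDir=/tmp/hadoop_reject"
```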
Option | Description |
---|---|
DATABASETYPE | Required when you create an HBase connection for a MapR-DB table. Set the value to MapR-DB. Default is HBase. |
clusterConfigId | The cluster configuration ID associated with the Hadoop cluster. You must enter a configuration ID to set up an HBase connection. |
maprdbpath | Required if you create an HBase connection to connect to a MapR-DB table. Set the value to the database path that contains the MapR-DB table that you want to connect to. Enter a valid MapR cluster path. Enclose the value in single quotes. When you create an HBase data object for MapR-DB, you can browse only tables that exist in the path that you specify in this option. You cannot access tables that are available in sub-directories in the specified path. For example, if you specify the maprdbpath as /user/customers/, you can access the tables in the customers directory. However, if the customers directory contains a sub-directory named regions, you cannot access the tables in the following directory: /user/customers/regions |
Option | Description |
---|---|
userName | User name to access HDFS. |
nameNodeURI | The URI to access the storage system. You can find the value for fs.defaultFS in the core-site.xml configuration set of the cluster configuration. |
clusterConfigId | The cluster configuration ID associated with the Hadoop cluster. You must enter a configuration ID to set up an HDFS connection. |
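A sketch of an HDFS connection with hypothetical NameNode and cluster configuration values; the -ct code is a placeholder:

```
# nameNodeURI mirrors the fs.defaultFS value from core-site.xml, as noted above.
infacmd.sh isp CreateConnection -dn Domain_Example -un Administrator -pd MyPassword \
 -cn hdfs_conn -ct <HDFS_connection_type> \
 -o "userName=hdfs_user nameNodeURI=hdfs://namenode-host:8020 clusterConfigId=cc_example"
```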
Option | Description |
---|---|
connectionType | Required. Type of connection is HIVE. |
name | The name of the connection. The name is not case sensitive and must be unique within the domain. You can change this property after you create the connection. The name cannot exceed 128 characters, contain spaces, or contain the following special characters: ~ ` ! $ % ^ & * ( ) - + = { [ } ] | \ : ; " ' < , > . ? / |
relationalSourceAndTarget | Hive connection mode. Set this option to true if you want to use the connection to access the Hive data warehouse. If you want to access a Hive target, you need to enable the same connection or another Hive connection to run the mapping in the Hadoop cluster. If you enable relational source and target, you must provide the metadataConnString option. |
pushDownMode | Hive connection mode. Set this option to true if you want to use the connection to run mappings in the Hadoop cluster. If you enable the connection for pushdown mode, you must provide the options to run the Informatica mappings in the Hadoop cluster. |
environmentSQL | SQL commands to set the Hadoop environment. In the native environment, the Data Integration Service executes the environment SQL each time it creates a connection to the Hive metastore. If the Hive connection is used to run mappings in the Hadoop cluster, the Data Integration Service executes the environment SQL at the beginning of each Hive session. The following rules and guidelines apply to the usage of environment SQL in both connection modes:
If the Hive connection is used to run mappings in the Hadoop cluster, only the environment SQL of the Hive connection is executed. The different environment SQL commands for the connections of the Hive source or target are not executed, even if the Hive sources and targets are on different clusters. |
quoteChar | The type of character used to identify special characters and reserved SQL keywords, such as WHERE. The Data Integration Service places the selected character around special characters and reserved SQL keywords. The Data Integration Service also uses this character for the Support mixed-case identifiers property. |
clusterConfigId | The cluster configuration ID associated with the Hadoop cluster. You must enter a configuration ID to set up a Hive connection. |
Property | Description |
---|---|
hiveJdbcDriverClassName | Name of the JDBC driver class. |
metadataConnString | The JDBC connection URI used to access the metadata from the Hadoop server. The connection string uses the following format: jdbc:hive://<hostname>:<port>/<db>, where <hostname> is the name or IP address of the machine on which HiveServer runs, <port> is the port on which HiveServer listens, and <db> is the database to which you want to connect.
To connect to HiveServer 2, use the connection string format that Apache Hive implements for that specific Hadoop Distribution. For more information about Apache Hive connection string formats, see the Apache Hive documentation. If the Hadoop cluster uses SSL or TLS authentication, you must add ssl=true to the JDBC connection URI. For example: jdbc:hive2://<hostname>:<port>/<db>;ssl=true If you use self-signed certificate for SSL or TLS authentication, ensure that the certificate file is available on the client machine and the Data Integration Service machine. For more information, see the Informatica Big Data Management Cluster Integration Guide. |
bypassHiveJDBCServer | JDBC driver mode. Enable this option to use the embedded JDBC driver (embedded mode). To use the JDBC embedded mode, perform the following tasks:
If you choose the non-embedded mode, you must configure the Data Access Connection String. The JDBC embedded mode is preferred to the non-embedded mode. |
sqlAuthorized | When you select the option to observe fine-grained SQL authentication in a Hive source, the mapping observes row and column-level restrictions on data access. If you do not select the option, the Blaze run-time engine ignores the restrictions, and results include restricted data. Applicable to Hadoop clusters where Sentry or Ranger security modes are enabled. |
connectString | The connection string used to access data from the Hadoop data store. The non-embedded JDBC mode connection string must be in the following format: jdbc:hive://<hostname>:<port>/<db>, where <hostname> is the name or IP address of the machine on which HiveServer runs, <port> is the port on which HiveServer listens, and <db> is the database to which you want to connect.
To connect to HiveServer 2, use the connection string format that Apache Hive implements for that specific Hadoop Distribution. For more information about Apache Hive connection string formats, see the Apache Hive documentation. If the Hadoop cluster uses SSL or TLS authentication, you must add ssl=true to the JDBC connection URI. For example: jdbc:hive2://<hostname>:<port>/<db>;ssl=true If you use self-signed certificate for SSL or TLS authentication, ensure that the certificate file is available on the client machine and the Data Integration Service machine. For more information, see the Informatica Big Data Management Cluster Integration Guide. |
Property | Description |
---|---|
databaseName | Namespace for tables. Use the name default for tables that do not have a specified database name. |
customProperties | Configures or overrides Hive or Hadoop cluster properties in the hive-site.xml configuration set on the machine on which the Data Integration Service runs. You can specify multiple properties. Select Edit to specify the name and value for the property. The property appears in the following format: <property1>=<value> When you specify multiple properties, &: appears as the property separator. The maximum length for the format is 1 MB. If you enter a required property for a Hive connection, it overrides the property that you configure in the Advanced Hive/Hadoop Properties. The Data Integration Service adds or sets these properties for each map-reduce job. You can verify these properties in the JobConf of each mapper and reducer job. Access the JobConf of each job from the Jobtracker URL under each map-reduce job. The Data Integration Service writes messages for these properties to the Data Integration Service logs. The Data Integration Service must have the log tracing level set to log each row or have the log tracing level set to verbose initialization tracing. For example, specify the following properties to control and limit the number of reducers to run a mapping job: mapred.reduce.tasks=2&:hive.exec.reducers.max=10 |
stgDataCompressionCodecClass | Codec class name that enables data compression and improves performance on temporary staging tables. The codec class name corresponds to the codec type. |
stgDataCompressionCodecType | Hadoop compression library for a compression codec class name. You can choose None, Zlib, Gzip, Snappy, Bz2, LZO, or Custom. Default is None. |
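A sketch of a Hive connection enabled for relational access. The host, port, and cluster configuration ID are hypothetical; the HIVE type code is stated in the table above, and the jdbc:hive2 string follows the HiveServer 2 guidance given for metadataConnString:

```
# Hypothetical HiveServer 2 host and cluster configuration ID.
infacmd.sh isp CreateConnection -dn Domain_Example -un Administrator -pd MyPassword \
 -cn hive_conn -ct HIVE \
 -o "relationalSourceAndTarget=true pushDownMode=false clusterConfigId=cc_example metadataConnString=jdbc:hive2://hive-host:10000/default databaseName=default"
```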
Option | Description |
---|---|
PassThruEnabled | Optional. Enables pass-through security for the connection. When you enable pass-through security for a connection, the domain uses the client user name and password to log into the corresponding database, instead of the credentials defined in the connection object. |
MetadataAccessConnectString | Required. JDBC connection URL used to access metadata from the database. jdbc:informatica:db2://<host name>:<port>;DatabaseName=<database name> When you import a table from the Developer tool or Analyst tool, by default, all tables are displayed under the default schema name. To view tables under a specific schema instead of the default schema, you can specify the schema name from which you want to import the table. Include the ischemaname parameter in the URL to specify the schema name. For example, use the following syntax to import a table from a specific schema: jdbc:informatica:db2://<host name>:<port>;DatabaseName=<database name>;ischemaname=<schema_name> To search for a table in multiple schemas and import it, you can specify multiple schema names in the ischemaname parameter. The schema name is case sensitive. You cannot use special characters when you specify multiple schema names. Use the pipe (|) character to separate multiple schema names. For example, use the following syntax to search for a table in three schemas and import it: jdbc:informatica:db2://<host name>:<port>;DatabaseName=<database name>;ischemaname=<schema_name1>|<schema_name2>|<schema_name3> |
AdvancedJDBCSecurityOptions | Optional. Database parameters for metadata access to a secure database. Informatica treats the value of the AdvancedJDBCSecurityOptions field as sensitive data and encrypts the parameter string. To connect to a secure database, include the following parameters:
If this parameter is set to True, Informatica validates the certificate that is sent by the database server. If you specify the HostNameInCertificate parameter, Informatica also validates the host name in the certificate. If this parameter is set to false, Informatica does not validate the certificate that is sent by the database server. Informatica ignores any truststore information that you specify. Note: For a complete list of the secure JDBC parameters, see the DataDirect JDBC documentation. Informatica appends the secure JDBC parameters to the connection string. If you include the secure JDBC parameters directly in the connection string, do not enter any parameters in the AdvancedJDBCSecurityOptions field. |
DataAccessConnectString | Connection string used to access data from the database. Enter the connection string in the following format: <database name> |
CodePage | Required. Code page used to read from a source database or write to a target database. |
EnvironmentSQL | Optional. SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database. For example, ALTER SESSION SET CURRENT_SCHEMA=INFA_USR; Note: Enclose special characters in double quotes. |
TransactionSQL | Optional. SQL commands to execute before each transaction. The Data Integration Service executes the transaction SQL at the beginning of each transaction. For example, SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; Note: Enclose special characters in double quotes. |
Tablespace | Optional. The tablespace name of the database. |
QuoteChar | Optional. The type of character used to identify special characters and reserved SQL keywords, such as WHERE. The Data Integration Service places the selected character around special characters and reserved SQL keywords. The Data Integration Service also uses this character for the Support mixed-case identifiers property. Default is 0. |
EnableQuotes | Optional. Select whether to use quoted identifiers for this connection. When enabled, the Data Integration Service places identifier characters around table, view, schema, synonym, and column names when generating and executing SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Valid values are True or False. Default is True. |
EnableConnectionPool | Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. Valid values are True or False. Default is True. |
ConnectionPoolSize | Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15. |
ConnectionPoolMaxIdleTime | Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. Default is 120. |
ConnectionPoolMinConnections | Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0. |
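A sketch of an IBM DB2 connection that pairs the metadata and data access strings described above. All values are hypothetical, and the DB2 type code is an assumption to verify with the infacmd isp ListConnections command:

```
# Hypothetical host, port, and database; DataAccessConnectString is the database name.
infacmd.sh isp CreateConnection -dn Domain_Example -un Administrator -pd MyPassword \
 -cn db2_conn -ct DB2 -cun db2_user -cpd db2_password \
 -o "MetadataAccessConnectString=jdbc:informatica:db2://db2-host:50000;DatabaseName=MYDB DataAccessConnectString=MYDB CodePage=ISO-8859-6 Tablespace=TS01"
```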
Option | Description |
---|---|
CodePage | Required. Code page used to read from or write to the database. Use the ISO code page name, such as ISO-8859-6. The code page name is not case sensitive. |
ArraySize | Optional. Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 5000. Default is 25. |
Compression | Optional. Compresses the data to decrease the amount of data Informatica applications write over the network. True or false. Default is false. |
EncryptionLevel | Optional. Level of encryption. If you specify AES for the EncryptionType option, specify one of the following values to indicate the level of AES encryption:
Default is 1. Note: If you specify None for encryption type, the Data Integration Service ignores the encryption level value. |
EncryptionType | Optional. Controls whether to use encryption. Specify one of the following values:
Default is None. |
InterpretAsRows | Optional. If true, the pacing size value represents a number of rows. If false, the pacing size represents kilobytes. Default is false. |
Location | Location of the PowerExchange Listener node that can connect to the database. The location is defined in the first parameter of the NODE statement in the PowerExchange dbmover.cfg configuration file. |
OffLoadProcessing | Optional. Moves bulk data processing from the source machine to the Data Integration Service machine. Enter one of the following values:
Default is Auto. |
PacingSize | Optional. Slows the data transfer rate in order to reduce bottlenecks. The lower the value, the greater the session performance. Minimum value is 0. Enter 0 for optimal performance. Default is 0. |
WorkerThread | Optional. Number of threads that the Data Integration Service uses to process bulk data when offload processing is enabled. For optimal performance, this value should not exceed the number of available processors on the Data Integration Service machine. Valid values are 1 through 64. Default is 0, which disables multithreading. |
WriteMode | Enter one of the following write modes:
Default is CONFIRMWRITEON. |
EnableConnectionPool | Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. True or false. Default is false. |
ConnectionPoolSize | Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15. |
ConnectionPoolMaxIdleTime | Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. Default is 120. |
ConnectionPoolMinConnections | Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0. |
Option | Description |
---|---|
JDBCDriverClassName | The Java class that you use to connect to the database. The following list provides the driver class name that you can enter for the applicable database type:
Oracle: com.informatica.jdbc.oracle.OracleDriver
IBM DB2: com.informatica.jdbc.db2.DB2Driver
Microsoft SQL Server: com.informatica.jdbc.sqlserver.SQLServerDriver
Sybase ASE: com.informatica.jdbc.sybase.SybaseDriver
Informix: com.informatica.jdbc.informix.InformixDriver
MySQL: com.informatica.jdbc.mysql.MySQLDriver
For more information about which driver class to use with specific databases, see the vendor documentation. |
MetadataConnString | The URL that you use to connect to the database. The following list provides the connection string that you can enter for the applicable database type:
Oracle: jdbc:informatica:oracle://<hostname>:<port>;SID=<sid>
IBM DB2: jdbc:informatica:db2://<hostname>:<port>;DatabaseName=<database name>
Microsoft SQL Server: jdbc:informatica:sqlserver://<host>:<port>;DatabaseName=<database name>
Sybase ASE: jdbc:informatica:sybase://<host>:<port>;DatabaseName=<database name>
Informix: jdbc:informatica:informix://<host>:<port>;informixServer=<informix server name>;databaseName=<dbName>
MySQL: jdbc:informatica:mysql://<host>:<port>;DatabaseName=<database name>
For more information about the connection string to use for specific databases, see the vendor documentation for the URL syntax. |
EnvironmentSQL | Optional. SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database. For example, ALTER SESSION SET CURRENT_SCHEMA=INFA_USR; Note: Enclose special characters in double quotation marks. |
TransactionSQL | Optional. SQL commands to execute before each transaction. The Data Integration Service executes the transaction SQL at the beginning of each transaction. For example, SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; Note: Enclose special characters in double quotes. |
QuoteChar | Optional. The type of character used to identify special characters and reserved SQL keywords, such as WHERE. The Data Integration Service places the selected character around special characters and reserved SQL keywords. The Data Integration Service also uses this character for the Support mixed-case identifiers property. Default is DOUBLE_QUOTE. |
EnableQuotes | Optional. Select whether to use quoted identifiers for this connection. When enabled, the Data Integration Service places identifier characters around table, view, schema, synonym, and column names when generating and executing SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Valid values are True or False. Default is True. |
hadoopConnector | Required if you want to enable Sqoop connectivity for the data object that uses the JDBC connection. The Data Integration Service runs the mapping in the Hadoop run-time environment through Sqoop. You can configure Sqoop connectivity for relational data objects, customized data objects, and logical data objects that are based on a JDBC-compliant database. Set the value to SQOOP_146 to enable Sqoop connectivity. |
hadoopConnectorArgs | Optional. Enter the arguments that Sqoop must use to connect to the database. Enclose the Sqoop arguments within single quotes. Separate multiple arguments with a space. For example, hadoopConnectorArgs='--<Sqoop argument 1> --<Sqoop argument 2>' To read data from or write data to Teradata through Teradata Connector for Hadoop (TDCH) specialized connectors for Sqoop, define the TDCH connection factory class in the hadoopConnectorArgs argument. The connection factory class varies based on the TDCH Sqoop Connector that you want to use.
Cloudera Connector Powered by Teradata: hadoopConnectorArgs='-Dsqoop.connection.factories=com.cloudera.connector.teradata.TeradataManagerFactory'
Hortonworks Connector for Teradata: hadoopConnectorArgs='-Dsqoop.connection.factories=org.apache.sqoop.teradata.TeradataManagerFactory'
If you do not enter Sqoop arguments, the Data Integration Service constructs the Sqoop command based on the JDBC connection properties. |
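A sketch of a JDBC connection with Sqoop connectivity enabled, using the Oracle driver class and URL format from the lists above. The host, SID, credentials, and Sqoop arguments are hypothetical; the JDBC type code is an assumption to verify:

```
# Hypothetical Oracle host and SID; SQOOP_146 enables Sqoop, as described above.
infacmd.sh isp CreateConnection -dn Domain_Example -un Administrator -pd MyPassword \
 -cn jdbc_conn -ct JDBC -cun jdbc_user -cpd jdbc_password \
 -o "JDBCDriverClassName=com.informatica.jdbc.oracle.OracleDriver MetadataConnString=jdbc:informatica:oracle://ora-host:1521;SID=orcl hadoopConnector=SQOOP_146 hadoopConnectorArgs='--num-mappers 4'"
```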
Property | Description |
---|---|
userName | JD Edwards EnterpriseOne user name. |
password | Password for the JD Edwards EnterpriseOne user name. The password is case sensitive. |
enterpriseServer | The host name of the JD Edwards EnterpriseOne server that you want to access. |
enterprisePort | The port number to access the JD Edwards EnterpriseOne server. |
environment | Name of the JD Edwards EnterpriseOne environment you want to connect to. |
role | Role of the JD Edwards EnterpriseOne user. |
Property | Description |
---|---|
hostName | The host name of the LDAP directory server that you want to access. |
port | The port number to access the LDAP directory server. |
userName | LDAP user name. |
password | Password for the LDAP user name. The password is case sensitive. |
Option | Description |
---|---|
ConsumerKey | The API key that you get when you create the application in LinkedIn. LinkedIn uses the key to identify the application. |
ConsumerSecret | The Secret key that you get when you create the application in LinkedIn. LinkedIn uses the secret to establish ownership of the consumer key. |
AccessToken | Access token that the OAuth Utility returns. The LinkedIn application uses this token instead of the user credentials to access the protected resources. |
AccessSecret | Access secret that the OAuth Utility returns. The secret establishes ownership of a token. |
Option | Description |
---|---|
DATABASETYPE | Required. Set the value to MapR-DB and enclose the value in single quotes. |
clusterConfigId | The cluster configuration ID associated with the Hadoop cluster. You must enter a configuration ID to set up an HBase connection for MapR-DB. |
maprdbpath | Required. Set the value to the database path that contains the MapR-DB table that you want to connect to. Enter a valid MapR cluster path. Enclose the value in single quotes. When you create an HBase data object for MapR-DB, you can browse only tables that exist in the path that you specify in this option. You cannot access tables that are available in sub-directories in the specified path. For example, if you specify the maprdbpath as /user/customers/, you can access the tables in the customers directory. However, if the customers directory contains a sub-directory named regions, you cannot access the tables in the following directory: /user/customers/regions |
Option | Description |
---|---|
ACCOUNTKEY | Microsoft Azure Storage access key. |
ACCOUNTNAME | Name of the Microsoft Azure Blob Storage account. |
CONTAINERNAME | Microsoft Azure Blob Storage container name. |
Option | Description |
---|---|
ADLSACCOUNTNAME | The name of the Microsoft Azure Data Lake Store. |
CLIENTID | The ID of your application to complete the OAuth Authentication in the Active Directory. For more information on creating a client ID, see https://docs.microsoft.com/en-us/azure/data-lake-store/data-lake-store-authenticate-using-active-directory. |
CLIENTSECRET | The client secret key to complete the OAuth Authentication in the Active Directory. |
DIRECTORY | The Microsoft Azure Data Lake Store directory that you use to read data or write data. Default is the root directory. |
AUTHENDPOINT | The OAuth 2.0 token endpoint where authentication based on the Client ID and Client Secret is completed. |
Option | Description |
---|---|
JDBCURL | Microsoft Azure SQL Data Warehouse JDBC connection string. For example, you can enter the following connection string: jdbc:sqlserver://<Server>.database.windows.net:1433;database=<Database> |
JDBCUSERNAME | User name to connect to the Microsoft Azure SQL Data Warehouse account. |
JDBCPASSWORD | Password to connect to the Microsoft Azure SQL Data Warehouse account. |
SCHEMANAME | Name of the schema in Microsoft Azure SQL Data Warehouse. |
BLOBACCOUNTNAME | Name of the Microsoft Azure Storage account to stage the files. |
BLOBACCOUNTKEY | Microsoft Azure Storage access key to stage the files. |
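A sketch of a Microsoft Azure SQL Data Warehouse connection using the JDBC URL format shown above. The server, schema, and storage values are hypothetical, and the -ct code is a placeholder:

```
# Hypothetical server and staging storage account; replace <storage_access_key>.
infacmd.sh isp CreateConnection -dn Domain_Example -un Administrator -pd MyPassword \
 -cn azuredw_conn -ct <AzureDW_connection_type> \
 -o "JDBCURL=jdbc:sqlserver://myserver.database.windows.net:1433;database=mydb JDBCUSERNAME=dw_user JDBCPASSWORD=dw_password SCHEMANAME=dbo BLOBACCOUNTNAME=mystorageaccount BLOBACCOUNTKEY=<storage_access_key>"
```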
Option | Description |
---|---|
UseTrustedConnection | Optional. The Integration Service uses Windows authentication to access the Microsoft SQL Server database. The user name that starts the Integration Service must be a valid Windows user with access to the Microsoft SQL Server database. True or false. Default is false. |
PassThruEnabled | Optional. Enables pass-through security for the connection. When you enable pass-through security for a connection, the domain uses the client user name and password to log into the corresponding database, instead of the credentials defined in the connection object. |
MetadataAccessConnectString | JDBC connection URL to access metadata from the database. Use the following connection URL: jdbc:informatica:sqlserver://<host name>:<port>;DatabaseName=<database name> To test the connection with NTLM authentication, include the following parameters in the connection string:
Note: UNIX supports NTLMv1 and NTLMv2 but not NTLM. The following example shows the connection string for a Microsoft SQL Server database that uses NTLMv2 authentication in an NT domain named Informatica.com: jdbc:informatica:sqlserver://host01:1433;DatabaseName=SQL1;AuthenticationMethod=ntlm2java;Domain=Informatica.com If you connect with NTLM authentication, you can enable the Use trusted connection option in the MS SQL Server connection properties. If you connect with NTLMv1 or NTLMv2 authentication, you must provide the user name and password in the connection properties. |
AdvancedJDBCSecurityOptions | Optional. Database parameters for metadata access to a secure database. Informatica treats the value of the AdvancedJDBCSecurityOptions field as sensitive data and encrypts the parameter string. To connect to a secure database, include the following parameters:
If this parameter is set to True, Informatica validates the certificate that is sent by the database server. If you specify the HostNameInCertificate parameter, Informatica also validates the host name in the certificate. If this parameter is set to false, Informatica does not validate the certificate that is sent by the database server. Informatica ignores any truststore information that you specify. Note: For a complete list of the secure JDBC parameters, see the DataDirect JDBC documentation. Informatica appends the secure JDBC parameters to the connection string. If you include the secure JDBC parameters directly in the connection string, do not enter any parameters in the AdvancedJDBCSecurityOptions field. |
DataAccessConnectString | Required. Connection string used to access data from the database. Enter the connection string in the following format: <server name>@<database name> |
DomainName | Optional. The name of the domain where Microsoft SQL Server is running. |
PacketSize | Optional. Increase the network packet size to allow larger packets of data to cross the network at one time. |
CodePage | Required. Code page used to read from or write to the database. Use the ISO code page name, such as ISO-8859-6. The code page name is not case sensitive. |
UseDSN | Required. Determines whether the Data Integration Service must use the Data Source Name for the connection. If you set the option value to true, the Data Integration Service retrieves the database name and server name from the DSN. If you set the option value to false, you must enter the database name and server name. |
ProviderType | Required. The connection provider that you want to use to connect to the Microsoft SQL Server database. You can define one of the following values:
|
OwnerName | Optional. The table owner name. |
SchemaName | Optional. The name of the schema in the database. You must specify the schema name for the Profiling Warehouse if the schema name is different from the database user name. You must specify the schema name for the data object cache database if the schema name is different from the database user name and if you configure user-managed cache tables. |
EnvironmentSQL | Optional. SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database. For example, ALTER SESSION SET CURRENT_SCHEMA=INFA_USR; Note: Enclose special characters in double quotes. |
TransactionSQL | Optional. SQL commands to execute before each transaction. The Data Integration Service executes the transaction SQL at the beginning of each transaction. For example, SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; Note: Enclose special characters in double quotes. |
QuoteChar | Optional. The type of character used to identify special characters and reserved SQL keywords, such as WHERE. The Data Integration Service places the selected character around special characters and reserved SQL keywords. When the EnableQuotes option is enabled, the Data Integration Service also uses this character as the identifier character. Default is 0. |
EnableQuotes | Optional. Specifies whether to quote identifiers for this connection. When enabled, the Data Integration Service places identifier characters around table, view, schema, synonym, and column names when it generates and executes SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Valid values are True or False. Default is True. |
EnableConnectionPool | Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. Valid values are True or False. Default is True. |
ConnectionPoolSize | Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15. |
ConnectionPoolMaxIdleTime | Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. Default is 120. |
ConnectionPoolMinConnections | Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0. |
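If you create the connection from the command line, the options in the table above are supplied through the -o option. The following sketch is illustrative only, assuming that infacmd isp CreateConnection accepts the options as space-separated name=value pairs; the MSSQLSERVER connection type identifier, host names, and credentials are placeholder values. Use the infacmd isp ListConnections command to view the valid connection types.

```
# Hypothetical sketch: create a Microsoft SQL Server connection with NTLMv2
# metadata access. All host names, credentials, and the MSSQLSERVER type
# identifier are placeholders.
infacmd.sh isp CreateConnection -dn MyDomain -un Administrator -pd Adminpass \
  -cn SQL1_conn -cid SQL1_conn -ct MSSQLSERVER \
  -o "ConnectionUserName=sqluser ConnectionPassword=sqlpass \
      MetadataAccessConnectString=jdbc:informatica:sqlserver://host01:1433;DatabaseName=SQL1;AuthenticationMethod=ntlm2java;Domain=Informatica.com \
      DataAccessConnectString=host01@SQL1 CodePage=ISO-8859-6 UseDSN=false"
```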
Option | Description |
---|---|
AuthenticationType | Required. Authentication type for the connection. Provide one of the following authentication types: Passport (Microsoft Live), Active Directory, or Claims-based. |
DiscoveryServiceURL | Required. URL of the Microsoft Dynamics CRM service. Use the following format: <http/https>://<Application server name>:<port>/XRMService/2011/Discovery.svc To find the Discovery Service URL, log in to the Microsoft Live instance and click Settings > Customization > Developer Resources. |
Domain | Required. Domain to which the user belongs. You must provide the complete domain name. For example, msd.sampledomain.com. Configure the domain for Active Directory and claims-based authentication. Note: If you select the Passport authentication type, you must provide a dummy value for Domain. |
ConfigFilesForMetadata | Configuration directory for the client. Default directory is: <INFA_HOME>/clients/DeveloperClient/msdcrm/conf |
OrganizationName | Required. Microsoft Dynamics CRM organization name. Organization names are case sensitive. For Microsoft Live authentication, use the Microsoft Live Organization Unique Name. To find the Organization Unique Name, log in to the Microsoft Live instance and click Settings > Customization > Developer Resources. |
Password | Required. Password to authenticate the user. |
ConfigFilesForData | Configuration directory for the server. If the server file is located in a different directory, specify the directory path. |
SecurityTokenService | Required. Microsoft Dynamics CRM security token service URL. For example, https://sts1.<company>.com. Configure for claims-based authentication. Note: If you select the Passport or Active Directory authentication type, you must provide a dummy value for SecurityTokenService. |
Username | Required. User ID registered with Microsoft Dynamics CRM. |
UseMetadataConfigForDataAccess | Select this option if the configuration file and server file are in the same directory. If the server file is in a different directory, clear this option and specify the directory path in the Data Access field. Provide one of the following values: True or False. |
KeyStoreFileName | Contains the keys and certificates required for secure communication. If you want to use the Java cacerts file, clear this field. |
KeyStorePassword | Password for the infa_keystore.jks file. If you want to use the Java cacerts file, clear this field. |
TrustStoreFileName | Set the INFA_TRUSTSTORE environment variable to the directory that contains the truststore file infa_truststore.jks. If the file is not available at the specified path, the Data Integration Service checks for the certificate in the Java cacerts file. If you want to use the Java cacerts file, clear this field. |
TrustStorePassword | Password for the infa_truststore.jks file. If you want to use the Java cacerts file, clear this field. |
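As in the earlier sketch, a Microsoft Dynamics CRM connection might be created as follows; the MSDYNAMICSCRM type identifier and all URLs, names, and credentials are placeholder values.

```
# Hypothetical sketch: Microsoft Dynamics CRM connection with claims-based
# authentication. All values below are placeholders.
infacmd.sh isp CreateConnection -dn MyDomain -un Administrator -pd Adminpass \
  -cn CRM_conn -cid CRM_conn -ct MSDYNAMICSCRM \
  -o "AuthenticationType=Claims-based Username=crmuser Password=crmpass \
      DiscoveryServiceURL=https://crmhost:443/XRMService/2011/Discovery.svc \
      Domain=msd.sampledomain.com OrganizationName=MyOrg \
      SecurityTokenService=https://sts1.mycompany.com"
```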
Option | Description |
---|---|
connectionString | Required. Name of the ODBC data source that you create to connect to the Netezza database. |
jdbcUrl | Required. JDBC URL that the Developer tool must use when it connects to the Netezza database. Use the following format: jdbc:netezza://<hostname>:<port>/<database name> |
username | Required. User name with the appropriate permissions to access the Netezza database. |
password | Required. Password for the database user name. |
timeout | Required. Number of seconds that the Developer tool waits for a response from the Netezza database before it closes the connection. |
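A comparable sketch for Netezza pairs the ODBC data source name for data access with the JDBC URL for metadata; the NETEZZA type identifier, DSN, port, and credentials are placeholders.

```
# Hypothetical sketch: Netezza connection. NZ_DSN is an ODBC data source that
# must already exist on the Data Integration Service machine.
infacmd.sh isp CreateConnection -dn MyDomain -un Administrator -pd Adminpass \
  -cn NZ_conn -cid NZ_conn -ct NETEZZA \
  -o "connectionString=NZ_DSN jdbcUrl=jdbc:netezza://nzhost:5480/nzdb \
      username=nzuser password=nzpass timeout=60"
```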
Property | Description |
---|---|
URL | Required. OData service root URL that exposes the data that you want to read. |
securityType | Optional. Security protocol that the Developer tool must use to establish a secure connection with the OData server. |
trustStoreFileName | Required if you enter a security type. Name of the truststore file that contains the public certificate for the OData server. |
trustStorePassword | Required if you enter a security type. Password for the truststore file that contains the public certificate for the OData server. |
keyStoreFileName | Required if you enter a security type. Name of the keystore file that contains the private key for the OData server. |
keyStorePassword | Required if you enter a security type. Password for the keystore file that contains the private key for the OData server. |
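A sketch along the same lines for OData follows. Because the list of valid securityType values did not survive in the table above, the TLS value shown here is an assumption, as are the ODATA type identifier and the store file names.

```
# Hypothetical sketch: OData connection over a secured endpoint.
# securityType=TLS is an assumed value; check the valid values for your release.
infacmd.sh isp CreateConnection -dn MyDomain -un Administrator -pd Adminpass \
  -cn OData_conn -cid OData_conn -ct ODATA \
  -o "URL=https://services.example.com/odata/ securityType=TLS \
      trustStoreFileName=infa_truststore.jks trustStorePassword=storepass \
      keyStoreFileName=infa_keystore.jks keyStorePassword=storepass"
```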
Option | Description |
---|---|
PassThruEnabled | Optional. Enables pass-through security for the connection. When you enable pass-through security for a connection, the domain uses the client user name and password to log into the corresponding database, instead of the credentials defined in the connection object. |
DataAccessConnectString | Connection string used to access data from the database. Enter the connection string in the following format: <database name> |
CodePage | Required. Code page used to read from a source database or write to a target database or file. |
EnvironmentSQL | Optional. SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database. For example, ALTER SESSION SET CURRENT_SCHEMA=INFA_USR; Note: Enclose special characters in double quotes. |
TransactionSQL | Optional. SQL commands to execute before each transaction. The Data Integration Service executes the transaction SQL at the beginning of each transaction. For example, SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; Note: Enclose special characters in double quotes. |
QuoteChar | Optional. The type of character used to identify special characters and reserved SQL keywords, such as WHERE. The Data Integration Service places the selected character around special characters and reserved SQL keywords. When the EnableQuotes option is enabled, the Data Integration Service also uses this character as the identifier character. Default is 4. |
ODBC Provider | Optional. The type of database to which the Data Integration Service connects using ODBC. For pushdown optimization, specify the database type to enable the Data Integration Service to generate native database SQL. Default is Other. |
EnableQuotes | Optional. Specifies whether to quote identifiers for this connection. When enabled, the Data Integration Service places identifier characters around table, view, schema, synonym, and column names when it generates and executes SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Valid values are True or False. Default is False. |
EnableConnectionPool | Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. Valid values are True or False. Default is True. |
ConnectionPoolSize | Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15. |
ConnectionPoolMaxIdleTime | Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. Default is 120. |
ConnectionPoolMinConnections | Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0. |
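An ODBC sketch follows; the ODBC type identifier and DSN are placeholders. The ODBC Provider option name contains a space, so setting it from the command line may need shell-specific quoting and is omitted here.

```
# Hypothetical sketch: ODBC connection with pass-through security enabled.
# MyDSN must already exist as an ODBC data source on the service machine.
infacmd.sh isp CreateConnection -dn MyDomain -un Administrator -pd Adminpass \
  -cn ODBC_conn -cid ODBC_conn -ct ODBC \
  -o "ConnectionUserName=dbuser ConnectionPassword=dbpass \
      DataAccessConnectString=MyDSN CodePage=ISO-8859-6 PassThruEnabled=true"
```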
Option | Description |
---|---|
PassThruEnabled | Optional. Enables pass-through security for the connection. When you enable pass-through security for a connection, the domain uses the client user name and password to log into the corresponding database, instead of the credentials defined in the connection object. |
MetadataAccessConnectString | JDBC connection URL used to access metadata from the database. jdbc:informatica:oracle://<host_name>:<port>;SID=<database name> |
AdvancedJDBCSecurityOptions | Optional. Database parameters for metadata access to a secure database. Informatica treats the value of the AdvancedJDBCSecurityOptions field as sensitive data and encrypts the parameter string. To connect to a secure database, include secure JDBC parameters such as ValidateServerCertificate and HostNameInCertificate in this field. If ValidateServerCertificate is set to true, Informatica validates the certificate that is sent by the database server. If you also specify the HostNameInCertificate parameter, Informatica validates the host name in the certificate. If ValidateServerCertificate is set to false, Informatica does not validate the certificate that is sent by the database server and ignores any truststore information that you specify. Note: For a complete list of the secure JDBC parameters, see the DataDirect JDBC documentation. Informatica appends the secure JDBC parameters to the connection string. If you include the secure JDBC parameters directly in the connection string, do not enter any parameters in the AdvancedJDBCSecurityOptions field. |
DataAccessConnectString | Connection string used to access data from the database. Enter the connection string in the following format from the TNSNAMES entry: <database name> |
CodePage | Required. Code page used to read from a source database or write to a target database or file. |
EnvironmentSQL | Optional. SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database. For example, ALTER SESSION SET CURRENT_SCHEMA=INFA_USR; Note: Enclose special characters in double quotes. |
TransactionSQL | Optional. SQL commands to execute before each transaction. The Data Integration Service executes the transaction SQL at the beginning of each transaction. For example, SET TRANSACTION ISOLATION LEVEL SERIALIZABLE; Note: Enclose special characters in double quotes. |
EnableParallelMode | Optional. Enables parallel processing when loading data into a table in bulk mode. Used for Oracle. True or false. Default is false. |
QuoteChar | Optional. The type of character used to identify special characters and reserved SQL keywords, such as WHERE. The Data Integration Service places the selected character around special characters and reserved SQL keywords. When the EnableQuotes option is enabled, the Data Integration Service also uses this character as the identifier character. Default is 0. |
EnableQuotes | Optional. Specifies whether to quote identifiers for this connection. When enabled, the Data Integration Service places identifier characters around table, view, schema, synonym, and column names when it generates and executes SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Valid values are True or False. Default is True. |
EnableConnectionPool | Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. Valid values are True or False. Default is True. |
ConnectionPoolSize | Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15. |
ConnectionPoolMaxIdleTime | Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. Default is 120. |
ConnectionPoolMinConnections | Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0. |
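For Oracle, the sketch below also shows secure metadata access. EncryptionMethod and ValidateServerCertificate are standard DataDirect JDBC parameters, but the exact quoting of nested parameter=value pairs inside AdvancedJDBCSecurityOptions may vary by shell; all other values are placeholders.

```
# Hypothetical sketch: Oracle connection. orcl is a TNSNAMES entry that must
# already exist on the Data Integration Service machine.
infacmd.sh isp CreateConnection -dn MyDomain -un Administrator -pd Adminpass \
  -cn ORA_conn -cid ORA_conn -ct ORACLE \
  -o "ConnectionUserName=orauser ConnectionPassword=orapass \
      MetadataAccessConnectString=jdbc:informatica:oracle://orahost:1521;SID=orcl \
      DataAccessConnectString=orcl CodePage=ISO-8859-6 \
      AdvancedJDBCSecurityOptions=EncryptionMethod=SSL;ValidateServerCertificate=true"
```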
Option | Description |
---|---|
userName | Salesforce user name. |
password | Password for the Salesforce user name. The password is case sensitive. To access Salesforce outside the trusted network of your organization, you must append a security token to your password to log in to the API or a desktop client. To receive or reset your security token, log in to Salesforce and click Setup > My Personal Information > Reset My Security Token. |
SERVICE_URL | URL of the Salesforce service that you want to access. In a test or development environment, you might want to access the Salesforce Sandbox testing environment. For more information about the Salesforce Sandbox, see the Salesforce documentation. |
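A Salesforce sketch follows. Note the security token appended directly to the password, as the password row above describes; the SALESFORCE type identifier and the sandbox-style service URL are placeholders.

```
# Hypothetical sketch: Salesforce connection against the Sandbox environment.
# MySecurityToken is appended to the password with no separator.
infacmd.sh isp CreateConnection -dn MyDomain -un Administrator -pd Adminpass \
  -cn SFDC_conn -cid SFDC_conn -ct SALESFORCE \
  -o "userName=user@example.com password=MyPasswordMySecurityToken \
      SERVICE_URL=https://test.salesforce.com/services/Soap/u/31.0"
```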
Option | Description |
---|---|
UserName | Required. SAP system user name. |
Password | Required. Password for the user name. |
HostName | Required. Host name of the SAP application. |
ClientNumber | Required. SAP client number. |
SystemNumber | Required. SAP system number. |
Language | Optional. SAP Logon language. |
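An SAP sketch; the SAP type identifier and the client and system numbers are placeholder values.

```
# Hypothetical sketch: SAP connection. Client 800 and system 00 are placeholders.
infacmd.sh isp CreateConnection -dn MyDomain -un Administrator -pd Adminpass \
  -cn SAP_conn -cid SAP_conn -ct SAP \
  -o "UserName=sapuser Password=sappass HostName=saphost \
      ClientNumber=800 SystemNumber=00 Language=EN"
```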
Option | Description |
---|---|
CodePage | Required. Code page used to read from or write to the sequential file. Use the ISO code page name, such as ISO-8859-6. The code page name is not case sensitive. |
ArraySize | Optional. Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 5000. Default is 25. |
Compression | Optional. Compresses the data to decrease the amount of data that Informatica applications write over the network. True or false. Default is false. |
EncryptionLevel | Optional. Level of encryption. If you specify AES for the EncryptionType option, specify one of the following values to indicate the level of AES encryption: 1 for 128-bit encryption, 2 for 192-bit encryption, or 3 for 256-bit encryption. Default is 1. Note: If you specify None for the encryption type, the Data Integration Service ignores the encryption level value. |
EncryptionType | Optional. Controls whether to use encryption. Specify one of the following values: None or AES. Default is None. |
InterpretAsRows | Optional. If true, the pacing size value represents a number of rows. If false, the pacing size represents kilobytes. Default is false. |
Location | Location of the PowerExchange Listener node that can connect to the data source. The location is defined in the first parameter of the NODE statement in the PowerExchange dbmover.cfg configuration file. |
OffLoadProcessing | Optional. Moves bulk data processing from the data source machine to the Data Integration Service machine. Enter one of the following values: Auto, Yes, or No. Default is Auto. |
PacingSize | Optional. Slows the data transfer rate in order to reduce bottlenecks. The lower the value, the greater the session performance. Minimum value is 0. Enter 0 for optimal performance. Default is 0. |
WorkerThread | Optional. Number of threads that the Data Integration Service uses to process bulk data when offload processing is enabled. For optimal performance, this value should not exceed the number of available processors on the Data Integration Service machine. Valid values are 1 through 64. Default is 0, which disables multithreading. |
WriteMode | Enter one of the following write modes: CONFIRMWRITEON, CONFIRMWRITEOFF, or ASYNCHRONOUSWITHFAULTTOLERANCE. Default is CONFIRMWRITEON. |
EnableConnectionPool | Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. True or false. Default is false. |
ConnectionPoolSize | Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15. |
ConnectionPoolMaxIdleTime | Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. Default is 120. |
ConnectionPoolMinConnections | Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0. |
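A sequential (SEQ) sketch that turns on offload processing and multithreading; the SEQ type identifier, node name, and tuning values are placeholders, and the OffLoadProcessing value should be checked against your release.

```
# Hypothetical sketch: SEQ connection. NODE1 must match a NODE statement in the
# PowerExchange dbmover.cfg file; WorkerThread=4 assumes at least 4 processors.
infacmd.sh isp CreateConnection -dn MyDomain -un Administrator -pd Adminpass \
  -cn SEQ_conn -cid SEQ_conn -ct SEQ \
  -o "CodePage=ISO-8859-6 Location=NODE1 OffLoadProcessing=Yes \
      WorkerThread=4 ArraySize=500 Compression=true"
```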
Option | Description |
---|---|
UserName | Required. Teradata database user name with the appropriate write permissions to access the database. |
Password | Required. Password for the Teradata database user name. |
DriverName | Required. Name of the Teradata JDBC driver. |
ConnectionString | Required. JDBC URL to fetch metadata. |
TDPID | Required. Name or IP address of the Teradata database machine. |
databaseName | Required. Teradata database name. If you do not enter a database name, Teradata PT API uses the default login database name. |
DataCodePage | Optional. Code page associated with the database. When you run a mapping that loads to a Teradata target, the code page of the Teradata PT connection must be the same as the code page of the Teradata target. Default is UTF-8. |
Tenacity | Optional. Number of hours that Teradata PT API continues trying to log on when the maximum number of operations run on the Teradata database. Must be a positive, non-zero integer. Default is 4. |
MaxSessions | Optional. Maximum number of sessions that Teradata PT API establishes with the Teradata database. Must be a positive, non-zero integer. Default is 4. |
MinSessions | Optional. Minimum number of Teradata PT API sessions required for the Teradata PT API job to continue. Must be a positive integer between 1 and the Max Sessions value. Default is 1. |
Sleep | Optional. Number of minutes that Teradata PT API pauses before it retries to log on when the maximum number of operations run on the Teradata database. Must be a positive, non-zero integer. Default is 6. |
useMetadataJdbcUrl | Optional. Set this option to true to indicate that the Teradata Connector for Hadoop (TDCH) must use the JDBC URL that you specified in the connection string. Set this option to false to specify a different JDBC URL that TDCH must use when it runs the mapping. |
tdchJdbcUrl | Required. JDBC URL that TDCH must use when it runs the mapping. |
dataEncryption | Required. Enables full security encryption of SQL requests, responses, and data on Windows. To enable data encryption on UNIX, add the command UseDataEncryption=Yes to the DSN in the odbc.ini file. |
authenticationType | Required. Authenticates the user. Enter one of the following values for the authentication type: Native or LDAP. Default is Native. |
hadoopConnector | Required if you want to enable Sqoop connectivity for the data object that uses the JDBC connection. The Data Integration Service runs the mapping in the Hadoop run-time environment through Sqoop. You can configure Sqoop connectivity for relational data objects, customized data objects, and logical data objects that are based on a JDBC-compliant database. Set the value to SQOOP_146 to enable Sqoop connectivity. |
hadoopConnectorArgs | Optional. Enter the arguments that Sqoop must use to connect to the database. Enclose the Sqoop arguments within single quotes. Separate multiple arguments with a space. For example, hadoopConnectorArgs='--<Sqoop argument 1> --<Sqoop argument 2>' To read data from or write data to Teradata through Teradata Connector for Hadoop (TDCH) specialized connectors for Sqoop, define the TDCH connection factory class in the hadoopConnectorArgs argument. The connection factory class varies based on the TDCH Sqoop Connector that you want to use. For Cloudera Connector Powered by Teradata, use hadoopConnectorArgs='-Dsqoop.connection.factories=com.cloudera.connector.teradata.TeradataManagerFactory' For the Hortonworks Connector for Teradata, use hadoopConnectorArgs='-Dsqoop.connection.factories=org.apache.sqoop.teradata.TeradataManagerFactory' If you do not enter Sqoop arguments, the Data Integration Service constructs the Sqoop command based on the JDBC connection properties. |
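A Teradata sketch that enables Sqoop connectivity through TDCH; the TERADATAPT type identifier and the choice of the Cloudera factory class are placeholders, while com.teradata.jdbc.TeraDriver is the standard Teradata JDBC driver class.

```
# Hypothetical sketch: Teradata connection with Sqoop enabled through the
# Cloudera Connector Powered by Teradata. Host and database names are placeholders.
infacmd.sh isp CreateConnection -dn MyDomain -un Administrator -pd Adminpass \
  -cn TD_conn -cid TD_conn -ct TERADATAPT \
  -o "UserName=tduser Password=tdpass TDPID=tdhost databaseName=tddb \
      DriverName=com.teradata.jdbc.TeraDriver \
      ConnectionString=jdbc:teradata://tdhost/DATABASE=tddb \
      hadoopConnector=SQOOP_146 \
      hadoopConnectorArgs='-Dsqoop.connection.factories=com.cloudera.connector.teradata.TeradataManagerFactory'"
```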
Option | Description |
---|---|
ConsumerKey | The consumer key that you get when you create the application in Twitter. Twitter uses the key to identify the application. |
ConsumerSecret | The consumer secret that you get when you create the Twitter application. Twitter uses the secret to establish ownership of the consumer key. |
AccessToken | Access token that the OAuth Utility returns. Twitter uses this token instead of the user credentials to access the protected resources. |
AccessSecret | Access secret that the OAuth Utility returns. The secret establishes ownership of a token. |
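A Twitter sketch; the TWITTER type identifier is a placeholder, and the four credential values come from the Twitter application and the OAuth Utility.

```
# Hypothetical sketch: Twitter connection. Replace the xxxx values with the
# consumer key and secret from the Twitter application and the access token
# and secret returned by the OAuth Utility.
infacmd.sh isp CreateConnection -dn MyDomain -un Administrator -pd Adminpass \
  -cn TW_conn -cid TW_conn -ct TWITTER \
  -o "ConsumerKey=xxxx ConsumerSecret=xxxx AccessToken=xxxx AccessSecret=xxxx"
```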
Option | Description |
---|---|
HoseType | Streaming API method that you want to use. You can specify the filter method or the sample method. |
UserName | Twitter user screen name. |
Password | Twitter password. |
Option | Description |
---|---|
CodePage | Required. Code page used to read from or write to the VSAM file. Use the ISO code page name, such as ISO-8859-6. The code page name is not case sensitive. |
ArraySize | Optional. Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 5000. Default is 25. |
Compression | Optional. Compresses the data to decrease the amount of data Informatica applications write over the network. True or false. Default is false. |
EncryptionLevel | Optional. Level of encryption. If you specify AES for the EncryptionType option, specify one of the following values to indicate the level of AES encryption: 1 for 128-bit encryption, 2 for 192-bit encryption, or 3 for 256-bit encryption. Default is 1. Note: If you specify None for the encryption type, the Data Integration Service ignores the encryption level value. |
EncryptionType | Optional. Controls whether to use encryption. Specify one of the following values: None or AES. Default is None. |
InterpretAsRows | Optional. If true, the pacing size value represents a number of rows. If false, the pacing size represents kilobytes. Default is false. |
Location | Location of the PowerExchange listener node that can connect to VSAM. The node is defined in the PowerExchange dbmover.cfg configuration file. |
OffLoadProcessing | Optional. Moves bulk data processing from the VSAM source to the Data Integration Service machine. Enter one of the following values: Auto, Yes, or No. Default is Auto. |
PacingSize | Optional. Slows the data transfer rate in order to reduce bottlenecks. The lower the value, the greater the session performance. Minimum value is 0. Enter 0 for optimal performance. Default is 0. |
WorkerThread | Optional. Number of threads that the Data Integration Service uses to process bulk data when offload processing is enabled. For optimal performance, this value should not exceed the number of available processors on the Data Integration Service machine. Valid values are 1 through 64. Default is 0, which disables multithreading. |
WriteMode | Enter one of the following write modes: CONFIRMWRITEON, CONFIRMWRITEOFF, or ASYNCHRONOUSWITHFAULTTOLERANCE. Default is CONFIRMWRITEON. |
EnableConnectionPool | Optional. Enables connection pooling. When you enable connection pooling, the connection pool retains idle connection instances in memory. When you disable connection pooling, the Data Integration Service stops all pooling activity. True or false. Default is false. |
ConnectionPoolSize | Optional. Maximum number of idle connection instances that the Data Integration Service maintains for a database connection. Set this value to be more than the minimum number of idle connection instances. Default is 15. |
ConnectionPoolMaxIdleTime | Optional. Number of seconds that a connection exceeding the minimum number of connection instances can remain idle before the connection pool drops it. The connection pool ignores the idle time when it does not exceed the minimum number of idle connection instances. Default is 120. |
ConnectionPoolMinConnections | Optional. Minimum number of idle connection instances that the pool maintains for a database connection. Set this value to be equal to or less than the idle connection pool size. Default is 0. |
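A VSAM sketch that enables 256-bit AES encryption and confirmed writes; the VSAM type identifier, node name, and code page are placeholders.

```
# Hypothetical sketch: VSAM connection. EncryptionLevel=3 assumes the 256-bit
# AES level described above; NODE1 must exist in dbmover.cfg.
infacmd.sh isp CreateConnection -dn MyDomain -un Administrator -pd Adminpass \
  -cn VSAM_conn -cid VSAM_conn -ct VSAM \
  -o "CodePage=ISO-8859-6 Location=NODE1 EncryptionType=AES EncryptionLevel=3 \
      WriteMode=CONFIRMWRITEON"
```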
Option | Description |
---|---|
ManagementConsoleURL | URL of the Local Management Console where the robot is uploaded. The URL must start with http or https. For example, http://localhost:50080. |
RQLServicePort | The port number where the socket service listens for the RQL service. Enter a value from 1 through 65535. Default is 50000. |
Username | User name required to access the Local Management Console. |
Password | Password to access the Local Management Console. |
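Finally, a sketch for a Local Management Console connection; the KAPOW type identifier and all values are placeholders.

```
# Hypothetical sketch: Local Management Console connection for robot execution.
infacmd.sh isp CreateConnection -dn MyDomain -un Administrator -pd Adminpass \
  -cn LMC_conn -cid LMC_conn -ct KAPOW \
  -o "ManagementConsoleURL=http://localhost:50080 RQLServicePort=50000 \
      Username=mcuser Password=mcpass"
```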