Property | Description |
---|---|
Cluster Configuration | The name of the cluster configuration associated with the Hadoop environment. Appears in General Properties. |
Write Reject Files to Hadoop | Select the property to move the reject files to the HDFS location listed in the property Reject File Directory when you run mappings. Appears in Reject Directory Properties. |
Reject File Directory | The directory for Hadoop mapping files on HDFS when you run mappings. Appears in Reject Directory Properties. |
Blaze Job Monitor Address | The host name and port number for the Blaze Job Monitor. Appears in Blaze Configuration. |
YARN Queue Name | The YARN scheduler queue name used by the Blaze engine that specifies available resources on a cluster. Appears in Blaze Configuration. |
Current Name | Previous Name | Description |
---|---|---|
ImpersonationUserName | HiveUserName | Hadoop impersonation user. The user name that the Data Integration Service impersonates to run mappings in the Hadoop environment. |
Hive Staging Database Name | Database Name | Namespace for Hive staging tables. Appears in Common Properties. Previously appeared in Hive Properties. |
HiveWarehouseDirectory | HiveWarehouseDirectoryOnHDFS | The absolute HDFS file path of the default database for the warehouse that is local to the cluster. |
Blaze Staging Directory | Temporary Working Directory on HDFS, CadiWorkingDirectory | The HDFS file path of the directory that the Blaze engine uses to store temporary files. Appears in Blaze Configuration. |
Blaze User Name | Blaze Service User Name, CadiUserName | The owner of the Blaze service and Blaze service logs. Appears in Blaze Configuration. |
YARN Queue Name | Yarn Queue Name, CadiAppYarnQueueName | The YARN scheduler queue name used by the Blaze engine that specifies available resources on a cluster. Appears in Blaze Configuration. |
BlazeMaxPort | CadiMaxPort | The maximum value for the port number range for the Blaze engine. |
BlazeMinPort | CadiMinPort | The minimum value for the port number range for the Blaze engine. |
BlazeExecutionParameterList | CadiExecutionParameterList | An optional list of configuration parameters to apply to the Blaze engine. |
SparkYarnQueueName | YarnQueueName | The YARN scheduler queue name used by the Spark engine that specifies available resources on a cluster. |
Spark Staging Directory | Spark HDFS Staging Directory | The HDFS file path of the directory that the Spark engine uses to store temporary files for running jobs. |
Property | Description |
---|---|
Resource Manager Address | The service within Hadoop that submits requests for resources or spawns YARN applications. Imported into the cluster configuration as the property yarn.resourcemanager.address. Previously appeared in Hadoop Cluster Properties. |
Default File System URI | The URI to access the default Hadoop Distributed File System. Imported into the cluster configuration as the property fs.defaultFS or fs.default.name. Previously appeared in Hadoop Cluster Properties. |
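For orientation, the two cluster properties above are imported from standard Hadoop configuration entries. A minimal sketch of how they typically appear in `yarn-site.xml` and `core-site.xml` (host names and ports are illustrative placeholders, not values from this document):

```xml
<!-- yarn-site.xml: resource manager endpoint (host/port are placeholders) -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>rm-host.example.com:8032</value>
</property>

<!-- core-site.xml: default file system URI (host/port are placeholders) -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://namenode.example.com:8020</value>
</property>
```

Older clusters may expose the file system URI under the deprecated key `fs.default.name` instead of `fs.defaultFS`, which is why the import recognizes either name.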
Property | Description |
---|---|
Type | The connection type. Previously appeared in General Properties. |
Metastore Execution Mode* | Controls whether to connect to a remote metastore or a local metastore. Previously appeared in Hive Configuration. |
Metastore Database URI* | The JDBC connection URI used to access the data store in a local metastore setup. Previously appeared in Hive Configuration. |
Metastore Database Driver* | Driver class name for the JDBC data store. Previously appeared in Hive Configuration. |
Metastore Database User Name* | The metastore database user name. Previously appeared in Hive Configuration. |
Metastore Database Password* | The password for the metastore user name. Previously appeared in Hive Configuration. |
Remote Metastore URI* | The metastore URI used to access metadata in a remote metastore setup. This property is imported into the cluster configuration as the property hive.metastore.uris. Previously appeared in Hive Configuration. |
Job Monitoring URL | The URL for the MapReduce JobHistory server. Previously appeared in Hive Configuration. |
* These properties are deprecated in 10.2. When you upgrade to 10.2, the property values that you set in a previous release are saved in the repository, but they do not appear in the connection properties. |
Property | Description |
---|---|
ZooKeeper Host(s) | Name of the machine that hosts the ZooKeeper server. |
ZooKeeper Port | Port number of the machine that hosts the ZooKeeper server. |
Enable Kerberos Connection | Enables the Informatica domain to communicate with the HBase master server or region server that uses Kerberos authentication. |
HBase Master Principal | Service Principal Name (SPN) of the HBase master server. |
HBase Region Server Principal | Service Principal Name (SPN) of the HBase region server. |
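The ZooKeeper and Kerberos values in the table above line up with standard HBase client configuration. A hedged sketch in `hbase-site.xml` form (host names, port, and the EXAMPLE.COM realm are illustrative placeholders; the property-name mapping is an assumption based on standard HBase configuration, not stated in this document):

```xml
<!-- ZooKeeper host(s) and client port (values are placeholders) -->
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1.example.com,zk2.example.com</value>
</property>
<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2181</value>
</property>

<!-- SPNs for a Kerberos-secured HBase cluster (placeholder realm) -->
<property>
  <name>hbase.master.kerberos.principal</name>
  <value>hbase/_HOST@EXAMPLE.COM</value>
</property>
<property>
  <name>hbase.regionserver.kerberos.principal</name>
  <value>hbase/_HOST@EXAMPLE.COM</value>
</property>
```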
Property | Description |
---|---|
Default FS URI | The URI to access the default Hadoop Distributed File System. |
JobTracker/Yarn Resource Manager URI | The service within Hadoop that submits the MapReduce tasks to specific nodes in the cluster. |
Hive Warehouse Directory on HDFS | The absolute HDFS file path of the default database for the warehouse that is local to the cluster. |
Metastore Execution Mode | Controls whether to connect to a remote metastore or a local metastore. |
Metastore Database URI | The JDBC connection URI used to access the data store in a local metastore setup. |
Metastore Database Driver | Driver class name for the JDBC data store. |
Metastore Database User Name | The metastore database user name. |
Metastore Database Password | The password for the metastore user name. |
Remote Metastore URI | The metastore URI used to access metadata in a remote metastore setup. This property is imported into the cluster configuration as the property hive.metastore.uris. |
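The metastore properties in the table map onto standard Hive configuration. A minimal `hive-site.xml` sketch covering both execution modes (URIs, driver class, and credentials are illustrative placeholders; the mapping of the database URI, driver, user name, and password to the `javax.jdo.option.*` entries is an assumption based on standard Hive metastore configuration, not stated in this document):

```xml
<!-- Remote metastore setup: imported as hive.metastore.uris (placeholder host/port) -->
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://metastore-host.example.com:9083</value>
</property>

<!-- Local metastore setup: JDBC data store connection (all values are placeholders) -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://db-host.example.com:3306/hive_metastore</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>********</value>
</property>
```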
Name | Value |
---|---|
Reject File Directory | The directory for Hadoop mapping files on HDFS when you run mappings in the Hadoop environment. The Blaze engine can write reject files to the Hadoop environment for flat file, HDFS, and Hive targets. The Spark and Hive engines can write reject files to the Hadoop environment for flat file and HDFS targets. Choose one of the following options: |