
JDBC Connection Properties

You can use a JDBC connection to access tables in a database. You can create and manage a JDBC connection in the Administrator tool, the Developer tool, or the Analyst tool.
Note: The order of the connection properties might vary depending on the tool where you view them.
The following list describes the JDBC connection properties:
Database Type
The database type.
Name
Name of the connection. The name is not case sensitive and must be unique within the domain. The name cannot exceed 128 characters, contain spaces, or contain the following special characters:
~ ` ! $ % ^ & * ( ) - + = { [ } ] | \ : ; " ' < , > . ? /
ID
String that the Data Integration Service uses to identify the connection. The ID is not case sensitive. It must be 255 characters or less and must be unique in the domain. You cannot change this property after you create the connection. Default value is the connection name.
Description
The description of the connection. The description cannot exceed 765 characters.
User Name
The database user name.
If you configure Sqoop, Sqoop uses the user name that you configure in this field. If you configure the --username argument in a JDBC connection or mapping, Sqoop ignores the argument.
Password
The password for the database user name.
If you configure Sqoop, Sqoop uses the password that you configure in this field. If you configure the --password argument in a JDBC connection or mapping, Sqoop ignores the argument.
JDBC Driver Class Name
Name of the JDBC driver class.
The following list provides the driver class name that you can enter for the applicable database type:
  • DataDirect JDBC driver class name for Oracle: com.informatica.jdbc.oracle.OracleDriver
  • DataDirect JDBC driver class name for IBM DB2: com.informatica.jdbc.db2.DB2Driver
  • DataDirect JDBC driver class name for Microsoft SQL Server: com.informatica.jdbc.sqlserver.SQLServerDriver
  • DataDirect JDBC driver class name for Sybase ASE: com.informatica.jdbc.sybase.SybaseDriver
  • DataDirect JDBC driver class name for Informix: com.informatica.jdbc.informix.InformixDriver
  • DataDirect JDBC driver class name for MySQL: com.informatica.jdbc.mysql.MySQLDriver
For more information about which driver class to use with specific databases, see the vendor documentation.
Connection String
Connection string to connect to the database. Use the following connection string:
jdbc:<subprotocol>:<subname>
The following list provides sample connection strings that you can enter for the applicable database type:
  • Connection string for DataDirect Oracle JDBC driver: jdbc:informatica:oracle://<host>:<port>;SID=<value>
  • Connection string for Oracle JDBC driver: jdbc:oracle:thin:@<host>:<port>:<SID>
  • Connection string for DataDirect IBM DB2 JDBC driver: jdbc:informatica:db2://<host>:<port>;DatabaseName=<value>
  • Connection string for IBM DB2 JDBC driver: jdbc:db2://<host>:<port>/<database_name>
  • Connection string for DataDirect Microsoft SQL Server JDBC driver: jdbc:informatica:sqlserver://<host>;DatabaseName=<value>
  • Connection string for Microsoft SQL Server JDBC driver: jdbc:sqlserver://<host>;DatabaseName=<value>
  • Connection string for Netezza JDBC driver: jdbc:netezza://<host>:<port>/<database_name>
  • Connection string for Pivotal Greenplum driver: jdbc:pivotal:greenplum://<host>:<port>;DatabaseName=<value>
  • Connection string for Postgres Greenplum driver: jdbc:postgresql://<host>:<port>/<database_name>
  • Connection string for Teradata JDBC driver: jdbc:teradata://<host>/database_name=<value>,tmode=<value>,charset=<value>
For more information about the connection string to use with specific drivers, see the vendor documentation.
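For illustration, a DataDirect Oracle connection string of the form shown above can be assembled from its parts. This is only a sketch; the host, port, and SID values below are placeholders, not defaults:

```shell
# Assemble a DataDirect Oracle connection string from its parts.
# dbhost, 1521, and ORCL are placeholder values for this sketch.
HOST=dbhost
PORT=1521
SID=ORCL
CONNECTION_STRING="jdbc:informatica:oracle://${HOST}:${PORT};SID=${SID}"
echo "$CONNECTION_STRING"
```

The same pattern applies to the other DataDirect strings: host and port in the authority portion, and database-specific options such as DatabaseName after the semicolon.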
Environment SQL
Optional. Enter SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database.
Note: If you enable Sqoop, Sqoop ignores this property.
Transaction SQL
Optional. Enter SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the transaction environment SQL at the beginning of each transaction.
Note: If you enable Sqoop, Sqoop ignores this property.
SQL Identifier Character
Type of character that the database uses to enclose delimited identifiers in SQL queries. The available characters depend on the database type.
Select (None) if the database uses regular identifiers. When the Data Integration Service generates SQL queries, the service does not place delimited characters around any identifiers.
Select a character if the database uses delimited identifiers. When the Data Integration Service generates SQL queries, the service encloses delimited identifiers within this character.
Note: If you enable Sqoop, Sqoop ignores this property.
Support Mixed-case Identifiers
Enable if the database uses case-sensitive identifiers. When enabled, the Data Integration Service encloses all identifiers within the character selected for the SQL Identifier Character property.
When the SQL Identifier Character property is set to (None), the Support Mixed-case Identifiers property is disabled.
Note: If you enable Sqoop, Sqoop honors this property when you generate and execute a DDL script to create or replace a target at run time. In all other scenarios, Sqoop ignores this property.
Use Sqoop Connector
Enables Sqoop connectivity for the data object that uses the JDBC connection. The Data Integration Service runs the mapping in the Hadoop run-time environment through Sqoop.
You can configure Sqoop connectivity for relational data objects, customized data objects, and logical data objects that are based on a JDBC-compliant database.
Select Sqoop v1.x to enable Sqoop connectivity.
Default is None.
Sqoop Arguments
Enter the arguments that Sqoop must use to connect to the database. Separate multiple arguments with a space.
To run the mapping on the Blaze engine with the Teradata Connector for Hadoop (TDCH) specialized connectors for Sqoop, you must define the TDCH connection factory class in the Sqoop arguments. The connection factory class varies based on the TDCH Sqoop Connector that you want to use.
  • To use Cloudera Connector Powered by Teradata, configure the following Sqoop argument: -Dsqoop.connection.factories=com.cloudera.connector.teradata.TeradataManagerFactory
  • To use Hortonworks Connector for Teradata (powered by the Teradata Connector for Hadoop), configure the following Sqoop argument: -Dsqoop.connection.factories=org.apache.sqoop.teradata.TeradataManagerFactory
To run the mapping on the Spark engine, you do not need to define the TDCH connection factory class in the Sqoop arguments. The Data Integration Service invokes the Cloudera Connector Powered by Teradata and Hortonworks Connector for Teradata (powered by the Teradata Connector for Hadoop) by default.
Note: To run the mapping with a generic JDBC connector instead of the specialized Cloudera or Hortonworks connector, you must define the --driver and --connection-manager Sqoop arguments in the JDBC connection. If you define the --driver and --connection-manager arguments in the Read or Write transformation of the mapping, Sqoop ignores the arguments.
If you do not enter Sqoop arguments, the Data Integration Service constructs the Sqoop command based on the JDBC connection properties.
On the Hive engine, to run a column profile on a relational data object that uses Sqoop, set the Sqoop argument -m to 1. Use the following syntax:
-m 1
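As a sketch of how these pieces fit together, an argument string entered in the Sqoop Arguments field might look like the following. The driver class, host, and database name are illustrative placeholders, not values from this guide:

```shell
# Space-separated Sqoop arguments as you might enter them in the
# Sqoop Arguments field. The driver class, host (dbhost), and
# database name (sales) are placeholders for this sketch.
SQOOP_ARGS="--driver com.mysql.jdbc.Driver --connect jdbc:mysql://dbhost:3306/sales -m 1"
echo "$SQOOP_ARGS"
```

Note that the arguments are separated by single spaces, as the property description requires.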

Sqoop Connection-Level Arguments

In the JDBC connection, you can define the arguments that Sqoop must use to connect to the database. The Data Integration Service merges the arguments that you specify with the default command that it constructs based on the JDBC connection properties. The arguments that you specify take precedence over the JDBC connection properties.
If you want to use the same driver to import metadata and run the mapping, and do not want to specify any additional Sqoop arguments, select Sqoop v1.x from the Use Sqoop Version list and leave the Sqoop Arguments field empty in the JDBC connection. The Data Integration Service constructs the Sqoop command based on the JDBC connection properties that you specify.
However, if you want to use a different driver for run-time tasks or specify additional run-time Sqoop arguments, select Sqoop v1.x from the Use Sqoop Version list and specify the arguments in the Sqoop Arguments field.
You can configure the following Sqoop arguments in the JDBC connection:
driver
Defines the JDBC driver class that Sqoop must use to connect to the database.
Use the following syntax:
--driver <JDBC driver class>
For example, to connect to a MySQL database, you might use the following syntax:
--driver com.mysql.jdbc.Driver
connect
Defines the JDBC connection string that Sqoop must use to connect to the database. The JDBC connection string must be based on the driver that you define in the driver argument.
Use the following syntax:
--connect <JDBC connection string>
For example, to connect to a MySQL database, you might use the following syntax:
--connect jdbc:mysql://<host>:<port>/<database_name>
connection-manager
Defines the connection manager class name that Sqoop must use to connect to the database.
Use the following syntax:
--connection-manager <connection manager class name>
For example, use the following syntax to use the generic JDBC manager class name:
--connection-manager org.apache.sqoop.manager.GenericJdbcManager
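When you use the generic JDBC connector, the driver, connect, and connection-manager arguments are typically defined together in the connection. A sketch with placeholder MySQL values (dbhost and sales are hypothetical):

```shell
# Combined arguments for the generic JDBC connector.
# The driver class, host (dbhost), and database (sales) are
# placeholder MySQL values for this sketch.
GENERIC_ARGS="--driver com.mysql.jdbc.Driver \
--connect jdbc:mysql://dbhost:3306/sales \
--connection-manager org.apache.sqoop.manager.GenericJdbcManager"
echo "$GENERIC_ARGS"
```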
direct
When you read data from or write data to Oracle, you can configure the direct argument to enable Sqoop to use OraOop. OraOop is a specialized Sqoop plug-in for Oracle that uses native protocols to connect to the Oracle database. When you configure OraOop, performance improves.
You can configure OraOop when you run Sqoop mappings on the Spark and Hive engines.
Use the following syntax:
--direct
When you use OraOop, you must use the following syntax to specify multiple arguments:
-D<argument=value> -D<argument=value>
Note: If you specify multiple arguments and include a space character between -D and the argument name-value pair, Sqoop considers only the first argument and ignores the remaining arguments.
To direct a MapReduce job to a specific YARN queue, configure the following argument:
-Dmapred.job.queue.name=<YARN queue name>
If you do not direct the job to a specific queue, the Spark engine uses the default queue.
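The no-space rule for -D arguments can be seen side by side. The queue name and the second property value below are placeholders for this sketch:

```shell
# Correct: no space between -D and each name=value pair.
# "analytics" and "100" are placeholder values.
CORRECT="-Dmapred.job.queue.name=analytics -Dsqoop.export.records.per.statement=100"
# Incorrect: "-D mapred.job.queue.name=analytics ..." (space after -D)
# causes Sqoop to consider only the first argument and ignore the rest.
echo "$CORRECT"
```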
-Dsqoop.connection.factories
To run the mapping on the Blaze engine with the Teradata Connector for Hadoop (TDCH) specialized connectors for Sqoop, you must configure the -Dsqoop.connection.factories argument. Use the argument to define the TDCH connection factory class that Sqoop must use. The connection factory class varies based on the TDCH Sqoop Connector that you want to use.
Note: To run the mapping on the Spark engine, you do not need to configure the -Dsqoop.connection.factories argument. The Data Integration Service invokes Cloudera Connector Powered by Teradata and Hortonworks Connector for Teradata (powered by the Teradata Connector for Hadoop) by default.
--infaoptimize
Use this argument to disable the performance optimization of Sqoop pass-through mappings on the Spark engine.
When you run a Sqoop pass-through mapping on the Spark engine, the Data Integration Service optimizes mapping performance for certain source and target combinations.
If you want to disable the performance optimization, set the --infaoptimize argument to false. For example, if you see data type issues after you run an optimized Sqoop mapping, you can disable the performance optimization.
Use the following syntax:
--infaoptimize false
For a complete list of the Sqoop arguments that you can configure, see the Sqoop documentation.