This release includes enhancements to the following connectors.
Common
Effective in this release, the Connections page features a refreshed design, a new table layout, and improved search and sorting. If a connector is in maintenance mode, a Deprecated label appears on the connector tile.
Amazon Athena Connector
You can use workgroups in Amazon Athena to isolate and manage queries for different users, teams, or applications. You can configure each workgroup with specific settings, such as default S3 output locations and encryption options, to control how queries are processed and results are stored.
You can specify a workgroup in the Amazon Athena JDBC URL.
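For reference, a workgroup is typically passed as a property in the JDBC URL. The following line is an illustrative example only; the exact property name and URL format depend on the Athena JDBC driver version (for example, Workgroup in the Simba-based driver), and the region, output bucket, and workgroup name are placeholders:
jdbc:awsathena://AwsRegion=us-west-2;S3OutputLocation=s3://my-athena-results/output/;Workgroup=analytics-team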
Amazon Redshift V2 Connector
This release includes the following enhancements for Amazon Redshift V2 Connector:
•Amazon Redshift V2 Connector uses the AWS SDK version 2.30.15.
•You can read data from tables enabled for data sharing in Amazon Redshift. Data sharing allows you to share data across Amazon Redshift clusters without the need to copy or move the data or duplicate the storage.
Amazon S3 V2 Connector
This release includes the following enhancements for Amazon S3 V2 Connector:
•Amazon S3 V2 Connector uses the AWS SDK version 2.30.15.
•You can tag target objects to categorize the storage for the objects when you use the Avro, Parquet, JSON, or ORC file format.
•You can merge multiple partition files into a single file when you write data to multiple Amazon S3 flat file targets in a mapping.
•You can configure fixed partitioning to optimize the mapping performance at run time when you read data from a manifest file or a flat file from a directory.
Databricks Connector
This release includes the following enhancements for Databricks Connector:
•SQL ELT optimization enhancements
- You can push median and percentile aggregate functions to Databricks.
- You can use a filter argument in an aggregate function.
•You can read from and write to Unity Catalog-managed Apache Iceberg tables in a mapping.
This enhancement doesn't apply to a mapping in advanced mode.
•You can configure the Secure Agent to extend the timeout duration for the temporary credentials issued by AWS Security Token Service when you stage data in Amazon S3.
•If the connection to the Databricks SQL Warehouse cluster fails, you can configure the Secure Agent to retry the connection up to three times, with a 4.5-minute wait between attempts.
•You can edit the precision or scale of a field in the source or lookup object in a Databricks mapping in SQL ELT mode that reads from Amazon S3, Google Cloud Storage, or Microsoft Azure Data Lake Storage Gen2.
DB2 Loader Connector
This release includes the following enhancements for DB2 Loader Connector:
•You can truncate the target table before writing data in a mapping.
•When you choose to use the default file name for the target files in a mapping, a deterministic unique suffix is appended to the file names.
Google BigQuery V2 Connector
This release includes the following enhancements for Google BigQuery V2 Connector:
•You can read from and write to Apache Iceberg tables in a mapping.
•When a mapping enabled with SQL ELT optimization contains multiple pipelines, you can define the flow run order to load the targets from the pipelines in a specific order.
Google Cloud Storage V2 Connector
You can use the delta file format to read and write data in a mapping.
Hadoop Files V2 Connector
When you specify the file name in the target object, you can use the specified file name as the target file name without appending any additional characters to it.
This enhancement doesn't apply to a mapping in advanced mode.
IBM MQ Connector
You can store the recovery state in a queue in the Queue Manager when you run a mapping task enabled with a recovery strategy that writes to an IBM MQ target.
JDBC V2 Connector
You can configure a connection and mapping in one environment and then migrate and run the mapping in another environment using a different connection.
Kafka Connector
You can write headers along with the data to a Kafka topic in a mapping.
This enhancement doesn't apply to mappings in advanced mode.
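As background on what record headers are (this is not connector configuration), the following minimal sketch uses the standard Apache Kafka Java client to attach headers to a record before it is sent to a topic. The broker address, topic name, and header keys are placeholder values:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.nio.charset.StandardCharsets;
import java.util.Properties;

public class HeaderProducerSketch {
    public static void main(String[] args) {
        // Minimal producer configuration; the broker address is a placeholder.
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("orders", "order-1001", "{\"amount\": 42.5}");
            // Headers travel with the record but are separate from the key and value.
            record.headers().add("source-system", "billing".getBytes(StandardCharsets.UTF_8));
            record.headers().add("schema-version", "3".getBytes(StandardCharsets.UTF_8));
            producer.send(record);
        }
    }
}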
Microsoft Azure Data Lake Storage Gen2 Connector
You can use the Iceberg format when you run a mapping to read from and write to Apache Iceberg files.
Microsoft Dynamics 365 for Operations Connector
You can now record rejected rows and the corresponding error messages in a reject file when you write data to a Microsoft Dynamics 365 for Operations target.
Microsoft Fabric Data Warehouse Connector
You can sort records based on the conditions you specify when you read data from Microsoft Fabric Data Warehouse.
Microsoft Fabric OneLake Connector
This release includes the following enhancements for Microsoft Fabric OneLake Connector:
•You can use the Iceberg format when you run a mapping to read from and write to Apache Iceberg files.
•When you use discover structure and document file format in a mapping in advanced mode, you can incrementally load source files from a directory to read and process only the files that have changed since the last time the mapping ran.
Microsoft SQL Server Connector
This release includes the following enhancements for Microsoft SQL Server Connector:
•When you use Kerberos authentication mode to connect to Microsoft SQL Server on the Linux platform, multiple users can run mappings in parallel without restarting the Data Integration Server.
•You can use Unicode characters as part of the field names in the Source and Target transformations in a mapping.
ODBC Connector
This release includes the following enhancements for ODBC Connector:
•You can configure SCRAM-SHA-256 authentication when you connect to a Greenplum database using an ODBC connection.
•When you use Kerberos authentication mode to connect to a DB2 database on the Linux platform, multiple users can run mappings in parallel without restarting the Data Integration Server.
Open Table Connector
In a mapping, you can read from and write to Apache Iceberg tables that are managed by the AWS Glue catalog and stored in Amazon S3.
Oracle Connector
This release includes the following enhancements for Oracle Connector:
•You can connect to Oracle Database version 23ai using an Oracle connection.
•You can run a mapping using the upgraded ODBC EBF driver version 08.02.3863.
PostgreSQL Connector
This release includes the following enhancements for PostgreSQL Connector:
•You can connect to PostgreSQL database version 16.x using a PostgreSQL connection.
•You can run a mapping using one of the following upgraded JDBC driver versions:
- JDBC Base driver: 6.0.0.001023
- JDBC Latest driver: 6.0.0.001796
Salesforce Connector
You can override the object name at run time from a parameter file for single objects.
This enhancement doesn't apply to mappings in advanced mode.
Salesforce Data Cloud Connector
You can configure additional connection properties, such as connection retry and timeout, when you create a connection.
SAP BW Connector
This release includes the following enhancements for SAP BW Connector:
•You can read data from ADSO or Composite Provider objects in SAP BW/4HANA applications.
•You can connect to SAP BW/4HANA applications using the BW/4HANA transport request B42K900164 that is shipped with the connector package.
SAP HANA Connector
You can perform the following actions on the lookup data when you look up data from SAP HANA:
•Override the default SQL query used to look up data.
•Use a custom query to reduce the number of columns to query.
•Filter data before the data enters the data flow.
This enhancement doesn't apply to mappings in advanced mode.
SAP OData V4 Connector
You can read data from hierarchical and flat entities, and write data to flat entities in SAP S/4HANA applications.
Snowflake Data Cloud Connector
This release includes the following enhancements for Snowflake Data Cloud Connector:
•Snowflake Data Cloud Connector uses JDBC driver version 3.25.0 by default to connect to Snowflake and run your Snowflake Data Cloud jobs.
You can also roll back to the previous JDBC driver version 3.13.34. The driver is available in the rollback folder in the Secure Agent installation directory.
•You can use a programmatic access token generated in Snowflake when you connect to Snowflake using standard authentication. See the sketch after this list.
•You can limit the number of rejected rows written to the reject file for the target object in a mapping. This enhancement doesn't apply to mappings in advanced mode.
•You can write data of the TIMESTAMPNTZ data type with microsecond or nanosecond precision to a Snowflake target in a mapping enabled with the staging property for the write operation. This enhancement doesn't apply to mappings in advanced mode.
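The following minimal sketch shows one way to verify a programmatic access token outside the connector, using the Snowflake JDBC driver. Snowflake generally accepts a programmatic access token in place of the password for basic authentication, although the exact behavior depends on your account configuration. The account URL, user name, and environment variable name are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class SnowflakePatSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("user", "INTEGRATION_USER");                  // placeholder user name
        props.put("password", System.getenv("SNOWFLAKE_PAT"));  // token supplied in place of a password
        try (Connection conn = DriverManager.getConnection(
                "jdbc:snowflake://myorg-myaccount.snowflakecomputing.com", props)) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}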
Teradata Connector
This release includes the following enhancements for Teradata Connector:
•You can select source objects from multiple databases when you run a Teradata mapping.
•When you configure KRB5 authentication and connect to Teradata from Linux, multiple users can run mappings in parallel without restarting the Data Integration Server.
Veeva Vault Connector
This release includes the following enhancements for Veeva Vault Connector:
•You can use Authorization Code authentication to connect to Veeva Vault in a mapping.
•You can read data of the Attachment data type from a Veeva Vault source object in a mapping.
•You can write to Veeva Vault and perform insert, update, upsert, or delete operations on the Veeva Vault target object.
Zendesk V2 Connector
You can now use API token authentication or OAuth 2.0 client credentials authentication to connect to Zendesk.