This release includes enhancements to the following connectors.
Amazon Redshift V2 Connector
This release includes the following enhancements for Amazon Redshift V2 Connector:
•Mapping in SQL ELT mode
You can push certain data type formatting, date and time, hash, and string functions to Amazon Redshift.
For a list of functions that you can push down to Amazon Redshift, see the Amazon Redshift V2 Connector documentation.
Amazon S3 V2 Connector
You can read from and write to Delta format files in Amazon S3.
Databricks Connector
This release includes the following enhancements for Databricks Connector:
•Mapping in SQL ELT mode
You can push certain window functions in an Expression transformation to Databricks.
For a list of functions that you can push down to Databricks, see the Databricks Connector documentation.
•You can use OAuth Machine-to-Machine authentication for service principals to read from and write to Databricks objects. OAuth Machine-to-Machine authentication uses OAuth 2.0 protocol to authenticate access to Databricks objects.
•You can read data from external tables of Parquet and CSV formats in Databricks.
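OAuth Machine-to-Machine authentication for service principals typically follows the OAuth 2.0 client credentials grant. The sketch below illustrates how such a token request is formed; the endpoint path, scope, and credential names are assumptions for illustration, not the connector's configuration.

```python
from urllib.parse import urlencode

def build_token_request(host: str, client_id: str, client_secret: str) -> tuple[str, str]:
    """Build an OAuth 2.0 client-credentials token request (URL and form body)."""
    token_url = f"{host}/oidc/v1/token"  # assumed endpoint path
    body = urlencode({
        "grant_type": "client_credentials",  # the standard OAuth 2.0 M2M grant
        "client_id": client_id,              # service principal ID (placeholder)
        "client_secret": client_secret,      # service principal secret (placeholder)
        "scope": "all-apis",                 # assumed scope value
    })
    return token_url, body

# Hypothetical workspace host and credentials, for illustration only.
url, body = build_token_request(
    "https://example-workspace.cloud.databricks.com", "my-sp-id", "my-sp-secret")
print(url)
```

The connector handles this exchange internally; the point is only that the service principal's client ID and secret replace a user name and password in the grant.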
Google BigQuery V2 Connector
You can push certain aggregate functions, conditional expressions, security and bit functions, and window functions to Google BigQuery in a SQL ELT mode mapping.
For a list of functions that you can push to Google BigQuery, see the Google BigQuery V2 Connector documentation.
Hive Connector
You can use a Hive connection in mappings to access Hive sources on Hadoop clusters with the Amazon EMR 6.4 distribution version.
IBM MQ Connector
You can use custom cipher suites for SSL-enabled IBM MQ connections to securely connect to IBM MQ based on the level of security that you need.
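Restricting a TLS connection to specific cipher suites is a general SSL concept. As a minimal illustration using Python's standard `ssl` module (the OpenSSL-style cipher name below is an assumption for the example; IBM MQ uses its own CipherSpec names, which the connection configuration expects):

```python
import ssl

# Create a client TLS context and restrict it to one strong cipher suite.
# The cipher name is illustrative only; it is not an IBM MQ CipherSpec.
ctx = ssl.create_default_context()
ctx.set_ciphers("ECDHE-RSA-AES256-GCM-SHA384")

# List the cipher suites the context will now offer.
enabled = [c["name"] for c in ctx.get_ciphers()]
print(enabled)
```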
Microsoft Azure Data Lake Storage Gen2 Connector
You can use the document file format in a mapping in advanced mode to read PDF files.
Microsoft Fabric Data Warehouse Connector
You can create a mapping in SQL ELT mode when you extract data from the following sources and load data to Microsoft Fabric Data Warehouse:
•Microsoft Fabric Data Warehouse
•Microsoft Fabric Lakehouse
Microsoft Fabric OneLake Connector
You can use the document file format in a mapping in advanced mode to read PDF files.
MongoDB V2 Connector
You can now use the MongoDB V2 connection in Source and Target transformations in mappings to read data from and write data to MongoDB.
Open Table Connector
You can read from or write to Apache Iceberg tables on Microsoft Azure Data Lake Storage Gen2, managed by the Apache Hive Metastore Catalog. This functionality is available for preview. For more information, see Important notices.
SAP Table Connector
This release improves the performance of extracting changed data from the Change Document Header table in SAP in a mapping configured for delta extraction with the delta update mode.
Snowflake Data Cloud Connector
This release includes the following enhancements for Snowflake Data Cloud Connector:
•Mapping in SQL ELT mode
You can push certain aggregate, conditional expression, context, and cortex functions to Snowflake.
For a list of functions that you can push down to Snowflake, see the Snowflake Data Cloud Connector documentation.
•When you write data from Amazon S3 sources to Snowflake targets, you can access data files stored in Amazon S3 buckets without providing the access key and secret key in mappings enabled for SQL ELT optimization.
This feature also applies to mappings in advanced mode.
•You can view the number of processed and failed rows for each source and target in the job details page for a Snowflake mapping task that is based on a mapping in advanced mode.
•When a CDC mapping task that writes change data from a CDC source to a Snowflake target is stopped, you can restart it from the beginning of the task or resume it from where the task last ended.
For more information about restarting and resuming a mapping task, see Tasks.
SuccessFactors OData Connector
When you read data from SuccessFactors OData, you can filter records for specified dates.
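In OData, date-range filtering is commonly expressed through the `$filter` system query option. The sketch below shows how such a query URL is formed; the host, entity, and field names are hypothetical, and the exact filter syntax the connector generates may differ.

```python
from urllib.parse import quote

def build_date_filter_url(base_url: str, entity: str, date_field: str,
                          start: str, end: str) -> str:
    """Build an OData query URL that filters records between two datetimes."""
    # OData v2 datetime literal syntax: datetime'YYYY-MM-DDTHH:MM:SS'
    flt = (f"{date_field} ge datetime'{start}' and "
           f"{date_field} le datetime'{end}'")
    return f"{base_url}/{entity}?$filter={quote(flt)}"

# Hypothetical endpoint and field names, for illustration only.
url = build_date_filter_url(
    "https://api.example.com/odata/v2", "EmpJob", "lastModifiedDateTime",
    "2024-01-01T00:00:00", "2024-01-31T23:59:59")
print(url)
```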