The November 2024 release includes changes in behavior for the following connectors.
Amazon Redshift V2 Connector
Effective in this release, Amazon Redshift V2 Connector includes the following changed behaviors due to the upgrade of the JDBC driver to version 2.1.0.26:
• When an SQL query in an SQL transformation returns an error, the error message is displayed in the following format:
Error: column "xyz" does not exist in department_auto_dynamic
Before the upgrade, the error messages were displayed in the following format:
[Amazon] (500310) Invalid operation: column "xyz" does not exist in department_auto_dynamic
• When you read an external table with decimal values from a Parquet source in Amazon Redshift, the precision and scale of decimal values are maintained in the target.
For example, if a column of decimal data type has a precision of 15 and a scale of 10 in the source, after the upgrade, the decimal value 12345.6789012345 retains its precision and scale in the target.
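As an informal check of the example above, and not product code, Python's decimal module can confirm that this value carries exactly 15 significant digits and 10 fractional digits, so a DECIMAL(15,10) column holds it without rounding:
```python
from decimal import Decimal

# Illustrative value for a DECIMAL(15,10) column: 5 integer digits + 10 fractional digits.
value = Decimal("12345.6789012345")

# Total significant digits (precision) and fractional digits (scale).
precision = len(value.as_tuple().digits)
scale = -value.as_tuple().exponent

print(precision, scale)  # 15 10 -- the value fits DECIMAL(15,10) exactly, so nothing is truncated
```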
Business 360 FEP Connector
Effective in this release, when you use Business 360 FEP Connector in a mapping for an egress job that exports field groups, the exported data includes the source systems of the field groups. If a field group contains multiple entries, you can identify them by their source systems in the exported data.
Previously, the exported data included only the source systems for the root fields.
Databricks Connector
Effective in this release, Databricks Delta Connector is renamed to Databricks Connector.
Microsoft SQL Server Connector
Effective in this release, the JDBC base driver is updated to version 6.0.0.001282.
Oracle Connector
Effective in this release, Oracle Connector includes the following changed behaviors:
• When you run a mapping with an Oracle connection to write data to a column with the nVarchar2 data type, and the source data length exceeds the precision set for the target object, the rows are rejected with a warning message.
Previously, these records were written to the target after the data was truncated to the precision set for the target object.
• When you set the OdbcDataDirectNonWapi system property to 1 for the DTM type in the Secure Agent and configure a mapping to write data to a column with the VARCHAR2 data type, the rows are rejected with a warning message if the source data length exceeds the precision set for the target object. This issue occurs when you use the driver version available in the base folder.
Previously, these records were written to the target after the data was truncated to the precision set for the target object.
To resolve this issue, perform one of the following actions:
- Increase the length of the column in the Oracle database and resynchronize the target before running the mapping.
- To truncate and write data of the VARCHAR2 data type based on the column width, enter ColumnSizeAsCharacter=0 in the Runtime Advanced Connection Properties field of the Oracle connection and then run the mapping. The sketch after this list illustrates the difference between the two behaviors.
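The following minimal sketch is illustrative only; the column width, values, and function are hypothetical, and the real enforcement happens in the driver, not in user code. It contrasts the new default behavior (reject the oversized row) with the width-based truncation that ColumnSizeAsCharacter=0 restores:
```python
TARGET_PRECISION = 10  # e.g. a hypothetical VARCHAR2(10) target column

def write_row(value: str, truncate_to_column_width: bool) -> str:
    """Mimic the two behaviors described above for an oversized string value."""
    if len(value) <= TARGET_PRECISION:
        return f"written: {value}"
    if truncate_to_column_width:
        # Old behavior, restored by ColumnSizeAsCharacter=0: truncate to the column width.
        return f"written (truncated): {value[:TARGET_PRECISION]}"
    # Default behavior after the driver upgrade: reject the row with a warning.
    return "rejected: source data length exceeds the target precision"

print(write_row("ABCDEFGHIJKLMNO", truncate_to_column_width=False))
print(write_row("ABCDEFGHIJKLMNO", truncate_to_column_width=True))
```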
Snowflake Data Cloud Connector
Effective in this release, you can use an SQL transformation to call overloaded stored procedures in Snowflake, that is, procedures that have the same name but a different number of arguments.
Previously, you could not configure a mapping that uses a schema containing multiple stored procedures with the same name but a different number of arguments.
However, after the upgrade, if you add a stored procedure to a schema used in an existing mapping, and the schema already contains a stored procedure with the same name but a different number of arguments, the mapping fails at run time.
To run the mapping successfully, reselect the stored procedure from the schema and rerun the mapping.
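The following minimal sketch, which assumes hypothetical connection details and a procedure name LOAD_ORDERS that is not part of the product, shows what such an overloaded pair of Snowflake procedures looks like and how a call is resolved by the number of arguments:
```python
# Illustrative only; credentials, schema, and procedure definitions are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="my_password",
    database="MY_DB", schema="DEMO_SCHEMA", warehouse="MY_WH",
)
cur = conn.cursor()

# Two procedures share the name LOAD_ORDERS but take a different number of arguments.
cur.execute("""
CREATE OR REPLACE PROCEDURE LOAD_ORDERS(REGION VARCHAR)
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
BEGIN
  RETURN 'loaded ' || REGION;
END;
$$
""")
cur.execute("""
CREATE OR REPLACE PROCEDURE LOAD_ORDERS(REGION VARCHAR, BATCH_SIZE NUMBER)
RETURNS VARCHAR
LANGUAGE SQL
AS
$$
BEGIN
  RETURN 'loaded ' || REGION || ' in batches of ' || BATCH_SIZE;
END;
$$
""")

# Snowflake resolves the overload by the argument list supplied in the CALL.
cur.execute("CALL LOAD_ORDERS('EMEA')")
print(cur.fetchone())
cur.execute("CALL LOAD_ORDERS('EMEA', 500)")
print(cur.fetchone())
```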