When you add a mapping to a mapping task, you can choose how Data Integration handles changes in the data object schemas. To refresh the schema every time the task runs, you can enable dynamic schema handling in the task.
A schema change includes one or more of the following changes to the data object:
•Fields added, deleted, or renamed.
•Fields updated for data type, precision, or scale.
Configure schema change handling in the Advanced Options section on the Runtime Options tab when you configure the task. You can configure asynchronous or dynamic schema change handling.
When you configure dynamic schema change handling, you can choose from the following options to refresh the schema:
Alter and apply changes
Data Integration applies the following changes from the source schema to the target schema:
- New fields. Alters the target schema and adds the new fields from the source.
- Renamed fields. Adds renamed fields as new columns in the target.
- Data type and precision updates. Applies these changes to the target. Updates to the scale are not applicable.
Don't apply DDL changes
Data Integration does not apply the schema changes to the target.
Drop current and recreate
Drops the existing target table and then recreates the target table at runtime using all the incoming metadata fields from the source.
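For example, if a new ORDER_STATUS field is added to the source and the precision of an existing field increases, the two options that modify the target correspond conceptually to the following Snowflake DDL. The table and column names are hypothetical and Data Integration generates the actual statements internally; the sketch only illustrates how the options differ:

-- Alter and apply changes: the existing target table is kept and altered
ALTER TABLE SALES_TGT ADD COLUMN ORDER_STATUS VARCHAR(20);
ALTER TABLE SALES_TGT ALTER COLUMN ORDER_AMT SET DATA TYPE NUMBER(18,2);

-- Drop current and recreate: the target table and its data are replaced
DROP TABLE SALES_TGT;
CREATE TABLE SALES_TGT (
    ORDER_ID NUMBER(38,0),
    ORDER_AMT NUMBER(18,2),
    ORDER_STATUS VARCHAR(20)
);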
Handling dynamic schemas for mappings in advanced mode
When you run a mapping in advanced mode and a column is renamed or a field is added to the source, the target schema becomes inconsistent because the columns in the source and target schemas no longer match.
To avoid this, set the following properties, separated by an ampersand, in the Additional Write Runtime Parameters field in the Target transformation:
•column_mapping=name
•column_mismatch_behavior=ignore
When you set these properties, Data Integration maps the columns by name and ignores any mismatch between the source and target schemas when the column names contain uppercase letters, digits, or underscores.
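For example, you might enter the following value in the Additional Write Runtime Parameters field of the Target transformation:

column_mapping=name&column_mismatch_behavior=ignore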
If the column names are not in uppercase letters, you must additionally set the keep_column_case=on parameter in both the Source and Target transformations:
•For the source, set the parameter in the Additional JDBC URL Parameters of the Snowflake Data Cloud connection.
•For the target, specify the parameter in the Additional Write Runtime Parameters property of the Target transformation.
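For example, if the column names are in lowercase, the properties might look like the following, where the write runtime parameters value combines the properties with ampersands as described above:

Additional JDBC URL Parameters in the Snowflake Data Cloud connection: keep_column_case=on
Additional Write Runtime Parameters in the Target transformation: column_mapping=name&column_mismatch_behavior=ignore&keep_column_case=on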
Rules and guidelines for dynamic schema handling
Consider the following rules and guidelines when you enable dynamic schema change handling:
•Do not include a source or target override in the advanced properties. If the mapping is enabled for dynamic schema handling and contains a source or target override, the mapping fails with the following error: Exception: class java.lang.NullPointerException occurred in writeMetadata
However, you can specify an SQL override in mappings enabled for dynamic schema handling.
•When you set the keep_column_case=on parameter in the connection properties to handle dynamic schema changes for column names that are not in uppercase letters, ensure that you use this connection only for mappings in advanced mode. Data Integration uses this connection to run the mapping on an advanced cluster. If you use the same connection in mappings that are not in advanced mode, an error occurs.
•In advanced mode, do not select the Truncate Target Table option in the Target transformation properties. If a column is renamed or added in the source schema, and you configure both the Truncate Target Table option and the Additional Write Runtime Parameters for the Target transformation, the mapping fails with the following error: java.lang.UnsupportedOperationException