Before you configure and run a mapping, complete the required prerequisites.
Delta files
You can read from and write to Delta format files in mappings and mappings in advanced mode.
A Delta file consists of the following components:
•Parquet files where the data is stored.
•JSON files where the metadata and data change logs are stored.
Each transaction that modifies the data results in a new JSON file. The JSON files are stored in the _delta_log directory.
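As a sketch of how these components fit together: each commit file under _delta_log is newline-delimited JSON, and the add actions in it point to the Parquet files that hold the data. The snippet below parses an illustrative commit entry with the Python standard library. The file names and field values are hypothetical, though the add action itself is part of the open Delta transaction log protocol.

```python
import json

# A commit file in _delta_log is newline-delimited JSON, one action per line.
# This sample commit is illustrative; real entries carry more fields
# (schema, partition values, timestamps, and so on).
sample_commit = "\n".join([
    json.dumps({"commitInfo": {"operation": "WRITE"}}),
    json.dumps({"add": {"path": "part-00000-abc.snappy.parquet",
                        "size": 1024, "dataChange": True}}),
    json.dumps({"add": {"path": "part-00001-def.snappy.parquet",
                        "size": 2048, "dataChange": True}}),
])

def data_files(commit_text):
    """Return the Parquet files that a single commit adds to the table."""
    files = []
    for line in commit_text.splitlines():
        action = json.loads(line)
        if "add" in action:
            files.append(action["add"]["path"])
    return files

print(data_files(sample_commit))
# Prints the paths of the two Parquet files added by this commit.
```

Because every data change appends a new commit file rather than rewriting earlier ones, the log records the full history of the table.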
Reading and writing Delta files in a mapping in advanced mode
To read from and write to Delta format files in a mapping in advanced mode, you must set the spark.custom.property in the Spark Session Properties section of the mapping task to the following value:
Consider the following rules and guidelines when you read from and write to Delta files:
•You cannot read or write Delta files in a mapping in SQL ELT mode or in a mapping task enabled with SQL ELT optimization.
•You cannot use source partitioning or target partitioning when you read from or write to Delta files.
•When you read from a Delta file and edit the metadata, do not change the data types; otherwise, the mapping fails. You can change only the precision of the data types.
•If you select the Delta format type and set the Schema Source formatting option to Import from schema file, you can upload only a schema file in JSON format.
The following sample shows a schema file for a Delta file:
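As a minimal sketch of what such a JSON schema file might contain, assuming the Spark-style struct schema layout; the column names and types here are illustrative:

```json
{
  "type": "struct",
  "fields": [
    {"name": "c_id", "type": "integer", "nullable": true, "metadata": {}},
    {"name": "c_name", "type": "string", "nullable": true, "metadata": {}},
    {"name": "c_amount", "type": "decimal(10,2)", "nullable": true, "metadata": {}}
  ]
}
```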
•When the metadata of a Delta file changes, you cannot write data to the same Delta file in its current folder path or bucket. You must specify a different path or bucket name.
•When you write to a Delta file, the Secure Agent ignores the target file name override specified in the advanced target properties and retains the existing target file name.
•When you write to a Delta file, the Secure Agent applies Snappy compression by default, regardless of the compression format you select in the advanced target properties.