Select the format of the Microsoft Azure Data Lake Storage Gen2 file and configure the formatting options.
The following table describes the formatting options for Avro, Parquet, Delta, JSON, ORC, Document, and delimited flat files:
Property
Description
Schema Source
The schema of the source or target file.
Select one of the following options to specify a schema:
- Read from data file. Imports the schema from a file in Microsoft Azure Data Lake Storage Gen2.
- Import from schema file. Imports the schema from a schema definition file in the agent machine.
Schema File
The schema definition file on the agent machine from which you want to import the schema.
You cannot upload a schema file when you create a target at runtime.
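The format of the schema definition file depends on the file type. As an illustration only, the following sketch inspects an Avro schema definition file (an .avsc document, which is JSON) before pointing the Schema File property at it. The schema text and field names are hypothetical examples, not part of the product:

```python
import json

# Hypothetical Avro schema definition (.avsc) content. An .avsc file is a
# JSON document that declares a record type and its fields.
AVSC_TEXT = '''
{
  "type": "record",
  "name": "Customer",
  "fields": [
    {"name": "id",   "type": "long"},
    {"name": "name", "type": "string"}
  ]
}
'''

def schema_field_names(avsc_text: str) -> list[str]:
    """Return the field names declared in an Avro record schema."""
    schema = json.loads(avsc_text)
    return [field["name"] for field in schema.get("fields", [])]

print(schema_field_names(AVSC_TEXT))  # ['id', 'name']
```

A quick check like this helps confirm that the schema file parses and declares the expected columns before you import it.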
The following table describes the formatting options for flat files:
Property
Description
Flat File Type
The type of flat file.
Select one of the following options:
- Delimited. Reads a flat file that contains column delimiters.
- Fixed Width. Reads a flat file with fields that have a fixed length.
You must select the file format in the Fixed Width File Format option.
If you do not have a fixed-width file format, click New > Components > Fixed Width File Format to create one.
Delimiter
Character used to separate columns of data in a delimited flat file. You can set the value to a comma, colon, semicolon, tab, or another character.
You cannot type a tab directly in the Delimiter field. To set a tab as the delimiter, type the tab character in a text editor, and then copy and paste it into the Delimiter field.
EscapeChar
Character that immediately precedes a column delimiter embedded in an unquoted string, or that immediately precedes the quote character in a quoted string, in a delimited flat file.
When you write data to Microsoft Azure Data Lake Storage Gen2 and specify a qualifier, the qualifier is used as the escape character by default. Otherwise, the character that you specify as the escape character is used.
Qualifier
Quote character that defines the boundaries of data in a delimited flat file. You can set the qualifier to a single quote or a double quote.
Qualifier Mode
Specify the qualifier behavior when you write data to a delimited flat file.
You can select one of the following options:
- Minimal. Default mode. Applies the qualifier only to data that contains the delimiter value or a special character.
- All. Applies qualifier to all data.
- Non_Numeric. Not applicable.
- All_Non_Null. Not applicable.
Disable escape character when a qualifier is set
Applicable to a Microsoft Azure Data Lake Storage Gen2 target.
Select to disable the escape character when a qualifier is set.
When you disable the escape character, special characters are not escaped and are written to the target as part of the data.
Code Page
Select the code page that the Secure Agent must use to read data from or write data to a delimited flat file.
Select UTF-8 for mappings.
Select one of the following options for mappings in advanced mode:
- UTF-8
- MS Windows Latin 1
- Shift-JIS
- ISO 8859-15 Latin 9 (Western European)
- ISO 8859-3 Southeast European
- ISO 8859-5 Cyrillic
- ISO 8859-9 Latin 5 (Turkish)
- IBM EBCDIC International Latin-1
Header Line Number
Specify the line number that you want to use as the header when you read data from a delimited flat file.
Specify the value as 0 or 1.
To read data from a file with no header, specify the value as 0.
First Data Row1
Specify the line number from which you want the Secure Agent to read data in a delimited flat file. You must enter a value that is greater than or equal to 1.
To read data from the header, the value of the Header Line Number and the First Data Row fields should be the same. Default is 1.
Target Header
Select whether to write data to the delimited flat file target with a header or without a header. You can select the With Header or Without Header option.
This property is not applicable when you read data from a Microsoft Azure Data Lake Storage Gen2 source.
Distribution Column
Not applicable.
Max Rows To Preview
Not applicable.
Row Delimiter
Character used to separate rows of data. You can set values as \r, \n, and \r\n.
This property is not applicable when you read data from a Microsoft Azure Data Lake Storage Gen2 source.
1Doesn't apply to mappings in advanced mode.
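As an illustration of the Minimal and All qualifier modes described above, the following sketch uses Python's csv module as an analogue. The csv.QUOTE_MINIMAL and csv.QUOTE_ALL constants are stand-ins for the connector's behavior, not its implementation, and the sample row is hypothetical:

```python
import csv
import io

# A row in which one value contains the column delimiter (a comma).
row = ["1", "Smith, Jane", "NY"]

def write_row(quoting: int) -> str:
    """Write one row with the given quoting mode and return the line."""
    buf = io.StringIO()
    writer = csv.writer(buf, delimiter=",", quotechar='"', quoting=quoting)
    writer.writerow(row)
    return buf.getvalue().strip()

# Minimal: only the value that contains the delimiter is qualified.
print(write_row(csv.QUOTE_MINIMAL))  # 1,"Smith, Jane",NY
# All: every value is qualified.
print(write_row(csv.QUOTE_ALL))      # "1","Smith, Jane","NY"
```

The same contrast applies when the connector writes a delimited flat file: Minimal qualifies only values that would otherwise be ambiguous, while All qualifies every value.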
The following table describes the formatting options for JSON files:
Property
Description
Data elements to sample1
Specify the number of rows to read to find the best match to populate the metadata.
Memory available to process data1
The memory that the parser uses to read the JSON sample schema and process it.
The default value is 2 MB.
If the file size is more than 2 MB, you might encounter an error. Set the value to at least the size of the file that you want to read.
Read multiple-line JSON files
Not applicable.
1Applies only to mappings in advanced mode.
Rules and guidelines for Delta file format
Consider the following rules and guidelines when you use the Delta file format to read from or write to a Delta Lake:
•Ensure that the source directory is a Delta Lake where data is stored in Parquet format and its corresponding transaction logs are stored in JSON format in a _delta_log folder. You can either select a delta log file or a Parquet file as a source object.
•You cannot use recursive read to read objects stored in subdirectories.
•When you run a mapping that reads data from a directory, the number of rows written to the target depends on the selected source object. If you select a Parquet file, the mapping processes the latest snapshot recorded in the JSON log. If you select a delta log file, the mapping processes all the JSON log files in the Delta Lake.
•You can't write to a partitioned target with delta format.
•You can infer schema from the selected object only if the file is in JSON format.
•When you configure a target operation, you cannot use an existing target to write data. You can only create a new target at runtime.
•You cannot append time stamp information to the file name.
•When you specify a file name override, ensure that the file name matches the following format:
- 0000000000000000000n.json
•When you specify a directory override to override the directory path specified in the connection, ensure that you specify the absolute path including the _delta_log folder. For example, <Directory>/<sub-directory>/_delta_log
•When you use the Delta file format to read or write decimal data types with a precision greater than 24, the following issues occur:
- In a read operation, the mapping fails.
- In a write operation, data is incorrectly written to the target.
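The file name and directory override rules above can be checked up front. The following sketch validates a candidate file name override against the 0000000000000000000n.json pattern and a directory override against the _delta_log requirement. The regular expression and sample paths are illustrative assumptions, not the connector's validation logic:

```python
import re

# A delta log file name is a zero-padded 20-digit version number
# followed by a .json extension, for example 0000000000000000000n.json.
LOG_NAME = re.compile(r"^\d{20}\.json$")

def valid_file_override(name: str) -> bool:
    """Check a file name override against the delta log naming pattern."""
    return bool(LOG_NAME.match(name))

def valid_directory_override(path: str) -> bool:
    """Check that a directory override ends with the _delta_log folder."""
    return path.rstrip("/").endswith("_delta_log")

print(valid_file_override("0" * 19 + "3" + ".json"))      # True
print(valid_file_override("3.json"))                      # False
print(valid_directory_override("sales/2024/_delta_log"))  # True
print(valid_directory_override("sales/2024"))             # False
```

Validating overrides before running the mapping avoids failures caused by a file name or directory path that the Delta file format cannot resolve.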
Rules and guidelines for Document file format
Certain rules and guidelines apply to the Document file format.
Consider the following rules and guidelines when you use the Document file format to read PDF files:
•You cannot read compressed files that use Gzip compression.
•Merged cells in header rows of the PDF are not written accurately to the target.
•Ensure that the header rows are not empty.
•If a table does not contain any data but includes only the header row, the default column name is prefixed to the header row in the target file.
•Tables with dotted borders are written to the target as text.
•If columns in a table move to the next page in the PDF, the table is not written as a single table.
•If the PDF files contain only text, the text might not be well-structured in the target. There might also be a mismatch in the order of text.
•If the content in the PDF is formatted in two columns, the layout is not preserved when the content is written to the target file. Instead of keeping the distinct column structure, the text from both columns is merged and appears in the target as a single, continuous line.
•Multiple adjacent tables are read as a single table.
•The Input Type field continues to appear even when you change the source type from single object to parameter or update the source object in the Source transformation.