
Databricks Delta target properties

You can use a Databricks Delta object as a target in a data loader task.


When you create a data loader task to write data to Databricks Delta, specify the Databricks Delta target properties.
The following table describes the Databricks Delta target properties:

Connection
    Name of the target connection.
Target Name Prefix
    Optional prefix for the target table names. If you enter a prefix, each target table name is the source table name with the prefix prepended. For example, if you enter pfx1_ as the prefix, data from source object src1 is written to target table pfx1_src1.
Write Disposition
    Determines whether the task appends to or overwrites the existing data in a table. Choose one of the following options:
    • Append. Appends data to the existing data in the table, even if the table is empty.
    • Truncate. Overwrites the existing data in the table.
Staging Location
    Relative directory path to store the staging files. Enter the path based on the cluster deployment:
    • AWS. Use the path relative to the Amazon S3 staging bucket.
    • Azure. Use the path relative to the Azure Data Lake Storage Gen2 staging filesystem name.
Table Location
    The path to the table where you want to write the data.
Path
    The path to the Databricks database and schema.
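The Target Name Prefix rule above can be sketched in a few lines of Python. This is an illustrative sketch only; the helper name is an assumption, not part of the product:

```python
def target_table_name(source_name: str, prefix: str = "") -> str:
    """Derive the Databricks Delta target table name.

    If a Target Name Prefix is set, the target table name is the source
    table name with the prefix prepended; with no prefix, the names match.
    (Hypothetical helper for illustration only.)
    """
    return f"{prefix}{source_name}"

# The example from the table: prefix pfx1_ applied to source object src1.
print(target_table_name("src1", "pfx1_"))  # → pfx1_src1
```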

Rules and guidelines for tasks that read data from Microsoft Azure Data Lake Storage Gen2

When you read data from Microsoft Azure Data Lake Storage Gen2 and write data to Databricks Delta, exclude the FileName field from the source before you run the data loader task.

Update mode for Databricks Delta targets

When you write to a Databricks Delta target, the data loader task performs either an upsert or an insert operation, based on how the primary key and watermark fields are configured in the source object.
If you define a primary key field for the source object, the data loader task performs an upsert operation.
If you define neither a primary key field nor a watermark field for the source object, the data loader task performs an insert operation.
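The decision above can be summarized as a small helper; this is a sketch of the documented rules only, and the function name is an assumption, not product API:

```python
def load_operation(primary_key_defined: bool) -> str:
    """Return the write operation the data loader task performs.

    Per the documented rules: a primary key field on the source object
    triggers an upsert; without a primary key (or watermark) field, the
    task performs a plain insert. (Illustrative helper only.)
    """
    return "upsert" if primary_key_defined else "insert"

print(load_operation(True))   # → upsert
print(load_operation(False))  # → insert
```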