Property | Description |
---|---|
Connection | Name of the target connection. You can select an existing connection, create a new connection, or define parameter values for the target connection property. If you want to overwrite the target connection properties at run time, select the Allow parameter to be overridden at run time option. |
Target Type | Type of the Open Table target object. You can choose the Single Object or Parameter target type. |
Parameter | A parameter file where you define values that you want to update without having to edit the task. Select an existing parameter for the target object, or click New Parameter to define a new one. The Parameter property appears only if you select Parameter as the target type. If you want to overwrite the target object at run time, select the Allow parameter to be overridden at run time option. When the task runs, the Secure Agent uses the parameters from the file that you specify in the advanced session properties. |
Object | Name of the target object. You can select an existing object from the list or create a target object at run time. For information on creating the target object at run time, see the Create a target table at runtime topic. Note: You cannot create a target object at run time if you configure the Hive metastore catalog in the connection. |
Operation | Type of the target operation. You can select the Insert, Update, Upsert, or Delete operation. Note: You cannot configure the Data Driven operation on an Open Table target. |
Update Columns | Select the unique key column as the condition field for the update, upsert, or delete operations in an Open Table target. |
Property | Description |
---|---|
Truncate Target | If enabled, the Secure Agent truncates the target table before loading the data. By default, this option is not selected. |
Iceberg Spark Properties | The properties that you want to configure for the Iceberg tables. Enter the properties in the following format: <parameter name>=<parameter value> If you enter more than one property, enter each property on a new line. When you use AWS Glue Catalog with Amazon S3 and the source and target are in different regions, you must specify the bucket region property in the following format for the update, upsert, or delete target operations: BucketRegion=<Amazon-S3-target-bucket-region-name> When you use Hive metastore with Amazon S3 and the source and target are in different regions, you must specify the bucket region property in the following format: BucketRegion=<Amazon-S3-target-bucket-region-name> When you use REST Catalog with Amazon S3 and the source and target are in different regions, you must specify the bucket region property in the following format: restcatalog.iceberg.s3.client.region=<Amazon-S3-source-bucket-region-name> For sample property entries, see the example after this table. |
Delta Spark Properties | The properties that you want to configure for the Delta Lake tables. Enter the properties in the following format: <parameter name>=<parameter value> If you enter more than one property, enter each property on a new line. When you use AWS Glue Catalog with Amazon S3 and the source and target are in different regions, you must specify the bucket region property in the following format: BucketRegion=<bucket-region-name> |
UpdateMode | Loads data to the target based on the update mode that you specify. This property applies when you select the Update or Upsert operation. |
Pre-SQL | Pre-SQL queries to run before writing data to Apache Iceberg Open Table formats. Ensure that the SQL queries use valid Spark SQL syntax. You can enter multiple queries separated by a semicolon. You must prefix the table name with the <OpenTableCatalog> string and the database name, for example, <OpenTableCatalog>.databasename.tablename. The database name and table name identifiers in the queries must be in lowercase. If the pre-SQL query fails, the mapping also fails. For a sample query, see the example after this table. |
Post-SQL | Post-SQL queries to run after writing data to Apache Iceberg Open Table formats. Ensure that the SQL queries use valid Spark SQL syntax. You can enter multiple queries separated by a semicolon. You must prefix the table name with the <OpenTableCatalog> string and the database name, for example, <OpenTableCatalog>.databasename.tablename. The database name and table name identifiers in the queries must be in lowercase. If the post-SQL query fails, the mapping also fails. If the mapping fails, the Secure Agent does not run the post-SQL query. |
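As an illustration of the Spark properties format, the entries below show how the bucket region properties described in this table might look for an Iceberg target when the Amazon S3 buckets are in different regions. The region value us-west-2 is a placeholder, only the entry that matches your catalog type applies, and the comment lines are annotations for this sketch rather than values that you type in the field.

```
# Illustrative Iceberg Spark Properties entries, one property per line.
# Use BucketRegion when the connection uses AWS Glue Catalog or Hive metastore with Amazon S3.
BucketRegion=us-west-2
# Use the following property instead when the connection uses REST Catalog with Amazon S3.
restcatalog.iceberg.s3.client.region=us-west-2
```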
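The following sketch shows the query format that the Pre-SQL and Post-SQL properties expect. The catalog string opentablecatalog, the database salesdb, and the table names are hypothetical placeholders; each table name carries the catalog and database prefix, the identifiers are lowercase, and a semicolon separates the two queries.

```sql
-- Hypothetical pre-SQL entry in Spark SQL syntax.
-- Table names are prefixed with the <OpenTableCatalog> string and the database name,
-- and the database and table identifiers are lowercase.
CREATE TABLE IF NOT EXISTS opentablecatalog.salesdb.orders_audit (order_id INT, load_ts TIMESTAMP);
DELETE FROM opentablecatalog.salesdb.orders_stage WHERE order_status = 'obsolete'
```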