Targets for Amazon SageMaker Lakehouse

When you configure a mapping in advanced mode to use an Amazon SageMaker Lakehouse target, specify the name and description of the target, and configure the target and advanced target properties for the Amazon SageMaker Lakehouse object in a Target transformation.
The following table describes the properties that you can configure in a Target transformation:
Connection
Name of the target connection.
You can select an existing connection, create a new connection, or define parameter values for the target connection property.
If you want to override the target connection properties at run time, select the Allow parameter to be overridden at run time option.
Target Type
Type of the Amazon SageMaker Lakehouse target object.
You can choose from the following target types:
  - Single Object. Select to specify a single Amazon SageMaker Lakehouse object.
  - Parameter. Select to specify a parameter name. You can configure the target object in a mapping task associated with a mapping that uses this Target transformation.
Parameter
A parameter file contains values that you want to update without having to edit the task.
Select an existing parameter for the target object or click New Parameter to define a new parameter for the target object.
The Parameter property appears only if you select Parameter as the target type.
If you want to override the target object at run time, select the Allow parameter to be overridden at run time option.
When the task runs, the Secure Agent uses the parameters from the file that you specify in the advanced session properties.
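For example, a parameter file entry typically assigns a value to a named parameter, one entry per line. The following entry is only an illustrative sketch with a placeholder parameter name and table name; the exact entry syntax depends on your parameter file conventions:
TargetObject=sales_orders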
Object
Name of the target object.
You can select an existing object from the list or create a target object at run time.
Operation
Type of the target operation.
Select one of the following operations:
  - Insert
  - Update
  - Upsert
  - Delete
You cannot use the update, upsert, or delete target operations with the S3 Tables lakehouse pattern.
Note: You cannot configure the Data Driven operation on an Amazon SageMaker Lakehouse target.
Update Columns
The primary key columns used to update, upsert, or delete data in an Amazon SageMaker Lakehouse target.
The following table describes the advanced properties that you can configure in a Target transformation:
Truncate Target
Truncates the target table before loading the data.
By default, the property is not selected.
Iceberg Spark Properties
The Spark configuration properties, specified as key-value pairs, that you want to configure for the Iceberg tables at run time.
Enter the properties in the following format:
<parameter name>=<parameter value>
If you enter more than one property, enter each property on a new line.
When you use the S3 Tables lakehouse pattern, you must specify the S3 table bucket ARN property in the following format:
TableBucketARN=arn:aws:s3tables:us-east-1:001234567890:bucket/sagemaker-s3tables
When the source and target are in different regions, you must specify the bucket region property in the following format for the update, upsert, or delete target operations:
BucketRegion=<Amazon-S3-target-bucket-region-name>
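For example, the following entry for the S3 Tables lakehouse pattern sets the table bucket ARN and, purely to illustrate the one-property-per-line format, a generic Spark setting. The account ID and bucket name are placeholders, and the second property is shown only as an illustration; confirm which additional Spark properties your environment honors:
TableBucketARN=arn:aws:s3tables:us-east-1:001234567890:bucket/sagemaker-s3tables
spark.sql.shuffle.partitions=200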
Update Mode
Determines how the records are updated in the target table.
This property applies when you select the Update or Upsert target operation.
Select one of the following modes:
  - Update As Update. Updates records in the target table if the specified unique key column value matches the incoming column value.
  - Update Else Insert. Updates records in the target table if the specified unique key column value matches the incoming column value. If the unique key column value does not match, the mapping inserts the record as a new row.
Pre-SQL
The SQL queries to run before writing data to Apache Iceberg tables.
Ensure that the SQL queries use valid Spark SQL syntax.
You can enter multiple queries separated by semicolons.
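For example, the following Pre-SQL entry runs two Spark SQL statements separated by a semicolon. The schema, table, and column names are placeholders:
CREATE TABLE IF NOT EXISTS audit.load_log (run_id STRING, started_at TIMESTAMP); DELETE FROM sales.orders_stage WHERE order_date < '2024-01-01'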
Post-SQL
The SQL queries to run after writing data to Apache Iceberg tables.
Ensure that the SQL queries use valid Spark SQL syntax.
You can enter multiple queries separated by semicolons.
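Similarly, a Post-SQL entry might record the load in an audit table and drop a staging table, again with placeholder names:
INSERT INTO audit.load_log VALUES ('run-001', current_timestamp()); DROP TABLE IF EXISTS sales.orders_stage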