
Snowflake objects in mappings

When you create a mapping, you can configure a Source or Target transformation to represent a Snowflake object. You can use tables or views as Snowflake objects.

Snowflake sources in mappings

In a mapping, you can configure a source transformation to represent a Snowflake source.
You can configure partitioning to optimize mapping performance at run time when you read data from Snowflake. The partition type controls how the agent distributes data among partitions at partition points. You can define the partition type as key range partitioning. With partitioning, the agent distributes rows of source data across the number of threads that you define as partitions.
The following table describes the Snowflake source properties that you can configure in a Source transformation:
Property
Description
Connection
Name of the source connection.
Source Type
Type of the source object. Select Single Object, Multiple Objects, Query, or Parameter.
Note: When you use a custom SQL query to import Snowflake tables, the Secure Agent fetches the metadata using separate metadata calls.
Object
The source object for the task. Select the source object for a single source. When you select the multiple objects option, you can add source objects and configure the relationships between them.
Filter
Filters records based on the filter condition. Configure a simple filter.
Sort
Sorts records based on the conditions you specify. You can specify the following sort conditions:
  • Not parameterized. Select the fields and the type of sorting to use.
  • Parameterized. Use a parameter to specify the sort option.
The following table describes the advanced properties that you can configure in a Source transformation:
Advanced Property
Description
Database
Overrides the database specified in the connection.
Schema
Overrides the schema specified in the connection.
Warehouse
Overrides the Snowflake warehouse name specified in the connection.
Role
Overrides the Snowflake role assigned to the user specified in the connection.
Pre SQL
SQL statement that is executed before the read operation starts. For example, if you want to update records in the database before you read the records from the table, specify a Pre-SQL statement.
Post SQL
SQL statement that is executed after the read operation completes. For example, if you want to delete some records after the latest records are loaded, specify a Post-SQL statement.
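The order in which Pre SQL, the read, and Post SQL run can be sketched as follows. This is an illustration only: the statements, table names, and the execute and fetch_rows callables are hypothetical placeholders, not the Secure Agent's actual API.

```python
# Sketch of the execution order for a read operation with Pre SQL and
# Post SQL configured. All SQL text here is hypothetical.

def run_read_operation(execute, fetch_rows):
    # Pre SQL: runs before the read starts, e.g. to update records first.
    execute("UPDATE orders SET status = 'LOCKED' WHERE status = 'NEW'")
    rows = fetch_rows("SELECT * FROM orders")  # the read operation itself
    # Post SQL: runs after the read completes, e.g. to delete processed records.
    execute("DELETE FROM orders WHERE status = 'LOCKED'")
    return rows

# Record each statement to show the order in which they run.
statements = []
rows = run_read_operation(statements.append,
                          lambda query: (statements.append(query) or ["row1"]))
# statements now holds the Pre SQL, the read query, then the Post SQL, in order.
```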
Table Name
Overrides the table name of the imported Snowflake source table.
SQL Override
The SQL statement to override the default query used to read data from the Snowflake source.
Tracing Level
Determines the amount of detail that appears in the log file. You can select Terse, Normal, Verbose Initialization, or Verbose Data. Default value is Normal.

Key range partitioning

You can configure key range partitioning when you use a mapping task to read data from Snowflake sources. With key range partitioning, the agent distributes rows of source data based on the field that you define as partition keys. The agent compares the field value to the range values for each partition and sends rows to the appropriate partitions.
Use key range partitioning for columns that have an even distribution of data values. Otherwise, the partitions might be of unequal size. For example, a column might have 10 rows between key values 1 and 1000 and 999 rows between key values 1001 and 2000. If the mapping includes multiple sources, use the same number of key ranges for each source.
When you define key range partitioning for a column, the agent reads the rows that are within the specified partition range. For example, if you configure two partitions for a column with the ranges as 10 through 20 and 30 through 40, the agent does not read the rows 20 through 30 because these rows are not within the specified partition range.
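The range check described above can be sketched as follows. This sketch assumes start-inclusive, end-exclusive ranges, which matches the example (rows from 20 through 30 fall outside both partitions); it is an illustration, not the agent's actual code.

```python
# Sketch of key range partitioning: a row is read only when its key value
# falls inside one of the configured ranges. Start bounds are inclusive and
# end bounds are exclusive in this sketch.

def assign_partition(value, ranges):
    """Return the index of the partition whose range covers value, else None."""
    for index, (start, end) in enumerate(ranges):
        if start <= value < end:
            return index
    return None  # the agent does not read rows outside every range

# The two partitions from the example above: 10 through 20 and 30 through 40.
ranges = [(10, 20), (30, 40)]
print(assign_partition(15, ranges))  # 0
print(assign_partition(25, ranges))  # None: inside the 20 through 30 gap
print(assign_partition(35, ranges))  # 1
```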
You can configure a partition key for fields of the following data types:

Configuring key range partitioning

Perform the following steps to configure key range partitioning for Snowflake sources:
    1. On the Source Properties page, click the Partitions tab.
    2. Select the required partition key from the list.
    3. Click Add New Key Range to define the number of partitions and the key ranges based on which the agent must partition data.
       Leave the start range blank to indicate the minimum value. Leave the end range blank to indicate the maximum value.
    The following image shows the details of the Partitions tab:
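The blank start and end ranges described in step 3 behave as open-ended bounds, which can be sketched as follows. This is an illustrative model (blank bounds represented as None), not the agent's implementation.

```python
# Sketch of blank key range bounds: a blank (None) start matches any value
# below the end bound, and a blank (None) end matches any value at or above
# the start bound. End bounds are exclusive in this sketch.

def in_range(value, start, end):
    if start is not None and value < start:
        return False
    if end is not None and value >= end:
        return False
    return True

# Three partitions: (blank, 100), (100, 200), (200, blank)
partitions = [(None, 100), (100, 200), (200, None)]
print([in_range(-5, *partitions[0]),
       in_range(150, *partitions[1]),
       in_range(1_000_000, *partitions[2])])  # [True, True, True]
```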

Snowflake targets in mappings

In a mapping, you can configure a target transformation to represent a Snowflake target.
You can write data to an existing table or create a new table in the target by using the Create Target option.
You can configure partitioning to optimize mapping performance at run time when you write data to Snowflake targets. The partition type controls how the agent distributes data among partitions at partition points. You can define the partition type as passthrough partitioning. With partitioning, the agent distributes rows of target data across the number of threads that you define as partitions.
The following table describes the Snowflake target properties that you can configure in a Target transformation:
Property
Description
Connection
Name of the target connection.
Target Type
Type of target object.
Object
The target object for the task. Select the target object.
You can either select an existing table or create a new table.
Create Target
Creates a target. Enter the table name.
Operation
The target operation. Select Insert, Update, Upsert, or Delete.
Note: You cannot use Data Driven operation in Target transformation.
Update columns
The temporary key column to update data to or delete data from a Snowflake target.
If you perform an update, update else insert, or delete operation and the Snowflake target does not include a primary key column, click Add to add a temporary key. You can select multiple columns.
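The role of update columns as a matching key can be sketched as follows: for an update operation, rows are matched on the chosen key columns and the remaining columns are updated. The data and column names here are hypothetical, and this is a simplified in-memory model rather than the agent's behavior against Snowflake.

```python
# Sketch of update columns acting as a temporary key: a target row is updated
# when it matches the incoming row on every key column.

def apply_update(target_rows, incoming, key_columns):
    updated = 0
    for row in target_rows:
        if all(row[col] == incoming[col] for col in key_columns):
            row.update(incoming)
            updated += 1
    return updated

target = [{"id": 1, "region": "EU", "amount": 10},
          {"id": 2, "region": "US", "amount": 20}]
# Match on two key columns, since multiple update columns can be selected.
count = apply_update(target, {"id": 2, "region": "US", "amount": 99},
                     key_columns=["id", "region"])
print(count, target[1]["amount"])  # 1 99
```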
The following table describes the advanced properties that you can configure in a Target transformation:
Advanced Property
Description
Database
Overrides the database specified in the connection.
Schema
Overrides the schema specified in the connection.
Warehouse
Overrides the Snowflake warehouse name specified in the connection.
Role
Overrides the Snowflake role assigned to the user specified in the connection.
Pre SQL
SQL statement that is executed before the write operation starts. For example, if you want to assign a sequence object to a primary key field of the target table before you write data to the table, specify a Pre-SQL statement.
Post SQL
SQL statement that is executed after the write operation completes. For example, if you want to alter a table created by using the Create Target option and assign constraints to it after you write data, specify a Post-SQL statement.
Batch Row Size
Number of rows that the agent writes in a batch to the Snowflake target.
Number of local staging files
Enter the number of local staging files. The agent writes data to the target after the specified number of local staging files are created.
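The effect of Batch Row Size can be sketched as grouping rows into fixed-size batches before they are written. The row values and batch size below are hypothetical; this illustrates the grouping only, not the agent's staging mechanics.

```python
# Sketch of Batch Row Size: rows are grouped into batches of the configured
# size, and the final batch holds whatever rows remain.

def make_batches(rows, batch_row_size):
    return [rows[i:i + batch_row_size]
            for i in range(0, len(rows), batch_row_size)]

batches = make_batches(list(range(10)), batch_row_size=4)
print([len(b) for b in batches])  # [4, 4, 2]
```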
Truncate Target Table
Truncates the database target table before inserting new rows. Select one of the following options:
  • True. Truncates the target table before inserting all rows.
  • False. Inserts new rows without truncating the target table.
Default is False.
Additional Write Runtime Parameters
Specify additional runtime parameters.
For example:
remoteStage=CQA.CQA_SCHEMA.CQA_STAGE
Separate multiple runtime parameters with &.
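Splitting multiple runtime parameters on the ampersand separator can be sketched as follows. The remoteStage value mirrors the example above; the second parameter name is a hypothetical placeholder, not a documented option.

```python
# Sketch of parsing the Additional Write Runtime Parameters string: multiple
# name=value pairs are separated with '&'.

def parse_runtime_params(raw):
    params = {}
    for pair in raw.split("&"):
        name, _, value = pair.partition("=")
        params[name] = value
    return params

# "someOption" is a made-up parameter name used only for illustration.
parsed = parse_runtime_params("remoteStage=CQA.CQA_SCHEMA.CQA_STAGE&someOption=true")
print(parsed)  # {'remoteStage': 'CQA.CQA_SCHEMA.CQA_STAGE', 'someOption': 'true'}
```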
Table Name
Overrides the table name of the Snowflake target table.
Rejected File Path
The filename and path of the file on the Secure Agent machine where the Secure Agent writes records that are rejected while writing to the target. For example, \rejectedfiles\reject7
Success File Directory
Not supported.
Error File Directory
Not supported.
Forward Rejected Rows
Determines whether the transformation passes rejected rows to the next transformation or drops rejected rows. By default, the agent forwards rejected rows to the next transformation.

Snowflake lookups in mappings

You can create lookups for objects in a Snowflake connection. You can retrieve data from a Snowflake lookup object based on the specified lookup condition.
When you configure a lookup in Snowflake, you select the lookup connection and lookup object. You also define the behavior when a lookup condition returns more than one match.
The following table describes the Snowflake lookup object properties that you can configure in a Lookup transformation:
Property
Description
Connection
Name of the lookup connection.
Source Type
Type of the source object. Select Single Object or Parameter.
Lookup Object
Name of the lookup object for the mapping.
Multiple Matches
Behavior when the lookup condition returns multiple matches. Select Return any row, Return all rows, or Report error.
Filter
Not supported.
Sort
Not supported.
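The three Multiple Matches options can be sketched as policies applied to the set of rows that satisfy the lookup condition. The sample rows are hypothetical, and this models only the decision logic described above.

```python
# Sketch of the Multiple Matches options in a Lookup transformation: given all
# rows matching the lookup condition, the selected policy decides the result.

def resolve_matches(matches, policy):
    if policy == "Return any row":
        return matches[0] if matches else None
    if policy == "Return all rows":
        return matches
    if policy == "Report error":
        if len(matches) > 1:
            raise ValueError("lookup condition returned multiple matches")
        return matches[0] if matches else None
    raise ValueError(f"unknown policy: {policy}")

matches = [{"sku": "A1", "price": 10}, {"sku": "A1", "price": 12}]
print(resolve_matches(matches, "Return all rows"))  # both rows
```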
The following table describes the Snowflake lookup object advanced properties that you can configure in a Lookup transformation:
Advanced Property
Description
Database
Overrides the database specified in the connection.
Schema
Overrides the schema specified in the connection.
Warehouse
Overrides the Snowflake warehouse name specified in the connection.
Role
Overrides the Snowflake role assigned to user specified in the connection.
Pre SQL
Not supported.
Post SQL
Not supported.