You can use the additional JDBC URL parameters field in the Snowflake Data Cloud connection to set additional parameters that customize the connection to Snowflake.
You can configure the following properties as additional JDBC URL parameters in the Snowflake Data Cloud connection:
•To override the database and schema name used to create temporary tables in Snowflake, enter the database and schema name in the following format:
•To load data from Amazon S3, Google Cloud Storage, or Microsoft Azure Data Lake Storage Gen2 to Snowflake for SQL ELT optimization, enter the name of the storage integration created in Snowflake for the Amazon S3, Google Cloud Storage, or Microsoft Azure Data Lake Storage Gen2 account in the following format:
storage_integration=<Storage Integration name>
The storage integration name is case-sensitive. For example, if the storage integration that you created in Snowflake for Amazon S3, Google Cloud Storage, or Microsoft Azure Data Lake Storage Gen2 is named STORAGE_INT, you must specify the name exactly as created:
storage_integration=STORAGE_INT
Note: You can also load data from Amazon S3 to Snowflake for SQL ELT optimization without using storage integration.
•To ignore the case of characters within double-quoted identifiers and treat all table names as case-insensitive, enter the following parameter:
QUOTED_IDENTIFIERS_IGNORE_CASE=true
When you set this property to true in the connection, Snowflake ignores the case of characters within double-quoted identifiers and treats all table names as case-insensitive.
After you set this property to true, you cannot access case-sensitive tables with the same connection. To fetch existing case-sensitive tables, create a new connection.
•To tag the queries that a job runs in Snowflake so that you can filter them in the query history on the Snowflake web interface, enter the tag name in the following format:
query_tag=<Tag name>
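You can combine several of the parameters described above in this field. As a minimal illustration, the following value sets a storage integration, the quoted identifier behavior, and a query tag; the '&' separator and the specific values shown here are assumptions for this example, so adjust them to match your environment:
storage_integration=STORAGE_INT&QUOTED_IDENTIFIERS_IGNORE_CASE=true&query_tag=nightly_load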
In addition to the parameters listed above, you can use this field to configure other Snowflake parameters based on your requirements.
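To illustrate what these key-value pairs represent, the following Java sketch appends the same kind of parameters to a Snowflake JDBC URL and opens a connection with the Snowflake JDBC driver. This is an illustration only, not the connector's internal code. The account, credential, database, warehouse, and query tag values are placeholders, and the sketch assumes that the driver accepts these session parameters in the connection string.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;

public class SnowflakeJdbcParamsExample {
    public static void main(String[] args) throws Exception {
        // Base Snowflake JDBC URL; <account_identifier> is a placeholder.
        String baseUrl = "jdbc:snowflake://<account_identifier>.snowflakecomputing.com/";

        // Additional parameters, joined with '&' as in the connection field.
        String additionalParams = "QUOTED_IDENTIFIERS_IGNORE_CASE=true&query_tag=nightly_load";

        // Append connection and session parameters to the URL query string.
        String url = baseUrl + "?db=MYDB&schema=PUBLIC&warehouse=MYWH&" + additionalParams;

        Properties props = new Properties();
        props.put("user", "<user>");          // placeholder credentials
        props.put("password", "<password>");

        try (Connection conn = DriverManager.getConnection(url, props);
             Statement stmt = conn.createStatement();
             // Check that the session parameter took effect.
             ResultSet rs = stmt.executeQuery(
                 "SHOW PARAMETERS LIKE 'QUOTED_IDENTIFIERS_IGNORE_CASE'")) {
            while (rs.next()) {
                // Columns 1 and 2 of SHOW PARAMETERS are the key and its value.
                System.out.println(rs.getString(1) + " = " + rs.getString(2));
            }
        }
    }
}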