When you create a test case in Data Validation, you select the connections to the data sources that contain the data to compare. The connections are created in Administrator.
You can use the following types of connections with Data Validation:
• Amazon Redshift v2
• Amazon S3 v2
• Databricks Delta
• Flat file
• Google BigQuery V2
• Microsoft Azure Data Lake Storage Gen2
• Microsoft Azure Synapse SQL
• MySQL
• Netezza
• ODBC connections with the DB2 subtype
• Oracle
• PostgreSQL
• Salesforce
• SAP HANA
• Snowflake Data Cloud
• SQL Server
• Teradata
Amazon Redshift v2 connection
When you select an Amazon Redshift v2 connection in the test case wizard, you must enter a valid path and a valid S3 bucket name for staging the data.
For more information about configuring an Amazon Redshift v2 connection, see the Data Integration help.
Amazon S3 v2 connection
You can create test cases for flat files and Parquet files within an Amazon S3 v2 connection. When you select an Amazon S3 v2 connection in the test case wizard, you can enter a relative path where the flat file is stored or the relative path to the S3 bucket where the Parquet file is stored. If you do not enter a path, Data Validation lists all the files in the folder path specified in the connection.
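For example, a relative path typically identifies a folder beneath the folder path configured in the connection. The following value is purely illustrative (the folder names are hypothetical, not from this guide):

```
sales/2023/parquet
```

If this path is entered, Data Validation lists only the files stored under that folder rather than all files in the connection's folder path.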
For more information about configuring an Amazon S3 v2 connection, see the Data Integration help.
Databricks Delta connection
You can create test cases for Databricks Delta objects deployed in Azure and AWS environments.
Before you use a Databricks Delta connection, ensure that you have specified the database name in the Databricks Delta connection in Administrator.
Google BigQuery V2 connection
When you select a Google BigQuery V2 connection in the test case wizard, you can enter a schema name to get a list of objects.
For more information about configuring a Google BigQuery V2 connection, see the Data Integration help.
Microsoft Azure Data Lake Storage Gen2 connection
You can create test cases for files of the following formats within a Microsoft Azure Data Lake Storage Gen2 connection:
• Avro
• CSV flat file
• JSON
• ORC
• Parquet
When you select a Microsoft Azure Data Lake Storage Gen2 connection in the test case wizard, you can enter a relative path to the folder where the file is stored. If you do not enter a path, Data Validation lists all the files in the folder path specified in the connection.
You cannot create test cases for hierarchical objects within a Microsoft Azure Data Lake Storage Gen2 connection.
For more information about configuring a Microsoft Azure Data Lake Storage Gen2 connection, see the Data Integration help.
Microsoft Azure Synapse SQL connection
Before you use a Microsoft Azure Synapse SQL connection in Data Validation, in Administrator, enter the Azure DW Schema Name in the following format:
<schema name>
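For example, if the tables to validate reside in a schema named dbo (an illustrative value, not from this guide), the Azure DW Schema Name field would contain:

```
dbo
```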
For more information about configuring a Microsoft Azure Synapse SQL connection, see the Data Integration help.
ODBC connections with the DB2 subtype
You can create test cases for DB2 tables and views using an ODBC connection with the DB2 subtype.
For more information about configuring an ODBC connection with the DB2 subtype, see the Data Integration help.
Snowflake Data Cloud connection
Before you use a Snowflake Data Cloud connection in Data Validation, in Administrator, enter the additional JDBC URL connection parameters in the following format:
db=<DATABASE NAME>&schema=<SCHEMA NAME>
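For example, if the test case data resides in a database named SALES_DB and a schema named PUBLIC (hypothetical names used only for illustration), the parameter string would be:

```
db=SALES_DB&schema=PUBLIC
```

Note that the database and schema names are joined by a single ampersand with no spaces, matching the format above.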
For more information about configuring a Snowflake Data Cloud connection, see the Data Integration help.
Note: If you use a Microsoft Azure Data Lake Storage Gen2 connection or a Snowflake Data Cloud connection, the Secure Agent must be allocated a Java heap size of at least 2048 MB to run test cases. Otherwise, an error might occur. For more information, see the Informatica Knowledge Base article 000167312.
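As a rough sketch, a Java heap size of 2048 MB corresponds to a JVM maximum-heap option such as the following (the exact property name and where it is set for the Secure Agent may vary by release; verify the procedure against the Knowledge Base article referenced above):

```
-Xmx2048m
```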