Database Ingestion and Replication connectors

Before you begin defining connections for database ingestion and replication tasks, verify that the connectors for your source and target types are available in Informatica Intelligent Cloud Services.
The following table lists the connectors that Database Ingestion and Replication requires to connect to a source or target that can be configured in a database ingestion and replication task:
| Source or target type | Connector | Use for |
| --- | --- | --- |
| Amazon Redshift | Amazon Redshift V2 | Targets in initial load, incremental load, and initial and incremental load jobs |
| Amazon S3 | Amazon S3 V2 | Targets in initial load and incremental load jobs |
| Databricks | Databricks | Targets in initial load, incremental load, and initial and incremental load jobs |
| Db2 for i | Db2 for i Database Ingestion | Sources in initial load, incremental load, and initial and incremental load jobs |
| Db2 for Linux, UNIX, and Windows | Db2 for LUW Database Ingestion | Sources in initial load jobs |
| Db2 for z/OS | Db2 for zOS Database Ingestion | Sources in initial load and incremental load jobs |
| Flat file | No connector required | Targets in initial load jobs |
| Google BigQuery | Google BigQuery V2 | Targets in initial load, incremental load, and initial and incremental load jobs |
| Google Cloud Storage | Google Cloud Storage V2 | Targets in initial load and incremental load jobs |
| Kafka, including Apache Kafka, Confluent Kafka, Amazon Managed Streaming for Apache Kafka, and Kafka-enabled Azure Event Hubs | Kafka | Targets in incremental load jobs |
| Microsoft Azure Data Lake Storage Gen2 | Microsoft Azure Data Lake Storage Gen2 | Targets in initial load and incremental load jobs |
| Microsoft SQL Server, including on-premises SQL Server, RDS for SQL Server, Azure SQL Database, and Azure SQL Managed Instance | SQL Server | Sources in initial load, incremental load, and combined initial and incremental load jobs. For Azure SQL Database sources, you must use the Query-based or CDC Tables capture method for incremental load and combined load jobs. Targets in initial load, incremental load, and initial and incremental load jobs. |
| Microsoft Azure Synapse Analytics¹ | Microsoft Azure Synapse Analytics Database Ingestion | Targets in initial load, incremental load, and initial and incremental load jobs |
| Microsoft Fabric OneLake | Microsoft Fabric OneLake | Targets in initial load, incremental load, and initial and incremental load jobs |
| MongoDB | MongoDB Mass Ingestion | Sources in initial load and incremental load jobs |
| MySQL, including RDS for MySQL | MySQL | Sources in initial load and incremental load jobs. RDS for MySQL in initial load jobs only. |
| Netezza | Netezza | Sources in initial load jobs |
| Oracle, including RDS for Oracle | Oracle Database Ingestion | Sources in initial load, incremental load, and initial and incremental load jobs. Targets in initial load, incremental load, and initial and incremental load jobs. |
| Oracle Cloud Infrastructure (OCI) Object Storage | Oracle Cloud Object Storage | Targets in initial load, incremental load, and initial and incremental load jobs |
| PostgreSQL, including on-premises PostgreSQL, Amazon Aurora PostgreSQL, Azure Database for PostgreSQL - Flexible Server, RDS for PostgreSQL, and Cloud SQL for PostgreSQL | PostgreSQL | Sources in initial load, incremental load, and initial and incremental load jobs. Targets in initial load, incremental load, and initial and incremental load jobs (Amazon Aurora PostgreSQL only). |
| SAP HANA, including on-premises SAP HANA and SAP HANA Cloud | SAP HANA Database Ingestion | Sources in initial load and incremental load jobs |
| Snowflake | Snowflake Data Cloud | Targets in initial load, incremental load, and initial and incremental load jobs |
| Teradata Data Warehouse Appliance | Teradata | Sources in initial load jobs |
¹ For the Microsoft Azure Synapse Analytics target type, Database Ingestion and Replication uses Microsoft Azure Data Lake Storage Gen2 to store staging files. Ensure that Microsoft Azure Data Lake Storage Gen2 is available to the target environment.

Mock connectors

Database Ingestion and Replication supports mock, or sample, connections for some sources and targets. Use mock connections to learn how to create database ingestion and replication initial load tasks without creating real connections to a database.
A mock connector does not connect to a real database. Instead, a source mock connector reads flat files that contain sample data. A target mock connector reports information about the processed source data to the Database Ingestion and Replication user interface, but it does not write any data to the target.
The sample connections appear in the source and target connection lists in Database Ingestion and Replication if you have the MockConnector license.
The following table lists mock connections that you can use for Database Ingestion and Replication sources and targets:
| Connection name | Source or Target |
| --- | --- |
| Sample Oracle Connection | Source |
| Sample SQL Server Connection | Source |
| Sample S3 Connection | Target |
| Sample ADLS Gen2 Connection | Target |
Note: You must use sample connections for both the source and target databases. You cannot use a sample connection for only one of them, for example, for the source but not for the target.

Source data

The source data for sample connections is stored in CSV files in the following directory:
Secure_Agent_installation/downloads/package-MockConnector.version/package/sampleData/source/database_type/
Each file represents a single table, and the mock table name matches the file name. The first line in a file contains the column headers, and the subsequent lines contain the row data.
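As a sketch of this file layout, the following Python snippet parses a hypothetical mock source file (the `CUSTOMERS` table name, column names, and values are illustrative, not taken from the shipped sample data). It shows how the header line becomes the column names and each later line becomes one row:

```python
import csv
import io

# Hypothetical contents of a mock source file such as CUSTOMERS.csv.
# First line: column headers. Subsequent lines: row data.
sample = """CUSTOMER_ID,NAME,CITY
1,Alice,Berlin
2,Bob,Austin
"""

# Interpret the file the same way: header row -> column names,
# remaining lines -> one row of table data each.
rows = list(csv.DictReader(io.StringIO(sample)))

print(len(rows))         # number of data rows
print(rows[0]["NAME"])   # value of the NAME column in the first row
```

Here the file would be read as a two-row table named CUSTOMERS with the columns CUSTOMER_ID, NAME, and CITY.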