

| Property | Description |
|---|---|
| Task Name | Enter a name to identify the database ingestion and replication task if you do not want to use the generated name. A descriptive name makes the task easier to find later. Task names can contain Latin alphanumeric characters, spaces, periods (.), commas (,), underscores (_), plus signs (+), and hyphens (-). No other special characters are allowed. Task names are not case sensitive. Maximum length is 50 characters. Note: If you include spaces in the task name, the spaces do not appear in the corresponding job name after you deploy the task. |
| Location | The project or project\folder in Explore that will contain the task definition. If you do not specify a project, the "Default" project is used. |
| Runtime Environment | Select the runtime environment to use to run the task. By default, the runtime environment that you entered when you began defining the task is displayed. You can keep this runtime environment or select another one. Tip: To refresh the list of runtime environments, click Refresh. The runtime environment can be a Secure Agent group that consists of one or more Secure Agents. A Secure Agent is a lightweight program that runs tasks and enables secure communication. Alternatively, for selected cloud source types, you can use a serverless runtime environment hosted on Microsoft Azure. Note: You cannot choose a serverless runtime environment if a local runtime environment was previously selected. The Cloud Hosted Agent is not supported. Select Set as default to use the specified runtime environment as the default for all tasks you create. Otherwise, leave this check box cleared. |
| Description | Optionally, enter a description for the task. Maximum length is 4,000 characters. |
| Schedule | If you want to run an initial load task based on a schedule instead of starting it manually, select Run this task based on a schedule. Then select a schedule that was previously defined in Administrator. The default option is Do not run this task based on a schedule. Note: This field is not available for incremental load and combined initial and incremental load tasks. To view and edit the schedule options, go to Administrator. If you edit the schedule, the changes apply to all jobs that use the schedule. If you edit the schedule after deploying the task, you do not need to redeploy the task. If the schedule criteria for running the job are met but the previous job run is still active, Database Ingestion and Replication skips the new job run. |
| Auto-Tune | Preview Notice: Effective in the April 2026 release, auto-tuning is available for preview. Select this check box to enable automatic tuning of initial load jobs and the unload phase of combined load jobs to optimize performance. This option tunes selected performance-related parameters that affect reading data from the source and writing data to the target. Tuning settings are based on performance and system metrics collected from your environment, including network and database latency, row counts, table sizes, CPU cores, and memory usage. They're also based on application-specific metrics such as JVM heap allocation and task capacity. Note: This option is ignored for tasks that have a MongoDB or Netezza source. |
| Execute in Taskflow | Select this check box to make the task available in Data Integration to add to a taskflow as an event source. You can then include transformations in the taskflow to transform the ingested data. Available for initial load and incremental load tasks with Snowflake targets that don't use the Superpipe option. |

| Option | Description |
|---|---|
| Apply Cycle Interval | Specifies the amount of time that must elapse before a database ingestion and replication job ends an apply cycle. You can specify days, hours, minutes, and seconds, or specify values for a subset of these time fields and leave the other fields blank. The default value is 15 minutes. Note: If you're using an Amazon S3 target with the Apache Iceberg option for the Open Table Format target property, this field is ignored. |
| Apply Cycle Change Limit | Specifies the total number of records in all tables of a database ingestion and replication job that must be processed before the job ends an apply cycle. When this record limit is reached, the database ingestion and replication job ends the apply cycle and writes the change data to the target. The default value is 10,000 records. During startup, jobs might reach this limit more frequently than the apply cycle interval if they need to catch up on processing a backlog of older data. Note: If you're using an Amazon S3 target with the Apache Iceberg option for the Open Table Format target property, this field is ignored. |
| Low Activity Flush Interval | Specifies the amount of time, in hours, minutes, or both, that must elapse during a period of no change activity on the source before a database ingestion and replication job ends an apply cycle. When this time limit is reached, the database ingestion and replication job ends the apply cycle and writes the change data to the target. If you do not specify a value for this option, a database ingestion and replication job ends apply cycles only after either the Apply Cycle Change Limit or Apply Cycle Interval limit is reached. No default value is provided. Note: If you're using an Amazon S3 target with the Apache Iceberg option for the Open Table Format target property, this field is ignored. |
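The interaction of the three options above can be summarized as: a job ends an apply cycle as soon as any configured limit is reached. The following sketch is illustrative only, not product code; the setting and function names are assumptions that mirror the table, with the documented defaults.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApplyCycleSettings:
    """Hypothetical settings mirroring the three apply-cycle options."""
    apply_cycle_interval_secs: int = 15 * 60        # default: 15 minutes
    apply_cycle_change_limit: int = 10000           # default: 10,000 records
    low_activity_flush_secs: Optional[int] = None   # no default value

def should_end_cycle(settings: ApplyCycleSettings,
                     elapsed_secs: float,
                     records_processed: int,
                     secs_since_last_change: float) -> bool:
    """A cycle ends when any configured limit is reached."""
    if elapsed_secs >= settings.apply_cycle_interval_secs:
        return True
    if records_processed >= settings.apply_cycle_change_limit:
        return True
    # The low-activity flush applies only if a value was specified.
    if (settings.low_activity_flush_secs is not None
            and secs_since_last_change >= settings.low_activity_flush_secs):
        return True
    return False
```

Note how leaving the low activity flush interval unset means quiet periods never end a cycle on their own; only the interval or change limit does.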
| Source | Target |
|---|---|
| Db2 for i | Amazon Redshift, Amazon S3, Databricks, Google BigQuery, Google Cloud Storage, Kafka (incremental loads only), Microsoft Azure Data Lake Storage, Microsoft Azure Synapse Analytics, Microsoft Fabric OneLake, Oracle, Oracle Cloud Object Storage, PostgreSQL, Snowflake, and SQL Server |
| Db2 for LUW | Snowflake |
| Db2 for z/OS, except Db2 11 | Amazon Redshift, Amazon S3, Databricks, Google BigQuery, Google Cloud Storage, Kafka (incremental loads only), Microsoft Azure Data Lake Storage, Microsoft Azure Synapse Analytics, Microsoft Fabric OneLake, Oracle, Oracle Cloud Object Storage, Snowflake, and SQL Server |
| Microsoft SQL Server | Amazon Redshift, Amazon S3, Databricks, Google BigQuery, Google Cloud Storage, Kafka (incremental loads only), Microsoft Azure Data Lake Storage, Microsoft Azure Synapse Analytics, Microsoft Fabric OneLake, Oracle, Oracle Cloud Object Storage, PostgreSQL, Snowflake, and SQL Server |
| Oracle | Amazon Redshift, Amazon S3, Databricks, Google BigQuery, Google Cloud Storage, Kafka (incremental loads only), Microsoft Azure Data Lake Storage, Microsoft Azure Synapse Analytics, Microsoft Fabric OneLake, Oracle, Oracle Cloud Object Storage, PostgreSQL, Snowflake, and SQL Server |
| PostgreSQL | Incremental loads: Amazon Redshift, Amazon S3, Databricks, Google BigQuery, Google Cloud Storage, Kafka, Microsoft Azure Data Lake Storage, Microsoft Azure Synapse Analytics, Microsoft Fabric OneLake, Oracle, Oracle Cloud Object Storage, PostgreSQL, and Snowflake. Combined initial and incremental loads: Oracle, PostgreSQL, and Snowflake |
| Option | Description |
|---|---|
| Ignore | Do not replicate DDL changes that occur on the source database to the target. For Amazon Redshift, Kafka, Microsoft Azure Synapse Analytics, PostgreSQL, Snowflake, and SQL Server targets, this option is the default for the Drop Column and Rename Column operation types. For Amazon S3, Google Cloud Storage, Microsoft Azure Data Lake Storage, and Oracle Cloud Object Storage targets that use the CSV output format, the Ignore option is disabled. For the AVRO output format, this option is enabled. |
| Replicate | Replicate the DDL operation to the target. For Amazon S3, Google Cloud Storage, Microsoft Azure Data Lake Storage, Microsoft Fabric OneLake, and Oracle Cloud Object Storage targets, this option is the default option for all operation types. For other targets, this option is the default option for the Add Column and Modify Column operation types. |
| Stop Job | Stop the entire database ingestion and replication job. |
| Stop Table | Stop processing the source table on which the DDL change occurred. When one or more of the tables are excluded from replication because of the Stop Table schema drift option, the job state changes to Running with Warning. Important: The database ingestion and replication job cannot retrieve the data changes that occurred on the source table after the job stopped processing it. Consequently, data loss might occur on the target. To avoid data loss, you will need to resynchronize the source and target objects that the job stopped processing. Use the Resume With Options > Resync option. |
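Conceptually, schema drift handling is a per-operation-type rule lookup. The sketch below is a hypothetical illustration of the four responses, not an actual product API; the rule table shown reflects the defaults described above for targets such as Snowflake or SQL Server.

```python
# Hypothetical default rules for a target where Drop/Rename default to Ignore
# and Add/Modify default to Replicate, per the option descriptions above.
SCHEMA_DRIFT_RULES = {
    "Add Column": "Replicate",
    "Modify Column": "Replicate",
    "Drop Column": "Ignore",
    "Rename Column": "Ignore",
}

def handle_ddl_change(operation: str, rules: dict) -> str:
    """Return the action a job would take for a DDL change on a source table."""
    action = rules.get(operation, "Ignore")
    if action == "Stop Job":
        return "job stopped"
    if action == "Stop Table":
        # The table is excluded and the job runs in a warning state;
        # a resync is needed later to avoid data loss on the target.
        return "table stopped; resync required to avoid data loss"
    if action == "Replicate":
        return "DDL applied to target"
    return "DDL not replicated"
```

Configuring Stop Table for an operation type trades availability for safety: the job keeps running, but the affected table falls behind until it is resynchronized.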
| Option | Description |
|---|---|
| Checkpoint All Rows | Indicates whether a database ingestion and replication job performs checkpoint processing for every message that is sent to the Kafka target. Note: If this check box is selected, the Checkpoint Every Commit, Checkpoint Row Count, and Checkpoint Frequency (secs) options are ignored. |
| Checkpoint Every Commit | Indicates whether a database ingestion and replication job performs checkpoint processing for every commit that occurs on the source. |
| Checkpoint Row Count | Specifies the maximum number of messages that a database ingestion and replication job sends to the target before adding a checkpoint. If you set this option to 0, the job does not perform checkpoint processing based on the number of messages. If you set this option to 1, the job adds a checkpoint for each message. |
| Checkpoint Frequency (secs) | Specifies the maximum number of seconds that must elapse before a database ingestion and replication job adds a checkpoint. If you set this option to 0, a database ingestion and replication job does not perform checkpoint processing based on elapsed time. |
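Taken together, these options form a precedence: Checkpoint All Rows overrides everything else, and a value of 0 disables the row-count or time-based criterion. A minimal sketch of that decision logic, with assumed parameter names rather than product code:

```python
def should_checkpoint(checkpoint_all_rows: bool,
                      checkpoint_every_commit: bool,
                      row_count_limit: int,
                      frequency_secs: int,
                      rows_since_checkpoint: int,
                      secs_since_checkpoint: float,
                      at_source_commit: bool) -> bool:
    """Sketch of when a job writing to a Kafka target would add a checkpoint."""
    # Checkpoint All Rows: every message is checkpointed; other options ignored.
    if checkpoint_all_rows:
        return True
    # Checkpoint Every Commit: checkpoint at each source commit boundary.
    if checkpoint_every_commit and at_source_commit:
        return True
    # Checkpoint Row Count: 0 disables this criterion; 1 checkpoints per message.
    if row_count_limit > 0 and rows_since_checkpoint >= row_count_limit:
        return True
    # Checkpoint Frequency (secs): 0 disables time-based checkpointing.
    if frequency_secs > 0 and secs_since_checkpoint >= frequency_secs:
        return True
    return False
```

Frequent checkpoints shrink the window of messages that must be reprocessed after a restart, at the cost of extra checkpoint overhead per message or per commit.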