
Database and Application Ingestion and Replication REST API

The REST API resources in this section apply to both database ingestion and replication and application ingestion and replication tasks in the Data Ingestion and Replication service.
Both task types use the same base URL, URI, request structure, and headers. The specific task type is determined by the taskType field value that you specify in the request payload when you create a task.

Base URL and URI

Use the serverUrl value from the login response as the base URL. For example:
https://na4-ing.dm-us.informaticacloud.com
Use the following URI for all API calls:
/dbmi/public/api/v2/<API name>

Request header

All requests use the following request-line and header format:
<METHOD> <serverUrl>/<URI> HTTP/<HTTP version>
Content-Type: application/json
IDS-SESSION-ID: <IDS_SESSION_ID>

Example API call

If the serverUrl is https://na4-ing.dm-us.informaticacloud.com and the URI is /dbmi/public/api/v2/task/create, the API call to create a task is a POST request structured as follows:
POST https://na4-ing.dm-us.informaticacloud.com/dbmi/public/api/v2/task/create HTTP/1.1
Content-Type: application/json
IDS-SESSION-ID: 123ABC456789defIJK
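The request line and headers above can be composed programmatically. The following Python sketch uses only the standard library; the serverUrl and session ID values are the illustrative placeholders from this section, not real endpoints or credentials:

```python
import json
from urllib import request

# Placeholder values taken from the examples in this section.
SERVER_URL = "https://na4-ing.dm-us.informaticacloud.com"  # serverUrl from the login response
SESSION_ID = "123ABC456789defIJK"                          # IDS-SESSION-ID from the login response

def build_request(api_name: str, payload: dict) -> request.Request:
    """Compose a POST request for the /dbmi/public/api/v2/<API name> endpoint."""
    url = f"{SERVER_URL}/dbmi/public/api/v2/{api_name}"
    return request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "IDS-SESSION-ID": SESSION_ID,
        },
        method="POST",
    )

req = build_request("task/create", {"taskType": "dbmi"})
# To send it: response = request.urlopen(req)
```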

Create a database or application ingestion and replication task

Use the task resource to create a database ingestion and replication task or application ingestion and replication task in a specified project and folder.

POST request

Use the following POST request to create a task:
/dbmi/public/api/v2/task/create
Include the taskType field in the request payload to specify the type of task that you want to create. If you omit the taskType field, the value defaults to "dbmi".
Include only the source and target connections and task properties relevant to the specified task type. For more information about sources and targets supported for the task type, see the Application Ingestion and Replication and Database Ingestion and Replication documentation.

Example request payload

For an application ingestion and replication task, the request payload with the taskType field is as follows:
{
"taskType": "appmi",
// other application ingestion and replication task-specific fields
}
For a database ingestion and replication task, the request payload with the taskType field is as follows:
{
"taskType": "dbmi",
// other database ingestion and replication task-specific fields
}
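On the client side, you can make the documented default explicit before the payload is sent. The helper below is a hypothetical convenience, not part of the API; only the "dbmi" default and the appmi/dbmi option values come from this reference:

```python
# Valid taskType values per this reference; "dbmi" is the documented default.
VALID_TASK_TYPES = {"dbmi", "appmi"}

def with_task_type(payload: dict) -> dict:
    """Return a copy of the payload with taskType filled in explicitly."""
    result = dict(payload)
    task_type = result.setdefault("taskType", "dbmi")  # service-side default
    if task_type not in VALID_TASK_TYPES:
        raise ValueError(f"unsupported taskType: {task_type!r}")
    return result
```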
The following table describes basic attributes in the request:
Field
Type
Description
taskType
String
The type of ingestion and replication task to create.
Options are:
  • - appmi. Creates an application ingestion and replication task.
  • - dbmi. Creates a database ingestion and replication task.
Default is dbmi.
name
String
Name of the task.
description
String
An optional description for the task.
location
String
Project or project folder that contains the task definition.
runtimeEnvironment
String
The runtime environment in which you want to run the task.
type
String
Type of load operation that the task performs.
Options are:
  • - initial. Loads source data read at a single point in time to a target. After the data is loaded, the ingestion and replication job ends.
  • - cdc. Loads data changes continuously or until the ingestion and replication job is stopped or ends. The ingestion and replication job loads the changes that have occurred since the last time it ran or from a specific start point.
  • - combined. Performs an initial load of point-in-time data to the target and then automatically switches to replicating incremental data changes made to the same source objects on a continuous basis.
The following table describes the source connection and task attributes in the request.
Field
Type
Description
connection
String
Name of the connection for the source system.
schema
String
The source schema that includes the source tables.
This option applies for all database ingestion and replication sources except MongoDB.
salesforceAPI
String
For Salesforce sources in an application ingestion and replication task, the type of Salesforce API that you want to use to retrieve the source data. Options are:
  • - Standard (REST) API. Replicates source fields of Base64 data type.
  • - Bulk API 2.0. Excludes replication of source fields of Base64 data type.
Default is Bulk API 2.0.
database
String
For MongoDB sources in a database ingestion and replication task, the MongoDB database that stores collections with the source data.
journalName
String
For Db2 for i incremental load in a database ingestion and replication task, the name of the journal that records the changes made to source tables.
replicationSlotName
String
For PostgreSQL sources, the unique name of a PostgreSQL replication slot.
replicationPlugin
String
For PostgreSQL sources in a database ingestion and replication task, a replication plugin. Options are:
  • - pgoutput
  • - wal2json
publication
String
For PostgreSQL sources in a database ingestion and replication task, the publication name used by the pgoutput plugin. Use this parameter only if you specified pgoutput as the replicationPlugin value.
selectionRules
List
Optional rules to select a subset of source objects or tables.
Default is a single include rule with the * wildcard, which selects all tables in the task's source schema.
To narrow the tables to be processed by the task, you can define additional Include rules, Exclude rules, or both types of rules.
For example:
selectionRules:
- include: TABLE_1
- exclude: TABLE_2
restartPointForIncrementalLoad
String
For SAP ECC sources in an application ingestion and replication incremental load task and all sources in a database ingestion and replication incremental load or combined initial and incremental load task, the position in the source change stream or logs from which to start reading changes the first time the job runs.
Options are:
  • - earliest
  • - latest
For example:
restartPointForIncrementalLoad: earliest
restartPointForIncrementalLoad: latest
For a database ingestion and replication task, default is latest.
For an application ingestion and replication task, the earliest option is not supported. The default is the restartPointForIncrementalLoadTimestamp value.
If you specify this parameter, do not also specify the restartPointForIncrementalLoadPosition or restartPointForIncrementalLoadTimestamp parameter.
restartPointForIncrementalLoadPosition
Integer
For all sources in a database ingestion and replication incremental load or combined initial and incremental load task, the RBA position in source change stream or logs from which to start reading change records the first time the job runs.
For example:
restartPointForIncrementalLoadPosition: 0
If you specify this parameter, do not also specify the restartPointForIncrementalLoad or restartPointForIncrementalLoadTimestamp parameter.
restartPointForIncrementalLoadTimestamp
Timestamp
For all sources in an application ingestion and replication incremental load task or database ingestion and replication combined initial and incremental load task, the date and time, including AM or PM, in the source change stream or logs from which to start reading change records the first time the job runs.
For example:
restartPointForIncrementalLoadTimestamp: 2021-08-18 02:50:00 PM
For an application ingestion and replication task, this parameter provides the default restart point.
Note: If you specify this parameter, do not also specify the restartPointForIncrementalLoad or restartPointForIncrementalLoadPosition parameter.
cdcInterval
String (time interval)
For Salesforce sources in an application ingestion and replication incremental load or combined load task, the time interval for which the job runs to retrieve change records for CDC.
You can specify cdcIntervalDays, cdcIntervalHours, and cdcIntervalMins parameters.
fetchSize
Integer
For Salesforce sources in an application ingestion and replication initial load or incremental load task, the number of records that the job reads at a time from the source.
The default value for initial load jobs is 50000, and the default value for incremental load jobs is 2000.
If you specified Standard (REST) API for the salesforceAPI parameter, you must change the fetchSize value to 2000.
fetchSizeForInitialLoad
Integer
For Salesforce sources in an application ingestion and replication combined initial and incremental load task, the number of source records that the job reads at a time during the initial unload phase.
The default value is 50000.
If you specified Standard (REST) API for the salesforceAPI field, you must change the fetchSizeForInitialLoad value to 2000.
fetchSizeForIncrementalLoad
Integer
For Salesforce sources in an application ingestion and replication combined initial and incremental load task, the number of change records that the job can fetch at a time from the source during the incremental phase.
The default value is 2000.
For an initial load task or combined initial and incremental load task, if you specified Standard (REST) API for the salesforceAPI parameter, you must update the fetchSize value to 2000.
includeArchivedAndDeletedRows
Boolean
For Salesforce sources in an application ingestion and replication initial load or combined initial and incremental load task, controls whether the job can read archived and soft-deleted rows from the source during the initial load or unload phase of the combined load.
Default is false.
includeBase64Fields
Boolean
For an application ingestion and replication task with a Salesforce source, controls whether the job can replicate data from source fields that have the Base64 data type.
The default value is false.
Configure this parameter only if you set the salesforceAPI parameter to Standard (REST) API.
maximumBase64BodySize
Integer
For Salesforce sources in an application ingestion and replication task, the maximum body size of Base64 encoded data.
Default is 7 MB.
Pass this parameter only if you set the includeBase64Fields field to true.
includeViews
Boolean
For Microsoft SQL Server and Oracle sources in a database ingestion and replication task, indicates whether to include views in the table counts and list of table names.
Options are:
  • - true. Include views.
  • - false. Do not include views.
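The include/exclude semantics described for selectionRules can be modeled with simple wildcard matching. This Python sketch is illustrative only; the evaluation order (includes first, then excludes) is an assumption about behavior that the reference does not spell out:

```python
from fnmatch import fnmatch

def select_tables(tables, rules):
    """Apply include/exclude rules; with no include rule, include all (*)."""
    includes = [r["include"] for r in rules if "include" in r]
    excludes = [r["exclude"] for r in rules if "exclude" in r]
    if not includes:
        includes = ["*"]  # documented default: include all tables
    return [
        t for t in tables
        if any(fnmatch(t, p) for p in includes)
        and not any(fnmatch(t, p) for p in excludes)
    ]

# Table names modeled on the initial load task example in this section.
tables = ["SRC_ALLCHAR", "SRC_ALLCHAR2_8K", "OTHER"]
rules = [{"include": "SRC_ALLC*"}, {"exclude": "SRC_ALLCHAR2_8K"}]
```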
The following table describes the target connection and task attributes in the request:
Field
Type
Description
connection
String
Name of the connection for the target system.
schema
String
For Amazon Redshift, Databricks, Google BigQuery, Microsoft Azure Synapse Analytics, Oracle, and Snowflake targets, the target schema where Data Ingestion and Replication creates the target objects or tables.
bucket
String
For Amazon Redshift, Google Cloud Storage, and Google BigQuery targets, the name of an existing bucket container that stores, organizes, and controls access to the data objects that you load to the target.
directory
String
For Databricks, Amazon Redshift, Google Cloud Storage, and Google BigQuery targets, the virtual directory for the target objects that contain the data.
useTableNameAsTopicName
Boolean
For Apache Kafka targets, indicates whether the task writes messages that contain source data to separate topics, one for each source table or object, or writes all messages to a single topic.
Options are:
  • - true. Write messages to separate table-specific topics.
  • - false. Write all messages to the single topic that has the name specified in the topicName field.
includeSchemaName
Boolean
For Apache Kafka targets, when you set useTableNameAsTopicName to true, this setting adds the source schema name in the table-specific topic names. The topic names then have the format schemaname_tablename.
Options are:
  • - true. Add the source schema name in the table-specific topic names.
  • - false. Do not add the source schema name in the table-specific topic names.
tablePrefix
String
For Apache Kafka targets, when you set useTableNameAsTopicName to true, this parameter specifies an optional prefix to add to the table-specific topic names.
For example, if you specify myprefix_, the topic names have the format myprefix_tablename. If you omit the underscore (_) after the prefix, the prefix is prepended to the table name.
tableSuffix
String
For Apache Kafka targets, when you set useTableNameAsTopicName to true, this parameter specifies an optional suffix to add to the table-specific topic names.
For example, if you specify _mysuffix, the topic names have the format tablename_mysuffix. If you omit the underscore (_) before the suffix, the suffix is appended to the table name.
topicName
String
For Apache Kafka targets, the name of the single Kafka topic to which all messages that contain source data will be written. Use this parameter if useTableNameAsTopicName is set to false.
stage
String
For Snowflake targets, the name of the internal staging area that holds the data read from the source before the data is written to the target tables. This name must not include spaces. If the staging area does not exist, it will be automatically created.
outputFormat
String
For Amazon S3, Flat file, Google Cloud Storage, Microsoft Azure Data Lake Storage, and Kafka targets, the format of the output file.
Options are:
  • - AVRO
  • - CSV
  • - PARQUET
Default is CSV.
Note: Output files in CSV format use double-quotation marks ("") as the delimiter for each field.
parquetFormat
Boolean
For Amazon S3, Flat File, Google Cloud Storage, Microsoft Azure Data Lake Storage, and Kafka targets, if AVRO is specified as the output format in the outputFormat parameter, set this parameter to true to write data in uncompressed Parquet format. Alternatively, you can just set the outputFormat parameter to PARQUET and not include this parameter.
Options are:
  • - true. Write data in uncompressed Parquet format.
  • - false. Write data in AVRO format.
Note: If you set this option to true, you must install Visual C++ Redistributable Packages for Visual Studio 2013 on the computer where the Secure Agent runs.
avroFormat
String
For Amazon S3, Flat file, Google Cloud Storage, Microsoft Azure Data Lake Storage, and Kafka targets, if AVRO is specified as the output format, specifies the format of the Avro schema that will be created for each source table or object.
Options are:
  • - Avro-Flat. This Avro schema format lists all Avro fields in one record.
  • - Avro-Generic. This Avro schema format lists all columns or fields from a source table or object in a single array of Avro fields.
  • - Avro-Nested. This Avro schema format organizes each type of information in a separate record.
Default is Avro-Flat.
avroSerializationFormat
String
For Amazon S3, Flat file, Google Cloud Storage, Microsoft Azure Data Lake Storage, and Kafka targets, if AVRO is specified as the output format, specifies the serialization format of the Avro output files.
Options are:
  • - Binary
  • - JSON
Default value is Binary.
avroSchemaDirectory
String
For Amazon S3, Flat file, Google Cloud Storage, Microsoft Azure Data Lake Storage, and Kafka targets, if AVRO is specified as the output format, specifies the local directory where Data Ingestion and Replication stores Avro schema definitions for each source table or object.
Schema definition files have the following naming pattern: schemaname_tablename.txt
Note: If you do not specify this directory, no Avro schema definition file is produced.
fixedDirectoryForEachTable
Boolean
For an initial load task with an Amazon S3, Google Cloud Storage, or Microsoft Azure Data Lake Storage Gen2 target, indicates whether to use the source table or object names as the names of the directories to which the task writes flat files that contain source data for all job runs.
Options are:
  • - true. Use the source table or object names as the names of the directories for each job run.
  • - false. Create a new set of directories for each job run by using the following naming pattern: tablename_timestamp.
fileCompressionType
String
For Amazon S3, Flat file, Google Cloud Storage, and Microsoft Azure Data Lake Storage targets, the file compression type to optionally use for AVRO or CSV output files.
Options are:
  • - Deflate
  • - gzip
  • - Snappy
If you do not specify this parameter, the output files are not compressed.
avroCompressionType
String
For Amazon S3, Flat file, Google Cloud Storage, Kafka, and Microsoft Azure Data Lake Storage targets, the optional Avro compression type if the output format is set to AVRO.
Options are:
  • - None
  • - Bzip2
  • - Deflate
  • - Snappy
Default value is None, which means no compression is used.
parquetCompressionType
String
For Amazon S3, Flat file, Google Cloud Storage, Kafka, and Microsoft Azure Data Lake Storage targets, the optional Parquet compression type to use when the output format is set to PARQUET, or when the output format is set to AVRO and the parquetFormat parameter is set to true.
Options are:
  • - None
  • - gzip
  • - Snappy
deflateCompressionLevel
Integer
For Amazon S3, Flat file, Google Cloud Storage, Kafka, and Microsoft Azure Data Lake Storage targets, a compression level of 0-9 to use when the avroCompressionType parameter is set to Deflate.
Default is 0.
addDirectoryTags
Boolean
For Amazon S3, Google Cloud Storage, and Microsoft Azure Data Lake Storage targets, indicates whether the task adds the "dt=" prefix to the names of apply cycle directories to be compatible with the naming convention for Hive partitioning.
Options are:
  • - true. Add the "dt=" prefix to the names of apply cycle directories.
  • - false. Do not change the names of apply cycle directories.
Default is false.
directoryTags
String
For Amazon S3, Google Cloud Storage, and Microsoft Azure Data Lake Storage targets, the directory tag value. Use this field only if the addDirectoryTags field is set to true.
For example:
addDirectoryTags: true,
directoryTags: \directory.
renamingRules
List
Optional rules for renaming target tables that correspond to the source tables or objects.
For example:
- source: "*"
  target: "*_1"
dataTypeRules
List
Optional data type mapping rules that override the default source-to-target data-type mapping rules. The default mappings are described under Default Data-Type Mappings in the Application Ingestion and Replication and Database Ingestion and Replication documentation.
For example:
- source: Int
  target: String
cdcCompatibleFormat
Boolean
For Amazon S3, Flat file, Google Cloud Storage, Kafka, and Microsoft Azure Data Lake Storage targets, indicates whether to include UNDO data in the output.
Default is false.
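The renamingRules examples in this section pair a source wildcard pattern with a target template, such as source "*" and target "*_1". A minimal sketch of that expansion, assuming a single "*" per pattern (a simplification, not documented behavior), might look like this:

```python
from fnmatch import fnmatch

def rename(table: str, rules: list) -> str:
    """Return the target name produced by the first matching renaming rule."""
    for rule in rules:
        pattern = rule["source"]
        if fnmatch(table, pattern):
            prefix, star, suffix = pattern.partition("*")
            if star:
                # Capture the text matched by "*" and substitute it
                # into the target template.
                matched = table[len(prefix):len(table) - len(suffix)]
                return rule["target"].replace("*", matched, 1)
            return rule["target"]  # exact-name rule
    return table  # no rule matched; keep the original name
```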
The following table describes the target advanced attributes that you can set for a task:
Field
Type
Description
addOperationType
Boolean
For Amazon S3, Apache Kafka, Flat File, Google Cloud Storage, and Microsoft Azure Data Lake Storage targets in an incremental load or combined initial and incremental load task, set this parameter to true to add a metadata column that includes the source SQL operation type in the output that the job replicates to the target.
For an incremental load or combined initial and incremental load task, default is true. For an initial load task, default is false.
addOperationTime
Boolean
For Amazon S3, Apache Kafka, Flat File, Google Cloud Storage, and Microsoft Azure Data Lake Storage targets in an incremental load or combined initial and incremental load task, set this parameter to true to add a metadata column that includes the source SQL operation time in the output that the job replicates to the target. For initial loads, the job always writes the current date and time. Default is false for all load types.
addOperationOwner
Boolean
For Amazon S3, Apache Kafka, Flat File, Google Cloud Storage, and Microsoft Azure Data Lake Storage targets in an incremental load or combined initial and incremental load task, set this parameter to true to add a metadata column that includes the owner of the source SQL operation in the output that the job replicates to the target.
For initial loads, the job always writes "INFA" as the owner. Default is false.
addOperationTransactionId
Boolean
For Amazon S3, Apache Kafka, Flat File, Google Cloud Storage, and Microsoft Azure Data Lake Storage targets in an incremental load or combined initial and incremental load task, set this parameter to true to add a metadata column that includes the source transaction ID in the output that the job replicates to the target for SQL operations.
Default is false.
addBeforeImages
Boolean
For Amazon S3, Apache Kafka, Flat File, Google Cloud Storage, and Microsoft Azure Data Lake Storage targets in an incremental load or combined load task, set this parameter to true to include UNDO data in the output that the job writes to the target.
Default is false.
asyncWrite
Boolean
For Kafka targets, controls whether to use asynchronous delivery of messages to Kafka.
Options are:
  • - true. Use asynchronous delivery. Data Ingestion and Replication sends messages as soon as possible, without regard for the order in which the changes were retrieved from the source.
  • - false. Use synchronous delivery. Kafka must acknowledge each message as received before Data Ingestion and Replication sends the next message. In this mode, Kafka is unlikely to receive duplicate messages. However, performance might be slower.
Default is true.
producerConfigurationProperties
String
For Kafka targets, a comma-separated list of key=value pairs that specifies Kafka producer properties for Apache Kafka, Confluent Kafka, or Kafka-enabled Event Hubs targets.
If you have a Confluent target that uses Confluent Schema Registry to store schemas, you must specify the following properties:
schema.registry.url=url, key.serializer= org.apache.kafka.common.serialization.StringSerializer, value.serializer= io.confluent.kafka.serializers.KafkaAvroSerializer
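The producerConfigurationProperties value is a comma-separated list of key=value pairs. The following sketch shows how a client might build a dictionary from such a list before sending it; treating whitespace around keys and values as insignificant is an assumption, and the registry URL shown is a placeholder:

```python
def parse_producer_properties(value: str) -> dict:
    """Parse a comma-separated key=value list into a dict."""
    props = {}
    for pair in value.split(","):
        if not pair.strip():
            continue  # tolerate trailing commas
        key, _, val = pair.partition("=")
        props[key.strip()] = val.strip()
    return props

conf = parse_producer_properties(
    "schema.registry.url=http://localhost:8081, "
    "key.serializer= org.apache.kafka.common.serialization.StringSerializer"
)
```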
The following table describes the run-time options that you can set for a task:
Field
Type
Description
schemaDriftOptions
String
For Microsoft SQL Server, Oracle, or PostgreSQL sources in a database ingestion and replication incremental load or combined initial and incremental load task, the schema drift options for DDL operations.
You can set a schema drift option for each of the following DDL operation types:
  • - addColumn
  • - modifyColumn
  • - dropColumn
  • - renameColumn
Valid values for each option are:
  • - ignore
  • - replicate
  • - stop job
  • - stop table
For example:
schemaDriftOptions:
addColumn: ignore
modifyColumn: replicate
dropColumn: replicate
renameColumn: replicate
Note: Tasks that have a Microsoft Azure Synapse Analytics target ignore the renameColumn option.
numberOfRowsInOutputFile
Integer
For Amazon Redshift, Amazon S3, Google Big Query, Google Cloud Storage, Microsoft Azure Data Lake Storage, Microsoft Azure Synapse Analytics, Oracle, and Snowflake targets, the maximum number of rows that the task writes to an output data file on a target.
fileExtensionBasedOnFileType
Boolean
For Flat File, Amazon S3, Google Cloud Storage, and Microsoft Azure Data Lake Storage targets, indicates whether the output data files have file-name extensions based on their file types or use the generic .dat extension.
  • - true. The output files have file-name extensions based on their file types.
  • - false. The output files have the .dat extension.
The default value is true.
applyCycleChangeLimit
Integer
For Amazon S3, Google Cloud Storage, and Microsoft Azure Data Lake Storage Gen2 targets in an incremental load task, the number of records that must be processed before the job ends an apply cycle. When this record limit is reached, the job ends the apply cycle and writes the change data to the target.
Default is 10000.
Note: Either the applyCycleChangeLimit parameter or one of the applyCycleInterval<time_unit> parameters must have a non-zero value.
applyCycleIntervalDays
Integer
For Amazon S3, Google Cloud Storage, or Microsoft Azure Data Lake Storage Gen2 targets in an incremental load task, the number of days that must elapse before the application ingestion and replication job or database ingestion and replication job ends an apply cycle.
You can specify this parameter with the applyCycleIntervalHours, applyCycleIntervalMins, and applyCycleIntervalSecs parameters or specify a subset of these parameters.
Default is 0.
applyCycleIntervalHours
Integer
For Amazon S3, Google Cloud Storage, and Microsoft Azure Data Lake Storage Gen2 targets in an incremental load task, the number of hours that must elapse before a job ends an apply cycle.
Default is 0.
applyCycleIntervalMins
Integer
For Amazon S3, Google Cloud Storage, and Microsoft Azure Data Lake Storage Gen2 targets in an incremental load task, the number of minutes that must elapse before the application ingestion and replication job or database ingestion and replication job ends an apply cycle.
Default is 15.
applyCycleIntervalSecs
Integer
For Amazon S3, Google Cloud Storage, and Microsoft Azure Data Lake Storage Gen2 targets in an incremental load task, the number of seconds that must elapse before the application ingestion and replication job or database ingestion and replication job ends an apply cycle.
Default is 0.
lowActivityFlushHours
Integer
For Amazon S3, Google Cloud Storage, and Microsoft Azure Data Lake Storage Gen2 targets, the amount of time, in hours, that must elapse during a period of no change activity on the source before the application ingestion and replication job or database ingestion and replication job ends an apply cycle.
You can use this parameter in conjunction with the lowActivityFlushMins parameter or specify only one of these parameters. When this time limit is reached, the job ends the apply cycle and writes the change data to the target.
Default is 0. If you do not specify a value for lowActivityFlushHours or lowActivityFlushMins, the application ingestion and replication job or database ingestion and replication job ends apply cycles only after either the applyCycleChangeLimit or cycle interval time limit is reached.
lowActivityFlushMins
Integer
For Amazon S3, Google Cloud Storage, and Microsoft Azure Data Lake Storage Gen2 targets, the amount of time, in minutes, that must elapse during a period of no change activity on the source before the application ingestion and replication job or database ingestion and replication job ends an apply cycle.
You can use this parameter in conjunction with the lowActivityFlushHours parameter or specify only one of these parameters. When the overall time limit is reached, the ingestion and replication job ends the apply cycle and writes the change data to the target.
Default is 0. If you do not specify a value for lowActivityFlushHours or lowActivityFlushMins, the job ends apply cycles only after either the applyCycleChangeLimit or cycle interval time limit is reached.
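The apply cycle parameters above interact: a cycle ends when the change limit, the cycle interval, or the low-activity flush threshold is reached first. The following schematic model (not product code) illustrates that interaction using the documented defaults:

```python
def cycle_should_end(records_processed: int,
                     elapsed_secs: float,
                     idle_secs: float,
                     change_limit: int = 10000,          # applyCycleChangeLimit default
                     interval_secs: float = 15 * 60,     # applyCycleIntervalMins default
                     low_activity_flush_secs: float = 0  # disabled by default
                     ) -> bool:
    """Return True when any enabled apply-cycle end condition is met."""
    if change_limit and records_processed >= change_limit:
        return True  # record limit reached
    if interval_secs and elapsed_secs >= interval_secs:
        return True  # cycle interval elapsed
    if low_activity_flush_secs and idle_secs >= low_activity_flush_secs:
        return True  # no source change activity for the flush period
    return False
```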
The following table describes check point attributes that define when and how progress is saved during data ingestion and replication:
Field
Type
Description
checkpointAllRows
Boolean
For Kafka targets in an incremental load task, indicates whether the ingestion and replication job performs checkpoint processing for every message that is sent to the Kafka target. When this parameter is set to true, the checkpointEveryCommit, checkpointRowCount, and checkpointFrequencySecs parameters are ignored.
Default is true.
checkpointEveryCommit
Boolean
For Kafka targets in an incremental load job, indicates whether the ingestion and replication job performs checkpoint processing for every commit that occurs on the source.
Default is false. If you set this parameter to true, do not also specify checkpointAllRows.
checkpointRowCount
Integer
For a Kafka target in an incremental load task, the maximum number of messages that the job sends to the target before adding a checkpoint.
If you set this option to 0, an ingestion and replication job does not perform checkpoint processing based on the number of messages. If you set this option to 1, the ingestion and replication job adds a checkpoint for each message.
Default is 0. If you set this parameter to a non-zero number, do not also specify checkpointAllRows.
checkpointFrequencySecs
Integer
For a Kafka target in an incremental load task, the maximum number of seconds that must elapse before the ingestion and replication job adds a checkpoint. If you set this option to 0, the job does not perform checkpoint processing based on elapsed time.
Default is 0. If you set this parameter to a non-zero number, do not also specify checkpointAllRows.
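The four checkpoint parameters form a precedence: checkpointAllRows overrides the other settings, and the row-count and elapsed-time thresholds apply only when non-zero. A schematic sketch of that logic (a model of the described behavior, not product code):

```python
def should_checkpoint(messages_since_checkpoint: int,
                      secs_since_checkpoint: float,
                      commit_boundary: bool,
                      checkpoint_all_rows: bool = True,     # documented default
                      checkpoint_every_commit: bool = False,
                      checkpoint_row_count: int = 0,
                      checkpoint_frequency_secs: int = 0) -> bool:
    """Decide whether to add a checkpoint for a Kafka incremental load job."""
    if checkpoint_all_rows:
        return True  # checkpoint every message; other options are ignored
    if checkpoint_every_commit and commit_boundary:
        return True  # checkpoint on each source commit
    if checkpoint_row_count and messages_since_checkpoint >= checkpoint_row_count:
        return True  # message-count threshold reached
    if checkpoint_frequency_secs and secs_since_checkpoint >= checkpoint_frequency_secs:
        return True  # elapsed-time threshold reached
    return False
```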
The following table describes custom and schedule attributes that you can set for a task:
Field
Type
Description
customProperties
String
Custom properties for the source, target, and task definition that Informatica provides to meet special requirements. Specify these properties only at the direction of Informatica Global Customer Support.
For example:
"customProperties": {
"readerInputIsPersisted": true
}
schedule
String
The name of a predefined schedule in Administrator to run job instances for initial load tasks automatically after deployment.

Successful post response

If the task is created successfully, the API returns a 201 Created success code. The response typically includes the following status line and header:
HTTP/2 201 Created
Content-Type: application/json;charset=UTF-8
The response also includes the following information about the task:
Field
Type
Description
location
String
Project and folder name in which the task is created.
taskId
String
ID generated for the newly created task.
frsId
String
FRS reference ID.
name
String
Name of the task.
description
String
Description for the task.
createdBy
String
ID of the user who created the task.
documentType
String
Type of document: DBMI_TASK or APPMI_TASK based on the taskType value specified in the request.
parentInfo
String
Parent project and structure details.
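A client typically reads the taskId and project details out of the 201 response body. This sketch parses a response shaped like the POST response examples later in this section; the values shown are taken from those examples:

```python
# Sample response body modeled on the initial load POST response example.
response = {
    "location": "Oracle",
    "taskId": "604065",
    "name": "ora_2_snflk_api2",
    "documentType": "DBMI_TASK",
    "parentInfo": [
        {"parentId": "7cCn5thwWFLhiZoSosphKL", "parentName": "REG",
         "parentType": "Space"},
        {"parentId": "33YxQayEoi1d5QgU02OQGV", "parentName": "Oracle",
         "parentType": "Project"},
    ],
}

task_id = response["taskId"]
# Pick out the containing project from the parentInfo entries.
project = next(
    (p["parentName"] for p in response["parentInfo"]
     if p["parentType"] == "Project"),
    None,
)
```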

Initial load task example

Post request
Use this sample as a reference to create a database ingestion and replication task:
{
"general": {
"name": "ora_2_snflk_api2",
"description": "this is task created from api for unload",
"location": "Oracle",
"runtimeEnvironment": "asvappmiperf01",
"type": "initial"
},
"source": {
"connection": "Oracle_19RDS_yk-CDC",
"schema": "AUTO_INIT",
"selectionRules": [
{
"include": "SRC_ALLC*"
},
{
"exclude": "SRC_ALLCHAR2_8K"
}
],
"includeViews": true,
"customProperties": {
"readerInputIsPersisted": true
}
},
"target": {
"connection": "AIN_Snowflake",
"schema": "SP5",
"stage": "gkn",
"renamingRules": [
{
"source": "SRC_ALLC*",
"target": "SRC_ALLC*_F25"
}
],
"dataTypeRules": [
{
"source": "CHAR(2 BYTE)",
"target": "CHAR(200 BYTE)"
}
]

},
"runtimeOptions": {



"numberOfRowsInOutputFile": 10000,
"executeInTaskflow": "false"
},
"schedule": "every5mins",
"taskType": "DBMI"
}
Post response
A successful POST response returns a summary similar to the following example for an initial load job:
{
"location": "Oracle",
"taskId": "604065",
"frsId": "7GYYqXEQPz5dcvJOaowiai",
"name": "ora_2_snflk_api2",
"description": "this is task created from api for unload",
"createdBy": "6r2IItttmU3ibDwPqxnRwb",
"documentType": "DBMI_TASK",
"parentInfo": [
{
"parentId": "7cCn5thwWFLhiZoSosphKL",
"parentName": "REG",
"parentType": "Space"
},
{
"parentId": "33YxQayEoi1d5QgU02OQGV",
"parentName": "Oracle",
"parentType": "Project"
}
]
}

Incremental load task example

Post request
Use this sample as a reference to create a database ingestion and replication incremental load job:
{
"general": {
"name": "ora_2_snflk_api_cdc",
"description": "this is task created from api for cdc",
"location": "Oracle",
"runtimeEnvironment": "asvappmiperf01",
"type": "cdc"
},
"source": {
"connection": "Oracle_19RDS_yk-CDC",
"schema": "SP_UNLOAD",
"selectionRules": [
{
"include": "FIX9"
},
{
"exclude": "FIX4"
}
],
"restartPointForIncrementalLoad": "latest",
"customProperties": {
"readerInputIsPersisted": true
}
},
"target": {
"connection": "AIN_Snowflake",
"schema": "SP5",
"stage": "gkn",
"renamingRules": [
{
"source": "FIX9",
"target": "FIX9_API"
}
],
"dataTypeRules": [
{
"source": "CHAR(300 BYTE)",
"target": "CHAR(600 BYTE)"
}
]

},
"runtimeOptions": {
"schemaDriftOptions": {
"addColumn": "ignore",
"modifyColumn": "replicate",
"dropColumn": "ignore",
"renameColumn": "replicate"
},
"numberOfRowsInOutputFile": 10000,
"executeInTaskflow": "false"
},
"taskType": "DBMI"
}
POST response
A successful POST response returns a summary similar to the following example for an incremental load job:
{
"location": "Oracle",
"taskId": "604203",
"frsId": "2fPHZbdT2epie8HCOkhIgY",
"name": "ora_2_snflk_api_cdc",
"description": "this is task created from api for cdc",
"createdBy": "6r2IItttmU3ibDwPqxnRwb",
"documentType": "DBMI_TASK",
"parentInfo": [
{
"parentId": "7cCn5thwWFLhiZoSosphKL",
"parentName": "REG",
"parentType": "Space"
},
{
"parentId": "33YxQayEoi1d5QgU02OQGV",
"parentName": "Oracle",
"parentType": "Project"
}
]
}

Combined load task example

POST request
Use this sample as a reference to create a database ingestion and replication combined load job:
{
"general": {
"name": "ora_2_snflk_api_comb",
"description": "this is task created from api for combined",
"location": "Oracle",
"runtimeEnvironment": "asvappmiperf01",
"type": "combined"
},
"source": {
"connection": "Oracle_19RDS_yk-CDC",
"schema": "SP_UNLOAD",
"selectionRules": [
{
"include": "FIX9"
},
{
"exclude": "FIX4"
}
],
"restartPointForIncrementalLoad": "latest",
"customProperties": {
"readerInputIsPersisted": true
}
},
"target": {
"connection": "AIN_Snowflake",
"schema": "SP5",
"stage": "gkn",
"renamingRules": [
{
"source": "FIX9",
"target": "FIX9_API2"
}
],
"dataTypeRules": [
{
"source": "CHAR(300 BYTE)",
"target": "CHAR(600 BYTE)"
}
]
},
"runtimeOptions": {
"schemaDriftOptions": {
"addColumn": "ignore",
"modifyColumn": "replicate",
"dropColumn": "ignore",
"renameColumn": "stop_table"
},
"numberOfRowsInOutputFile": 10000,
"executeInTaskflow": "false"
},
"taskType": "DBMI"
}
POST response
A successful POST response returns a summary similar to the following example for a combined load job:
{
"location": "Oracle",
"taskId": "604439",
"frsId": "849OHrnmXLJdZql1JzmG8L",
"name": "ora_2_snflk_api_comb",
"description": "this is task created from api for combined",
"createdBy": "6r2IItttmU3ibDwPqxnRwb",
"documentType": "DBMI_TASK",
"parentInfo": [
{
"parentId": "7cCn5thwWFLhiZoSosphKL",
"parentName": "REG",
"parentType": "Space"
},
{
"parentId": "33YxQayEoi1d5QgU02OQGV",
"parentName": "Oracle",
"parentType": "Project"
}
]
}

Failed responses and error codes

If the request fails, the API might return one of the following HTTP error codes:
Name
HTTP status code
Error message
Unauthorized
401
Invalid sessionId.
Forbidden
403
Logged-in user does not have permission to create a task or access the specified folder.
Internal server error
500
Failed due to an internal server error.

Deploy a database or application ingestion and replication task

Use the deploy resource to deploy a database ingestion and replication task or application ingestion and replication task.
To deploy a task, perform the following steps:

Get the task ID

Before you deploy a task, you need to retrieve the task ID.
Use the following GET request to retrieve the task ID:
GET /dbmi/public/api/v2/tasks/fetchId?taskName={taskName}&projectId={projectId}&folderId={folderId}
You can include the following optional attributes in the request:
Field
Type
Description
projectId
string
The ID of the project where the task resides.
folderId
string
The ID of the folder where the task resides in the project.
taskName
string
Name of the task.

Get the project ID, folder ID, and task name

You can get the project ID, folder ID, and task name for the task that you want to deploy by using one of the following methods:
Use the FRS API to get the folder ID and project ID by filtering documents with the task name.
GET /frs/api/v1/Documents?$filter=name eq 'taskName'
For example:
https://pod.ics.dev:444/frs/api/v1/Documents?$filter=name eq 'zendeskToSlfk'
Alternatively, you can get the details from the task user interface.

Sample request

Use the following sample as a reference to get the task ID:
GET https://pod-ing.ics.dev:11447/dbmi/public/api/v2/tasks/fetchId?projectId=1lHbX3urKB8dHJ071tXV8Fg&folderId=5GX14m3CuFAeVWqnsSX9mC&taskName=zendeskToSlfk

Sample response

A successful response returns a summary including the task name and task ID similar to the following example:
{
"totalCount": 1,
"pageNo": 0,
"pageSize": 25,
"documents": [
{
"name": "check_comb_case2_feb4",
"parentInfo": [
{
"parentId": "7cCn5thwWFLhiZoSosphKL",
"parentName": "REG",
"parentType": "Space"
},
{
"parentId": "33YxQayEoi1d5QgU02OQGV",
"parentName": "Oracle",
"parentType": "Project"
},
{
"parentId": "77uOlq0PAQWl3z2lDjAtWk",
"parentName": "Snowflake",
"parentType": "Folder"
}
],
"taskId": 593603
}
]
}
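
As a sketch, the task ID and its project parent can be pulled out of this response with a few lines of Python. The response text below is abbreviated to the fields used:

```python
import json

# Abbreviated fetchId response based on the sample above.
response_text = """
{
  "totalCount": 1,
  "documents": [
    {
      "name": "check_comb_case2_feb4",
      "parentInfo": [
        {"parentId": "33YxQayEoi1d5QgU02OQGV", "parentName": "Oracle", "parentType": "Project"},
        {"parentId": "77uOlq0PAQWl3z2lDjAtWk", "parentName": "Snowflake", "parentType": "Folder"}
      ],
      "taskId": 593603
    }
  ]
}
"""
doc = json.loads(response_text)["documents"][0]
task_id = doc["taskId"]
# Find the parent entry that identifies the project.
project = next(p["parentName"] for p in doc["parentInfo"] if p["parentType"] == "Project")
print(task_id, project)  # 593603 Oracle
```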

Pagination attributes

To paginate the results, include the following pagination attributes in your GET request:
GET /dbmi/public/api/v2/tasks/fetchId?orderBy={nameOfThecolumn}&pageNo={int}&pageSize={int}
Fields
Type
Description
orderBy
string
Column name to sort the results.
You can use one of the following column names to specify the sorting order:
  • name
  • createdTime
  • lastUpdatedTime
  • lastAccessedTime
pageNo
integer
Page number to retrieve after pagination.
pageSize
integer
Number of records per page.

Deploy the task

After you get the taskId, deploy the task using the deploy resource.
Use the following POST request to deploy the task:
POST /dbmi/public/api/v2/task/deploy/{taskId}
Include the following attribute in the request:
Field
Type
Description
taskId
integer
The ID associated with the task.

Sample response

A successful deployment request returns a 202 status code with a summary including the job ID and deployment status, indicating that deployment has started:

{
"status": "success",
"code": 202,
"message": "Deployment started",
"data": {
"jobId": 4,
"deploymentStatus": "processing_deploy"
}
}
Note: Deployment runs asynchronously. For tasks that include many tables, deployment might take longer. The API responds immediately with a "Deployment started" message. To monitor the status, query the Get Job Status API with the job ID. For more information, see Get Job Status API.
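
Because deployment is asynchronous, a client typically polls the job status until the job leaves the Deploying state. The following is a minimal polling sketch, assuming get_status is any callable that returns the status string from the Get Job Status API:

```python
import time

def wait_for_deploy(get_status, job_id, timeout=300, interval=5):
    """Poll until the job leaves the 'Deploying' state or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(job_id)
        if status.lower() != "deploying":
            return status
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} still deploying after {timeout}s")

# Example with a stubbed status sequence instead of real API calls:
states = iter(["Deploying", "Deploying", "Deployed"])
final = wait_for_deploy(lambda _job_id: next(states), 4, interval=0)
print(final)  # Deployed
```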

Failed responses

If the request fails, the API might return one of the following HTTP error codes:
Name
HTTP Status Code
Error Message
Unauthorized
401
Invalid sessionId.
Bad Request
400
Missing body.
Forbidden
403
No permission to deploy the task.
Not found
404
Mass Ingestion Databases could not find the task with ID.
Deployment conflict
409
Mass Ingestion task already deployed.
Internal server error
500
Failed due to server internal error.

Run a database or application ingestion and replication job

Use the job resource to run a database ingestion and replication job or application ingestion and replication job.
Perform the following tasks to run and monitor a task:

Get the job ID

To start a job, you need to retrieve the job ID.
Use the following GET request to fetch the job ID:
GET /dbmi/public/api/v2/task/fetch/{taskId}/job
Include the following attribute in the request:
Field
Type
Description
taskId
integer
ID of the deployed task.

Sample request

Use the following sample as a reference to get the job ID:
GET https://pod-ing.ics.dev:11447/dbmi/public/api/v2/task/fetch/2/job

Sample response

A successful response returns the job ID and related details for the specified task ID similar to the following example:

{
"jobId": 15,
"assetName": "sftosnw_15",
"location": "Default"
}
Only one job ID is returned for the specified task ID.

Start the job

Use the jobId to start a job instance.
Use the following POST request to run a task:
POST /dbmi/public/api/v2/job/start
Include the following attribute in the request body:
Field
Type
Description
jobId
integer
The ID of the job that you want to start.

Sample body

Use the following request body to pass the job ID to the resource:
{
"jobId": "15"
}
Each task corresponds to a single job ID. Therefore, when you retrieve the job ID for a given task, only one job ID is returned.

Sample response

A successful response returns a 202 status code indicating the job is accepted to start:

{
"status": "Success",
"code": 202,
"message": "Job with org id 2HimcQ9cXW5kLUCOUkXGi8 and job name zendeskToSlfk-UserA-test_8 has been accepted to start successfully."
}
Note: The start job API initiates the job asynchronously. The response confirms that the job has been accepted to start. To check if the job started successfully and monitor its progress, query the job status API. For more information, see Get Job Status API.

Failed responses

If the request fails, the API might return one of the following HTTP error codes:
Name
HTTP Status Code
Error Message
Unauthorized
401
Invalid sessionId.
Forbidden
403
No permission to run or start.
Missing jobId
404
Mass Ingestion Databases could not find the job with ID '0'.
Not Found
404
Mass Ingestion Databases could not find the job with ID '<jobId>'.
Start conflict
409
Job is already running.
Internal Server Error
500
Failed due to server internal error.

Stop a database or application ingestion and replication job

Use the stop action of the job resource to stop a database ingestion and replication job or application ingestion and replication job that has the status Up and Running, Running with Warning, or On Hold.

POST request

Use the following POST request to stop a job:
POST /dbmi/public/api/v2/job/stop
Include the following attribute in the request body:
Field
Type
Description
jobId
integer
The ID of the job that you want to stop.

Request body

Use the following request body to pass the job ID to the resource:
{
"jobId": "<jobId>"
}

Success response

A successful response returns a status indicating the job has been accepted to stop.
{
"status": "Success",
"code": 202,
"message": "Job with org id 2HimcQ9cXW5kLUCOUkXGi8 and job name zendeskToSlfk-UserA-test_8 has been accepted to stop successfully."
}
Note: The stop job API processes requests asynchronously. To verify whether the job stopped successfully, query the Get Job Status API with the job ID. For more information, see Get Job Status API.

Failed responses

If the request fails, the API might return one of the following HTTP error codes:
Name
HTTP status code
Error message
Unauthorized
401
Invalid sessionId.
Bad Request
400
Missing body.
Forbidden
403
No permission to stop the job.
Missing jobId
404
Mass Ingestion Databases could not find the job with ID '0'.
Not Found
404
Mass Ingestion Databases could not find the job with ID '<jobId>'.
Conflict
409
Job associated with JobID is not in "RUNNING" state.
Internal Server Error
500
Failed due to server internal error.

Resume a database or application ingestion and replication task

Use the resume action of the job resource to resume a database ingestion and replication job or application ingestion and replication job that failed or was stopped or aborted because of a schema drift error.

POST request

Use the following POST request to resume a job:
POST /dbmi/public/api/v2/job/resume
Include the following required attributes in the request body:
Field
Type
Description
jobId
integer
The ID of the job that you want to resume.
parameters
Map<String, JSON>
Overrides schema drift options and controls how a job that is in a Stopped, Aborted, or Failed state resumes.
For an incremental load job, resume options are:
  • replicate. Allows the job to replicate the DDL changes to the target.
  • ignore. Does not replicate DDL changes that occur on the source database to the target.
  • stop. Stops processing the source table on which the DDL change occurred.
For a combined load job, resume options are:
  • replicate. Allows the job to replicate the DDL change to the target.
  • ignore. Does not replicate DDL changes that occur on the source database to the target.
  • stop table. Stops processing the source table on which the DDL change occurred.
  • resync retain. Resynchronizes the same columns that have been processed for CDC, retaining the current structure of the source and target tables. No checks for changes to the source or target table definitions are performed. If source DDL changes affected the source table structure, those changes are not processed.
  • resync. Replicates the DDL changes to the target.

Request body

Use the following request body to pass attributes without resume options:
{
"jobId": <jobId>,
"parameters": {
"resumeOptions":null}
}
Use the following request body to pass attributes with resume options. The example includes resume options to replicate all schema changes:
{
"jobId": <jobId>,
"parameters": {
"resumeOptions":{"schemaChangeOptions":[{"pattern":"*.*","action":"REPLICATE"}]}}
}
Note: If you specify the RESYNC or RESYNC_RETAIN resume option, the API automatically applies a generic pattern "*.*" to resyncPatterns, so you do not need to specify a pattern explicitly. For IGNORE, REPLICATE, and STOP_TABLE resume options, the API uses the pattern you specify in the resumeOptions field. The API creates a SchemaChangeHandlingOption for the pattern with a list of SchemaDriftItems for schema operations such as DROP_COLUMN, ADD_COLUMN, RENAME_COLUMN, and MODIFY_COLUMN. Each operation is mapped to a corresponding SchemaDriftAction derived from the resume action.
For example:
{
"jobId": 15,
"parameters": {
"resumeOptions":{"schemaChangeOptions":[{"pattern":"*.*","action":"REPLICATE"}]}}
}
In this request, the "*.*" pattern applies the REPLICATE action to schema changes on all tables in all schemas.
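
A client can assemble the resume body programmatically. The following sketch uses a hypothetical helper, build_resume_body, that produces either variant shown above depending on whether an action is supplied:

```python
import json

# Hypothetical helper: build the resume request body.
# Pass action=None to resume without resume options.
def build_resume_body(job_id, action=None, pattern="*.*"):
    if action is None:
        options = None
    else:
        options = {"schemaChangeOptions": [{"pattern": pattern, "action": action}]}
    return json.dumps({"jobId": job_id, "parameters": {"resumeOptions": options}})

print(build_resume_body(15, "REPLICATE"))
```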

Success response

A successful response returns the Success status, which indicates the job has been accepted to resume.
{
"status": "Success",
"code": 202,
"message": "Job with org id 2HimcQ9cXW5kLUCOUkXGi8 and job name zendeskToSlfk-UserA-test_8 has been accepted to resume successfully."
}
Note: The resume job API processes requests asynchronously. To verify whether the job resumed successfully, query the Get Job Status API with the job ID. For more information, see Get Job Status API.

Failed responses

If the request fails, the API returns one of the following HTTP error codes:
Name
HTTP status code
Error message
Unauthorized
401
Invalid sessionId.
Bad Request
400
Missing body.
Forbidden
403
No permission to resume the job.
Missing jobId
404
Mass Ingestion Databases could not find the job with ID '0'.
Not Found
404
Mass Ingestion Databases could not find the job with ID '<jobId>'.
Conflict
409
Cannot be started as it is already running or in the "COMPLETED" or "DEPLOYED" state.
Internal Server Error
500
Failed due to server internal error.

Undeploy a database or application ingestion and replication job

Use the undeploy action of the job resource to undeploy a database ingestion and replication job or application ingestion and replication job. Before you undeploy a job, ensure that it is not running and that its status is Aborted, Completed, Deployed, Failed, or Stopped.

POST request

Use the following POST request to undeploy a job.
POST /dbmi/public/api/v2/job/undeploy
Include the following attribute in the request body:
Field
Type
Description
jobId
integer
The ID of the job that you want to undeploy.

Request body

Use the following request body to pass the job ID to the resource:
{
"jobId": "<jobId>"
}

Success response

A successful response returns a Success status, which indicates that the job is accepted for undeployment.
{
"status": "Success",
"code": 202,
"message": "Job with org id 2HimcQ9cXW5kLUCOUkXGi8 and job name zendeskToSlfk-UserA-test_8 has been accepted to undeploy successfully."
}
Note: The undeploy job API processes requests asynchronously. To verify whether the job undeployed successfully, query the Get Job Status API with the job ID. For more information, see Get Job Status API.

Failed responses

If the request fails, the API returns one of the following HTTP error codes:
Name
HTTP status code
Error message
Unauthorized
401
Invalid sessionId.
Bad Request
400
Missing body.
Forbidden
403
No permission to undeploy.
Missing jobId
404
Mass Ingestion Databases could not find the job with ID '0'.
Not Found
404
Mass Ingestion Databases could not find the job with ID '<jobId>'.
Conflict
409
Job associated with JobID is not in "STOPPED", "FAILED" or "ABORTED" state.
Internal server error
500
Failed due to server internal error.

Get job status for a database or application ingestion and replication task

Use the status action of the job resource to retrieve the status of a database ingestion and replication or application ingestion and replication job.

GET request

Use the following GET request to fetch the status of a job.
GET /dbmi/public/api/v2/job/status
Include the following attribute in the request body:
Field
Type
Description
jobId
integer
The ID of the job for which you want to retrieve the status.

Request body

Use the following request body to pass the job ID to the resource:
{
"jobId": "<jobId>"
}

Get response

If successful, the response includes the following information about the status of a job:
Field
Type
Description
jobId
Long
Unique identifier of the job.
assetName
String
Name of the task.
assetType
String
Type or category of the asset. Returns either APPMI_TASK or DBMI_TASK based on the task type you specified in the request.
startedBy
String
Email or identifier of the user who started the job.
orgId
String
Organization identifier to which the job belongs.
runtimeEnv
String
The name of the runtime environment in which the job ran.
status
String
The status of the job. A job can have one of the following statuses:
  • Up and Running. The job is running.
  • Running with Warning. The job is running with a warning. This state can also occur when one or more table-specific subtasks fail but some subtasks are still running.
  • On Hold. The job is in a paused state while the DBMI agent is being updated.
  • Stopping. The job is stopping in response to a Stop request.
  • Stopped. The job was intentionally stopped.
  • Failed. The job ended abnormally, the task deployment to the job failed, or one or more table-specific subtasks failed. Also, for an initial load job, the job was stopped.
  • Deploying. The job is being deployed.
  • Deployed. The job has been deployed.
  • Aborting. The job is stopping immediately in response to an Abort request.
  • Aborted. The job has been aborted.
  • Undeploying. The job is being undeployed.
  • Undeployed. The job has been undeployed.
  • Completed. The job completed successfully.
durationInSeconds
Long
Duration of the job execution in seconds.
endTime
String (ISO 8601)
Timestamp when the job ended.
location
String
The project or project folder that contains the task definition.
startTime
String (ISO 8601)
Timestamp when the job started.
errorMessage
String or null
Error message if the job failed. Null if no error.
lastAction
String
Description of the last action performed on the job. For example, Job was started by a user.
jobConfig
Object (JobInfoExtraData)
Configuration details related to the task associated with the job.
The jobConfig parameter includes the following information:
Field
Type
Description
taskId
String
Unique identifier of the task associated with the job.
taskMode
String (Enum)
Represents the type of load operation performed. Options are:
  • UNLOAD. Loads source data read at a single point in time to a target. After the data is loaded, the ingestion and replication job ends.
  • CDC. Loads data changes continuously or until the ingestion and replication job is stopped or ends. The job loads the changes that have occurred since the last time it ran or from a specific start point.
  • COMBINED. Performs an initial load of point-in-time data to the target and then automatically switches to propagating incremental data changes made to the same source objects on a continuous basis.
srcConnId
String
Source connection identifier used in the task.
tgtConnId
String
Target connection identifier used in the task.
deployTime
String (ISO 8601)
Timestamp when the task was deployed.
schedulerJobName
String or null
Name of the scheduler job if applicable. Null if not scheduled.
schedulerId
String
Identifier for the scheduler. Returns empty string if not applicable.
agentId
String
Identifier of the agent handling the job deployment. Returns empty if not assigned.
isRunOnOtherAgentEnabled
Boolean
Flag that indicates if the job can run on other agents.
isServerless
Boolean or null
Indicates if the job runs in a serverless environment.
deployVersion
String
Version of the deployment. For example, 58.0.0-SNAPSHOT.
featureTags
List or null
List of feature tags associated with the job. Returns Null if none.
srcVendor
String or null
Vendor or source system type. For example, ZENDESK.
tgtVendor
String or null
Vendor or target system type. For example, SNOWFLAKE.
cdcGroupJobId
Long or null
CDC group job ID. Returns this parameter only for CDC and combined jobs.
cdcGroupInmdtStrgConId
String or null
CDC group intermediate storage connection ID. Returns this parameter only for CDC and combined jobs.
mapOfApplyJobDetails
Map<String, Object>
Map of apply jobs in a CDC staging job and jobs for migrating jobs to a CDC staging group.

Success response example

A successful response returns details about the job status similar to the following example.
{
"jobId": 7,
"assetName": "Replication_Task_20251218120228_7",
"assetType": "APPMI_TASK",
"startedBy": "UserA@informatica.com",
"orgId": "2HimcQ9cXW5kLUCOUkXGi8",
"runtimeEnv": "01000125000000000002",
"status": "failed",
"durationInSeconds": 68,
"endTime": "2026-01-21T06:22:10.496Z",
"location": "Default",
"startTime": "2026-01-21T06:21:02.847Z",
"errorMessage": null,
"lastAction": "Job was started by a user.",
"jobConfig": {
"taskId": "dip5eZ0BfeNk9vYzEhQA6E",
"taskMode": "UNLOAD",
"srcConnId": "0100010B000000000002",
"tgtConnId": "0100010B000000000003",
"deployTime": "2025-12-18T06:37:54.337256427Z",
"schedulerJobName": null,
"schedulerId": "",
"agentId": "",
"isRunOnOtherAgentEnabled": false,
"isServerless": false,
"deployVersion": "58.0.0-SNAPSHOT",
"featureTags": null,
"srcVendor": "ZENDESK",
"tgtVendor": "SNOWFLAKE",
"cdcGroupJobId": null,
"cdcGroupInmdtStrgConId": null,
"mapOfApplyJobDetails": {}
}
}
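
The startTime and endTime values are ISO 8601 strings with a trailing Z. As a quick sketch, the reported durationInSeconds can be cross-checked against them in Python (timestamp values taken from the sample above):

```python
from datetime import datetime

# startTime and endTime from the sample status response.
# Replace the trailing "Z" so fromisoformat accepts the offset on all
# supported Python versions.
start = datetime.fromisoformat("2026-01-21T06:21:02.847Z".replace("Z", "+00:00"))
end = datetime.fromisoformat("2026-01-21T06:22:10.496Z".replace("Z", "+00:00"))
elapsed = int((end - start).total_seconds())
print(elapsed)  # 67, within a second of the reported durationInSeconds of 68
```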

Failed responses

If the request fails, the API might return one of the following HTTP error codes:
Name
HTTP status code
Error message
Unauthorized
401
Invalid sessionId.
Bad Request
400
Missing body.
Forbidden
403
No permission.
Not Found
404
Mass Ingestion Databases could not find the job with ID '<jobId>'.
Internal Server Error
500
Failed due to server internal error.

Get statistics for a database or application ingestion and replication job

Use the metric action of the job resource to retrieve detailed statistics for a database ingestion and replication job or application ingestion and replication job.

POST request

Use the following POST request to retrieve detailed job statistics:
POST /dbmi/public/api/v2/job/metrics
Include the following required attributes in the request body:
Field
Type
Description
jobId
integer
The ID of the job for which you want to retrieve detailed statistics.
parameters
Map<String,json>
Map containing the metricsOptions and their values.
Include the following required metricsOptions attributes:
Field
Type
Description
stateFilter
string
Filter for the state of the objects for which you want to retrieve metrics. If you specify null, no filtering is applied.
sort
array
Defines the sorting order of the results. For example, to sort by "srcTable" in ascending ("asc") order, pass the following attribute:
"sort": [ "srcTable", "asc" ],
search
string
Search string to filter objects by name or other searchable fields. Empty string means no filter applied.
offset
number
Number of objects to skip from the beginning, for pagination. For example, 0 means start from the first object.
limit
number
Maximum number of objects to return in one response, that is, the page size. For example, if you set the limit to 25, the request returns metrics for up to 25 objects in one response.

Request body

Use the following request body to pass the job ID and metrics options:
{
"jobId": <jobId>,
"parameters": {
"metricsOptions": {
"stateFilter": null,
"sort": [ "srcTable", "asc" ],
"search": "",
"limit": 25,
"offset": 0
}
}
}
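
The limit and offset options can drive client-side pagination. The following sketch pages through all object metrics, assuming post_metrics is any callable that POSTs the body to /dbmi/public/api/v2/job/metrics and returns the parsed JSON response:

```python
def iter_metrics(post_metrics, job_id, page_size=25):
    """Yield every metricsInfo entry, one page of objects at a time."""
    offset = 0
    while True:
        body = {
            "jobId": job_id,
            "parameters": {
                "metricsOptions": {
                    "stateFilter": None,
                    "sort": ["srcTable", "asc"],
                    "search": "",
                    "limit": page_size,  # objects per response
                    "offset": offset,    # objects to skip
                }
            },
        }
        page = post_metrics(body)
        info = page.get("metricsInfo", [])
        if not info:
            return
        yield from info
        offset += page_size

# Example with a stubbed two-page response instead of real API calls:
pages = {0: {"metricsInfo": [{"taskName": "a"}, {"taskName": "b"}]},
         2: {"metricsInfo": []}}
stub = lambda body: pages[body["parameters"]["metricsOptions"]["offset"]]
names = [m["taskName"] for m in iter_metrics(stub, 20, page_size=2)]
print(names)  # ['a', 'b']
```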

Success response

A successful response includes the following detailed statistics about the job and its tasks:
Parameter
Type
Description
jobId
Long
Unique identifier of the job.
assetId
String
Asset identifier related to the job.
jobName
String
Name of the job.
assetType
String
Type or category of the asset. The asset type can be "APPMI_TASK" or "DBMI_TASK" based on the task type you specified.
startedBy
String
Email or identifier of the user who started the job.
correlationId
String
Correlation identifier for tracking related jobs or processes.
orgId
String
Organization identifier to which the job belongs.
runtimeEnv
String
The name of the runtime environment in which the job ran.
status
String
The status of the job. Options are:
  • Up and Running. The job is running.
  • Running with Warning. The job is running with a warning. This state can also occur when one or more table-specific subtasks fail but some subtasks are still running.
  • On Hold. The job is in a paused state while the Database Ingestion and Replication (DBMI) agent is being updated.
  • Stopping. The job is stopping in response to a Stop request.
  • Stopped. The job was intentionally stopped.
  • Failed. The job ended abnormally, the task deployment to the job failed, or one or more table-specific subtasks failed. Also, for an initial load job, the job was stopped.
  • Deploying. The job is being deployed.
  • Deployed. The job has been deployed.
  • Aborting. The job is stopping immediately in response to an Abort request.
  • Aborted. The job has been aborted.
  • Undeploying. The job is being undeployed.
  • Undeployed. The job has been undeployed.
  • Completed. The job completed successfully.
duration
Long
Duration of the job execution, in milliseconds or seconds.
endTime
String (ISO 8601 timestamp)
Timestamp when the job ended.
startTime
String (ISO 8601 timestamp)
Timestamp when the job started.
location
String
The project or folder that contains the task definition.
jobConfig
Object
Configuration details related to the job task.
metricsInfo
List
List of metrics for each task associated with the job.
The jobConfig parameter includes the following information:
Parameter
Type
Description
taskId
String
Unique identifier of the task associated with the job.
taskMode
String (Enum)
Represents the type of load operation performed.
Options are:
  • UNLOAD. Loads source data read at a single point in time to a target. After the data is loaded, the ingestion and replication job ends.
  • CDC. Loads data changes continuously or until the ingestion and replication job is stopped or ends. The job loads the changes that have occurred since the last time it ran or from a specific start point.
  • COMBINED. Performs an initial load of point-in-time data to the target and then automatically switches to propagating incremental data changes made to the same source objects on a continuous basis.
srcConnId
String
Source connection identifier used in the task.
tgtConnId
String
Target connection identifier used in the task.
deployTime
String (ISO 8601)
Timestamp when the task was deployed.
schedulerJobName
String or null
Name of the scheduler job if applicable. Null if not scheduled.
schedulerId
String
Identifier for the scheduler. Returns empty string if not applicable.
agentId
String
Identifier of the agent handling the job deployment. Returns empty if not assigned.
isRunOnOtherAgentEnabled
Boolean
Flag that indicates if the job can run on other agents.
isServerless
Boolean or null
Indicates if the job runs on a serverless environment.
deployVersion
String
Version of the deployment. For example, 58.0.0-SNAPSHOT.
featureTags
List or null
List of feature tags associated with the job. Returns Null if none.
srcVendor
String or null
Vendor or source system name. For example, ORACLE.
tgtVendor
String or null
Vendor or target system name. For example, SNOWFLAKE.
cdcGroupJobId
Long or null
The CDC group job ID. Returns this parameter for database ingestion and replication CDC staging tasks.
cdcGroupInmdtStrgConId
String or null
CDC group intermediate storage connection ID. Returns this parameter for database ingestion and replication CDC staging tasks.
mapOfApplyJobDetails
Map<String, Object>
Map of apply jobs in database ingestion and replication CDC staging jobs and for migrating jobs to a CDC staging group.
The metricsInfo parameter includes the following information:
Parameter
Type
Description
jobName
String
Name of the job.
taskName
String
Name of the task associated with the job.
runId
String
Run identifier for the task execution.
recordsRead
Long
Number of records read during the task.
recordsWritten
Long
Number of records written during the task.
captureProgress
Object or null
Optional progress capture object. Null if not applicable.
duration
Long
Duration of the task execution, usually in milliseconds.
subTasks
List
List of subtask metrics related to this task.

Sample response for unload metrics

The following sample response shows the returned unload metrics:
{
"jobInfo": {
"jobId": 20,
"assetId": "7yQXJnK2uwUe50OwfaPzQU_20",
"jobName": "sftosnw_20",
"assetType": "APPMI_TASK",
"startedBy": "usera@informatica.com",
"correlationId": "MGY1ZmVmN2MtMzNhZi00ZD_DBMI",
"orgId": "2HimcQ9cXW5kLUCOUkXGi8",
"runtimeEnv": "01000125000000000002",
"status": "completed",
"duration": 100,
"endTime": "2026-02-04T18:01:45.248Z",
"jobConfig": {
"taskId": "7yQXJnK2uwUe50OwfaPzQU",
"taskMode": "UNLOAD",
"srcConnId": "0100010B00000000000A",
"tgtConnId": "0100010B000000000003",
"deployTime": "2026-02-04T17:57:50.242566142Z",
"schedulerJobName": null,
"schedulerId": "",
"agentId": "",
"isRunOnOtherAgentEnabled": false,
"isServerless": false,
"deployVersion": "58.0.0-SNAPSHOT",
"featureTags": null,
"srcVendor": "SALESFORCE",
"tgtVendor": "SNOWFLAKE",
"cdcGroupJobId": null,
"cdcGroupInmdtStrgConId": null,
"mapOfApplyJobDetails": {}
},
"location": "Default",
"startTime": "2026-02-04T18:00:05.041Z"
},
"metricsInfo": [
{
"jobName": "sftosnw_20",
"taskName": "UNLOAD-SALESFORCE-Account",
"runId": "576",
"recordsRead": 161,
"recordsWritten": 161,
"captureProgress": null,
"duration": 80145,
"subTasks": [
{
"srcName": "SALESFORCE.Account",
"tgtName": "USERA.Account",
"state": "COMPLETED"
}
]
},
{
"jobName": "sftosnw_20",
"taskName": "UNLOAD-SALESFORCE-AccountShare",
"runId": "577",
"recordsRead": 0,
"recordsWritten": 0,
"captureProgress": null,
"duration": 74085,
"subTasks": [
{
"srcName": "SALESFORCE.AccountShare",
"tgtName": "USERA.AccountShare",
"state": "COMPLETED"
}
]
}
],
"counts": {
"total": 2,
"statusCounts": [
{
"status": "COMPLETED",
"counts": 2
}
],
"totalUnload": {
"totalRead": 161,
"totalWritten": 161
}
},
"currentThroughput": 1.6262626262626263
}

Sample response for CDC metrics

The following sample response shows the returned CDC metrics:
{
"jobInfo": {
"jobId": 202664,
"assetId": "kMqdBNFWs0ebxlygu6oDeC_202664",
"jobName": "cdcforrestapi_202664",
"assetType": "APPMI_TASK",
"startedBy": "usera1",
"correlationId": "NDZjY2Q2YjAtY2M4Mi00NG_DBMI",
"orgId": "6sx0UHhl0fNbZqvIX5VDeJ",
"runtimeEnv": "013QNH250000000007XF",
"status": "running",
"duration": 1107,
"endTime": null,
"jobConfig": {
"taskId": "kMqdBNFWs0ebxlygu6oDeC",
"taskMode": "CDC",
"srcConnId": "013QNH0B00000000005D",
"tgtConnId": "013QNH0B0000000001KS",
"deployTime": "2026-02-26T04:57:19.795054266Z",
"schedulerJobName": null,
"schedulerId": "",
"agentId": "013QNH080000000007VX",
"isRunOnOtherAgentEnabled": true,
"isServerless": false,
"deployVersion": "58.0.0-SNAPSHOT",
"featureTags": null,
"srcVendor": "SALESFORCE",
"tgtVendor": "SNOWFLAKE",
"cdcGroupJobId": null,
"cdcGroupInmdtStrgConId": null,
"mapOfApplyJobDetails": {}
},
"location": "Default",
"startTime": "2026-02-26T04:58:32.104Z"
},
"metricsInfo": [
{
"jobName": "cdcforrestapi_202664",
"taskName": "CDC-SALESFORCE",
"runId": "815314",
"recordsRead": 4,
"recordsWritten": 4,
"captureProgress": null,
"duration": 1059239,
"subTasks": [
{
"srcName": "SALESFORCE.TEST111",
"tgtName": "SNOWFLAKE.TGT_TABLE",
"state": "COMPLETED",
"inserts": 10,
"updates": 5,
"deletes": 2,
"LOBs": 0,
"unloadCount": 0,
"transitionState": 1,
"transitionStateErrorDetail": 0,
"transitionStateCount": 1,
"unloadIdentifier": 0
},
{
"srcName": "SALESFORCE.ACCOUNT",
"tgtName": "SNOWFLAKE.ACCOUNT",
"state": "RUNNING",
"inserts": 20,
"updates": 3,
"deletes": 1,
"LOBs": 0,
"unloadCount": 0,
"transitionState": 0,
"transitionStateErrorDetail": 0,
"transitionStateCount": 0,
"unloadIdentifier": 0
}
]
}
],
"counts": {
"total": 1,
"statusCounts": [
{
"status": "RUNNING",
"counts": 1
}
],
"totalUnload": {
"totalRead": 0,
"totalWritten": 0
}
},
"currentThroughput": 0.003777148253068933
}
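The per-table counters in metricsInfo can be aggregated client-side. The following Python sketch (not part of the API; it assumes you have parsed the JSON response and extracted the metricsInfo array) totals the CDC operation counts across all subtasks:

```python
# Sketch: totalling CDC operation counts across all subtasks in metricsInfo.
# metrics_info is assumed to be the parsed "metricsInfo" array from the response.
def total_cdc_ops(metrics_info):
    totals = {"inserts": 0, "updates": 0, "deletes": 0}
    for task in metrics_info:
        for sub in task.get("subTasks", []):
            for op in totals:
                totals[op] += sub.get(op, 0)
    return totals

# With the sample response above:
# total_cdc_ops(metrics_info) -> {"inserts": 30, "updates": 8, "deletes": 3}
```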

Failed responses

If the request fails, the API might return one of the following HTTP error codes:
Name                   HTTP status code  Error message
Unauthorized           401               Invalid sessionId.
Bad Request            400               Missing body.
Forbidden              403               No permission.
Not Found              404               Mass Ingestion Databases could not find the job with ID '<jobId>'.
Conflict               409               Mass ingestion job not in valid state for stats collection.
Internal Server Error  500               Failed due to server internal error.
Internal Server Error  500               MetricsOptions are missing in the requestBody.
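When scripting against these endpoints, it can help to map the documented status codes to actionable hints. A minimal illustrative sketch, with messages condensed from the table above:

```python
# Sketch: mapping the documented HTTP error codes to handling hints.
# The hint wording is condensed from the error table; it is not API output.
ERROR_HINTS = {
    400: "Missing body: include a JSON request body.",
    401: "Invalid sessionId: log in again and refresh IDS-SESSION-ID.",
    403: "No permission: check user privileges.",
    404: "Job not found: verify the jobId.",
    409: "Job not in a valid state for stats collection.",
    500: "Server error: retry, or check that MetricsOptions are present.",
}

def describe_failure(status_code):
    return ERROR_HINTS.get(status_code, f"Unexpected status {status_code}")
```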

Get database or application ingestion and replication task details

Use the details action of the task resource to retrieve detailed information about a database ingestion and replication task or application ingestion and replication task in the organization. The response includes the full task payload, which you can reuse when you create or update tasks.

GET request

Use the following GET request to retrieve task details:
GET /dbmi/public/api/v2/task/details?taskId={taskId}&projectId={projectId}&folderId={folderId}

Request parameters

You can include the task ID in the URL to retrieve details for a specific task. Alternatively, provide the project ID and folder ID in the URL to retrieve details for all tasks within a particular project and folder. If you omit the task ID, project ID, and folder ID, the API returns details for all tasks in the organization.
Field      Type     Description
projectId  String   ID of the project where the task resides.
folderId   String   ID of the folder where the task resides in the project.
taskId     Integer  ID of the task.
You can also include the following optional pagination attributes in the request:
Field     Type     Description
orderBy   String   Column name to sort the results by. Use one of the following column names: name, createdTime, lastUpdatedTime, lastAccessedTime.
pageNo    Integer  Page number of results to retrieve.
pageSize  Integer  Number of records per page.

Example GET requests

Without pagination
Use the following GET request to retrieve task details based on task ID, project ID, and folder ID without pagination:
GET /dbmi/public/api/v2/task/details?taskId={taskId}&projectId={projectId}&folderId={folderId}
For example:
GET https://pod-ing.ics.dev:11447/dbmi/public/api/v2/task/details?taskId=2&folderId=5GX14m3CuFAeVWqnsSX9mC
With pagination
To paginate the results, you can include the following parameters in your GET request:
GET /dbmi/public/api/v2/task/details?orderBy={nameOftheColumn}&pageNo={int}&pageSize={int}
For example:
GET https://pod-ing.ics.dev:11447/dbmi/public/api/v2/task/details?orderBy=name&pageNo=2&pageSize=1
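A paginated details call can also be scripted. The following sketch uses only the Python standard library; the server URL and session ID are placeholders for the values returned by your login call:

```python
import json
import urllib.parse
import urllib.request

def build_details_request(server_url, session_id, order_by="name", page_no=2, page_size=1):
    """Build the URL and headers for a paginated task details call."""
    query = urllib.parse.urlencode(
        {"orderBy": order_by, "pageNo": page_no, "pageSize": page_size}
    )
    url = f"{server_url}/dbmi/public/api/v2/task/details?{query}"
    headers = {"Content-Type": "application/json", "IDS-SESSION-ID": session_id}
    return url, headers

# Placeholder values; use the serverUrl and session ID from your login response.
url, headers = build_details_request(
    "https://na4-ing.dm-us.informaticacloud.com", "123ABC456789defIJK"
)
# To send the request:
# req = urllib.request.Request(url, headers=headers)
# with urllib.request.urlopen(req) as resp:
#     details = json.load(resp)
```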

Successful response

A successful response returns HTTP status code 200 and includes the following headers:
HTTP/2 200
Content-Type: application/json

Response fields

The response includes the task details with the following fields:
Field       Type         Description
totalCount  Integer      Total number of documents returned in the response.
pageNo      Integer      Page number of the returned results.
pageSize    Integer      Number of records per page.
documents   Object list  List of document objects, each representing a task or asset.
Each document object includes the following fields:
Field          Type               Description
taskId         Integer            Identifier of the task, retrieved from repoInfo.repoHandle.
name           String             Name of the document or task.
documentState  String             Current state of the document, such as VALID.
documentType   String             Type of the document, for example, APPMI_TASK or DBMI_TASK.
parentInfo     Object list        List of parent entities related to the document, such as a Space, Project, or Folder.
description    String             Description of the document. This field can be empty.
owner          String             Identifier of the owner of the document.
createdBy      String             Identifier of the user who created the document.
createdTime    String (ISO 8601)  Timestamp indicating when the document was created.
accessedTime   String (ISO 8601)  Timestamp indicating when the document was last accessed.
updateTime     String (ISO 8601)  Timestamp indicating when the document was last updated.
dbmiTask       JSON node          JSON object that contains task-specific configuration and metadata.
nativeData     NativeData         Optional native data associated with the document. The value can be null.
contentType    String             Content type of the document, which is ignored during JSON serialization.
docRef         DocRef             Reference to the document in the repository. The value can be null.
repoInfo       RepositoryInfo     Repository-related information, including the repository handle. The value can be null.
The ParentInfo object includes the following fields:
Field       Type    Description
parentId    String  Identifier of the parent entity, such as a Space, Project, or Folder.
parentName  String  Name of the parent entity.
parentType  String  Type of the parent entity, for example, Space, Project, or Folder.

Sample response without pagination

The following example shows detailed information returned for a single task without pagination:
{
"totalCount": 1,
"pageNo": 0,
"pageSize": 25,
"documents": [
{
"taskId": 617189,
"name": "check_cdc_case3_march12",
"documentState": "VALID",
"documentType": "DBMI_TASK",
"parentInfo": [
{
"parentId": "7cCn5thwWFLhiZoSosphKL",
"parentName": "REG",
"parentType": "Space"
},
{
"parentId": "3uDIOqah80qbMK3cXAceIp",
"parentName": "Default",
"parentType": "Project"
}
],
"description": "",
"owner": "6r2IItttmU3ibDwPqxnRwb",
"createdBy": "6r2IItttmU3ibDwPqxnRwb",
"createdTime": "2026-03-12T09:49:26.000+00:00",
"accessedTime": "2026-03-16T09:49:16.438+00:00",
"updateTime": "2026-03-16T09:49:16.438+00:00",
"dbmiTask": {
"taskType": "dbmi",
"general": {
"name": "check_cdc_case3_march12",
"description": null,
"location": "Default",
"runtimeEnvironment": "inuserarh01.informatica.com",
"type": "cdc"
},
"source": {
"connection": "Oracle_19RDS_yk-CDC",
"customProperties": {
"readerInputIsPersisted": "false",
"loadType": "CDC",
"journalName": "",
"dbmi.cdc.query.based.column.type": "COLUMN_TYPE_TIMESTAMP",
"dbmi.cdc.technique": "LOG_BASED_CDC",
"pwx.cdcreader.postgresql.connection.repl.slotname": "",
"lobsEnabled": "false",
"pwx.cdcreader.postgresql.connection.repl.publication": "",
"dbmi.cdc.interval.mins": "5",
"selectionMode": "RULE_BASED"
},
"restartPointForIncrementalLoadTimestamp": null,
"schema": "SP_UNLOAD",
"includeViews": null,
"replicationSlotName": "",
"replicationPlugin": null,
"publication": "",
"journalName": "",
"restartPointForIncrementalLoad": "LATEST",
"restartPointForIncrementalLoadPosition": null,
"context": null,
"accountId": null,
"propertyId": null,
"viewId": null,
"pathToReportConfigurationFile": null,
"product": null,
"services": null,
"outputType": null,
"salesforceAPI": null,
"fetchSize": null,
"fetchSizeForInitialLoad": null,
"fetchSizeForIncrementalLoad": null,
"startDate": null,
"endDate": null,
"cdcIntervalDays": null,
"cdcIntervalHours": null,
"cdcIntervalMins": null,
"extractNonDefaultFields": null,
"includeArchivedAndDeletedRows": null,
"includeBase64Fields": null,
"maximumBase64BodySize": null,
"batchSize": null,
"mid": null,
"formattedStartDate": "",
"formattedEndDate": "",
"selectionRules": [
{
"include": "MTV2"
}
]
},
"target": {
"connection": "AIN_Snowflake",
"schema": "SP5",
"outputFormat": "CSV",
"fileCompressionType": "None",
"addHeadersToCSVFile": null,
"parquetFormat": "true",
"avroFormat": null,
"avroSerializationFormat": null,
"avroSchemaDirectory": null,
"fixedDirectoryForEachTable": "false",
"avroCompressionType": null,
"parquetCompressionType": null,
"deflateCompressionLevel": null,
"renamingRules": [
{
"source": "MTV2",
"target": "MTV2_M12"
}
],
"dataTypeRules": [],
"stage": "gkn",
"bucket": "",
"directory": "",
"addBeforeImages": "false",
"addDirectoryTags": null,
"directoryTags": "",
"useTableNameAsTopicName": "false",
"includeSchemaName": "false",
"tablePrefix": null,
"tableSuffix": null,
"topicName": null,
"asyncWrite": "true",
"producerConfigurationProperties": null,
"addOperationType": "false",
"addOperationTime": null,
"addOperationOwner": null,
"addOperationTransactionId": null,
"customProperties": {
"rat.audit.columns.optype": "false",
"applyMode": "standard",
"targetBucket": "",
"targetDirectory": "",
"rat.cdcfile.target.directory": "",
"writerSnowflakeIngestionMethod": "",
"writerSnowflakeDeferredMergeInterval": "3600",
"targetIdentifiersEnableCaseTransformation": "false",
"auditColumnsPrefix": "INFA_"
}
},
"runtimeOptions": {
"numberOfRowsInOutputFile": "100000",
"fileExtensionBasedOnFileType": null,
"applyCycleChangeLimit": null,
"applyCycleIntervalDays": null,
"applyCycleIntervalHours": null,
"applyCycleIntervalMins": null,
"applyCycleIntervalSecs": null,
"lowActivityFlushHours": null,
"lowActivityFlushMins": null,
"checkpointAllRows": null,
"checkpointEveryCommit": null,
"checkpointRowCount": null,
"checkpointFrequencySecs": null,
"executeInTaskflow": false,
"schemaDriftOptions": {
"addColumn": "replicate",
"renameColumn": "ignore",
"modifyColumn": "replicate",
"dropColumn": "ignore"
},
"customProperties": {
"writerRecoveryEnableCycleUpdates": "false"
}
},
"schedule": null
}
}
]
}
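Because the dbmiTask object in each document is the full task payload, you can pull it out of a details response and adapt it for a later create or update call. A minimal sketch, assuming details is the parsed JSON response shown above:

```python
# Sketch: extracting reusable task payloads from a parsed details response.
def extract_task_payloads(details):
    """Return (name, dbmiTask) pairs from a task details response.

    Each dbmiTask value is the task payload that you can adapt for a
    subsequent task create or update request.
    """
    return [(doc["name"], doc["dbmiTask"]) for doc in details.get("documents", [])]

# With the sample response above, this returns one pair:
# [("check_cdc_case3_march12", {...})]
```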

Sample response with pagination

The following example shows a paginated response containing multiple task documents:
{
"totalCount": 11215,
"pageNo": 2,
"pageSize": 1,
"documents": [
{
"taskId": 420543,
"name": "016June_T1",
"documentState": "VALID",
"documentType": "DBMI_TASK",
"parentInfo": [
{
"parentId": "7cCn5thwWFLhiZoSosphKL",
"parentName": "REG",
"parentType": "Space"
},
{
"parentId": "3uDIOqah80qbMK3cXAceIp",
"parentName": "Default",
"parentType": "Project"
}
],
"description": "",
"owner": "6raSdfOnCMBl5UYEeIh0eO",
"createdBy": "6raSdfOnCMBl5UYEeIh0eO",
"createdTime": "2025-06-16T13:40:20.000+00:00",
"accessedTime": "2026-03-16T09:50:43.486+00:00",
"updateTime": "2026-03-16T09:50:43.486+00:00",
"dbmiTask": {
"taskType": "dbmi",
"general": {
"name": "016June_T1",
"description": null,
"location": "Default",
"runtimeEnvironment": "UserA_Agent_Linux",
"type": "cdc"
},
"source": {
"connection": "Tmorel_DB2z_2_2",
"customProperties": {
"readerInputIsPersisted": "false",
"loadType": "CDC",
"cdc.staging.migration": "false",
"journalName": "",
"cdc.staging.migration.tasks": "",
"pwx.cdcreader.postgresql.connection.repl.slotname": "",
"pwx.cdcreader.postgresql.connection.repl.publication": "",
"cdc.staging.migration.tasks.guids": "",
"selectionMode": "RULE_BASED"
},
"restartPointForIncrementalLoadTimestamp": null,
"schema": "DBMIACDC",
"includeViews": null,
"replicationSlotName": "",
"replicationPlugin": null,
"publication": "",
"journalName": "",
"restartPointForIncrementalLoad": "LATEST",
"restartPointForIncrementalLoadPosition": null,
"context": null,
"accountId": null,
"propertyId": null,
"viewId": null,
"pathToReportConfigurationFile": null,
"product": null,
"services": null,
"outputType": null,
"salesforceAPI": null,
"fetchSize": null,
"fetchSizeForInitialLoad": null,
"fetchSizeForIncrementalLoad": null,
"startDate": null,
"endDate": null,
"cdcIntervalDays": null,
"cdcIntervalHours": null,
"cdcIntervalMins": null,
"extractNonDefaultFields": null,
"includeArchivedAndDeletedRows": null,
"includeBase64Fields": null,
"maximumBase64BodySize": null,
"batchSize": null,
"mid": null,
"formattedStartDate": "",
"formattedEndDate": "",
"selectionRules": [
{
"include": "DT_DECFLOAT"
}
]
},
"target": {
"connection": "DBMI_Snowflake",
"schema": "CDCGROUPJOB_MN",
"outputFormat": "CSV",
"fileCompressionType": "None",
"addHeadersToCSVFile": "false",
"parquetFormat": "true",
"avroFormat": null,
"avroSerializationFormat": null,
"avroSchemaDirectory": null,
"fixedDirectoryForEachTable": "false",
"avroCompressionType": null,
"parquetCompressionType": null,
"deflateCompressionLevel": null,
"renamingRules": [
{
"source": "*",
"target": "16JUNE_1_*"
}
],
"dataTypeRules": [],
"stage": "a",
"bucket": "",
"directory": null,
"addBeforeImages": "false",
"addDirectoryTags": "false",
"directoryTags": null,
"useTableNameAsTopicName": "false",
"includeSchemaName": "false",
"tablePrefix": null,
"tableSuffix": null,
"topicName": null,
"asyncWrite": "true",
"producerConfigurationProperties": null,
"addOperationType": "false",
"addOperationTime": "false",
"addOperationOwner": "false",
"addOperationTransactionId": "false",
"customProperties": {
"checkpointMessageFrequency": "0",
"rat.audit.columns.optime": "false",
"rat.cdcfile.target.tag.directories": "false",
"checkpointAll": "true",
"rat.audit.columns.opseq": "false",
"writerSnowflakeIngestionMethod": "",
"checkpointCommits": "false",
"targetIdentifiersEnableCaseTransformation": "false",
"checkpointTimeFrequency": "0",
"rat.audit.columns.optype": "false",
"rat.target.columns.last.replicated.time": "false",
"rat.audit.columns.optxid": "false",
"applyMode": "standard",
"targetBucket": "",
"rat.audit.columns.opowner": "false",
"writerSnowflakeDeferredMergeInterval": "3600",
"auditColumnsPrefix": "INFA_",
"formatEncoderPrintHeader": "false",
"rat.target.columns.cycleId": "false"
}
},
"runtimeOptions": {
"numberOfRowsInOutputFile": "100000",
"fileExtensionBasedOnFileType": null,
"applyCycleChangeLimit": null,
"applyCycleIntervalDays": null,
"applyCycleIntervalHours": null,
"applyCycleIntervalMins": null,
"applyCycleIntervalSecs": null,
"lowActivityFlushHours": null,
"lowActivityFlushMins": null,
"checkpointAllRows": "true",
"checkpointEveryCommit": "false",
"checkpointRowCount": "0",
"checkpointFrequencySecs": "0",
"executeInTaskflow": false,
"schemaDriftOptions": {
"addColumn": "replicate",
"renameColumn": "ignore",
"modifyColumn": "replicate",
"dropColumn": "ignore"
},
"customProperties": {
"writerRecoveryEnableCycleUpdates": "false",
"reader.iceberg.migration.to.staging.enabled": "true"
}
},
"schedule": null
}
}
]
}
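To walk the full result set, derive the number of pages from totalCount and pageSize and increment pageNo on each call. For example:

```python
import math

def page_count(total_count, page_size):
    """Number of pages needed to cover total_count records."""
    return math.ceil(total_count / page_size) if page_size else 0

# With the sample values above (totalCount=11215, pageSize=1):
# page_count(11215, 1) -> 11215
```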

Failed responses

If the request fails, the API returns one of the following HTTP error codes:
Name                   HTTP status code  Error message
Unauthorized           401               Invalid sessionId.
Forbidden              403               No permission.
Internal Server Error  500               Failed due to server internal error.