The April 2026 release of Data Ingestion and Replication includes the following new features and enhancements.
Common
The April 2026 release of Data Ingestion and Replication includes the following new features that are common to multiple types of ingestion and replication tasks.
REST APIs for creating and monitoring tasks
Publicly accessible REST APIs are now available for application ingestion and replication, database ingestion and replication, file ingestion and replication, and streaming ingestion and replication tasks.
This release introduces the following resources to help you manage your application ingestion and replication and database ingestion and replication tasks:
•Create a task. POST /dbmi/public/api/v2/task/create
•Deploy a task. POST /dbmi/public/api/v2/task/deploy/{taskId}
•Run a task. POST /dbmi/public/api/v2/job/start
•Stop a job. POST /dbmi/public/api/v2/job/stop
•Resume a job. POST /dbmi/public/api/v2/job/resume
•Undeploy a job. POST /dbmi/public/api/v2/job/undeploy
•Get job status. GET /dbmi/public/api/v2/job/status
•Get job metrics. POST /dbmi/public/api/v2/job/metrics
•Get task details. GET /dbmi/public/api/v2/task/details?taskId={taskId}&projectId={projectId}&folderId={folderId}
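For example, the following minimal sketch drives a task through these endpoints with Python's requests library. The base URL, session header, and payload fields are assumptions rather than the documented schemas:

import requests

BASE = "https://<pod>.informaticacloud.com"  # assumption: your org's API base URL
HEADERS = {
    "INFA-SESSION-ID": "<session-id>",  # assumption: session ID from the IICS login API
    "Content-Type": "application/json",
}

# Create the task. The payload shown is illustrative, not the documented schema.
resp = requests.post(f"{BASE}/dbmi/public/api/v2/task/create",
                     json={"name": "orders_repl"}, headers=HEADERS)
task_id = resp.json()["taskId"]  # assumption: the response carries a taskId

# Deploy the task, then start the job.
requests.post(f"{BASE}/dbmi/public/api/v2/task/deploy/{task_id}", headers=HEADERS)
requests.post(f"{BASE}/dbmi/public/api/v2/job/start",
              json={"taskId": task_id}, headers=HEADERS)

# Check the job status.
status = requests.get(f"{BASE}/dbmi/public/api/v2/job/status",
                      params={"taskId": task_id}, headers=HEADERS)
print(status.json())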
For more information about each endpoint, including request and response formats and usage examples, see the REST API Reference documentation.
CLAIRE Copilot enhancement for creating tasks
When you create an application or database ingestion and replication task in CLAIRE Copilot for Data Integration, you can now include source table or object selection criteria in the first prompt or enter the criteria later in the conversation. You can enter specific table or object names, the initial or ending part of the names, or name masks that include the * and ? wildcards. For example: include tables with names starting with ca*, exclude tables with names starting with ca5*. Enter include and exclude criteria in the order in which they should be processed. If you don't want to define a subset of tables, you must enter "select all" to select all tables. After you enter the selection criteria, you can't change them later in the Copilot conversation. The summary at the end of the conversation includes the table selection criteria that you entered. When you open the new task in the task configuration wizard, the criteria appear as include and exclude rules, which you can change if needed.
For more information, see the CLAIRE Copilot for Data Integration documentation.
Open mirroring available for Microsoft Fabric OneLake targets
You can now enable Open Mirroring for Microsoft Fabric OneLake targets in application and database ingestion and replication tasks for all load types. Mirroring creates a synchronized copy of your data in Parquet format within the mirrored database in Fabric OneLake, ensuring your data stays current and readily available for timely analysis.
Last Replicated Time metadata column for Databricks and Amazon Redshift targets
When you configure application and database ingestion and replication tasks that have a Databricks or Amazon Redshift target, you can now select the new Add Last Replicated Time advanced option to add the INFA_LAST_REPLICATED metadata column to the target tables. This column records the timestamp at which a record was inserted or last updated in the target table.
For more information, see the Application Ingestion and Replication and Database Ingestion and Replication documentation.
Audit and Soft Deletes apply modes for Amazon Redshift targets
You can now use the Audit and Soft Deletes apply modes in database ingestion and replication incremental load and combined initial and incremental load tasks that have an Amazon Redshift target and any source type.
The Audit and Soft Deletes apply modes are also supported for application ingestion and replication incremental load and combined initial and incremental load jobs that have an Amazon Redshift target and an SAP Mass Ingestion source.
Use the Audit apply mode to write a row for each DML operation on a source table to a generated audit table in the target. This feature is useful when you need an audit trail of changes to perform downstream processing on the data before writing it to the target or when you need to examine the metadata for changes.
Use the Soft Deletes mode to process DML delete operations on the source as soft deletes in the target. Data Ingestion and Replication marks the soft-deleted records with a "D" in the INFA_OPERATION_TYPE column in the target without actually deleting the records.
For more information, see "Configure an Amazon Redshift target" in the Application Ingestion and Replication and Database Ingestion and Replication documentation.
New Programmatic Access Token (PAT) authentication to access Snowflake
You can now configure your Snowflake Data Cloud connection in application, database, and file ingestion and replication tasks to use a programmatic access token (PAT) to authenticate to Snowflake.
For more information, see "Snowflake Data Cloud connection properties" in the Connectors and Connections documentation.
Serverless runtime environments
Data Ingestion and Replication extends support for serverless runtime environments hosted on Microsoft Azure. Now you can run the following types of jobs on a serverless runtime environment:
Application ingestion and replication jobs for all load types:
•SAP ECC sources on an Oracle database, using the SAP Mass Ingestion connector
Database ingestion and replication jobs for all load types:
•MongoDB sources
•PostgreSQL sources
You cannot use a serverless runtime environment if you are using a read-only replica for a PostgreSQL connection.
For more information about serverless runtime environments, see Administrator > Runtime Environments > Serverless Runtime Environments and the Data Ingestion and Replication Connectors and Connections documentation for your connector.
Support for SQL Server 2022
Database Ingestion and Replication and Application Ingestion and Replication add support for SQL Server 2022. You can use SQL Server 2022 for RDS or the on-premises Developer Edition as a source or target in database ingestion and replication jobs and as a target in application ingestion and replication jobs. The SQL Server Version field in the SQL Server connection properties does not currently include a 2022 option. You can select any option in this field; it is ignored.
Support for PostgreSQL 17.x
Database Ingestion and Replication and Application Ingestion and Replication add support for PostgreSQL 17.x. You can use PostgreSQL Standard Edition 17.x or RDS for PostgreSQL 17.x for sources or targets in database ingestion and replication tasks and for targets in application ingestion and replication tasks.
New Data Ingestion and Replication CLI options in the task replace command
In the Data Ingestion and Replication Command-Line Interface (CLI) task replace command, you can now specify the task name and task location as command-line arguments.
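A purely illustrative invocation follows; the executable name and flag names are assumptions, so check the CLI documentation for the exact syntax:

dbmi-cli task replace --taskName orders_repl --taskLocation MyProject/Replication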
Application Ingestion and Replication
The April 2026 release of Application Ingestion and Replication includes the following new features and enhancements:
HTTPS available for SAP Mass Ingestion Connector
You can now enable HTTPS for the SAP Mass Ingestion Connector to securely connect to SAP for your application ingestion and replication initial load jobs, including the initial load phase of combined load jobs.
For more information, see the Connectors and Connections and Application Ingestion and Replication documentation.
Automatic switchover to another Secure Agent in a Secure Agent group
Automatic switchover to another Secure Agent in a Secure Agent group is now enabled by default for new application ingestion and replication jobs that do not have persistent storage enabled for source log records.
If an active Secure Agent on which a job is running goes down unexpectedly, the job can automatically switch over to another available Secure Agent in the group after a 15-minute heartbeat interval. However, if the database ingestion service is in a stopped or error state, you need to manually stop and then resume the job.
For more information about automatic switchover and the sources and targets that jobs can include, see the Getting Started with Data Ingestion and Replication and the Application Ingestion and Replication documentation.
Additional target types for Oracle Fusion Cloud sources
Application ingestion and replication tasks of any load type with an Oracle Fusion Cloud source can now load data to Microsoft SQL Server and Microsoft Azure SQL Database targets.
For more information, see the Application Ingestion and Replication documentation.
Database Ingestion and Replication
The April 2026 release of Database Ingestion and Replication includes the following new features and enhancements:
Extended support for automatic switchovers to another agent in a Secure Agent group
Automatic switchover to another Secure Agent in a Secure Agent group is now enabled for database ingestion and replication jobs that use the Log-based CDC method and have a MongoDB, MySQL, PostgreSQL, or SAP HANA source. This feature extends prior automatic switchover support for Log-based CDC with Db2 for i, Db2 for z/OS, Oracle, and SQL Server sources.
Also, automatic switchover is now supported for tasks that use the Query-based CDC method and have an Oracle, Db2 for LUW, or SQL Server source.
If the active Secure Agent on which the job is running goes down unexpectedly, the job can automatically switch over to another available agent in the group after the 15-minute heartbeat interval elapses. The following limitations apply:
•Jobs cannot have persistent storage enabled.
•Jobs that have Kafka targets must store checkpoint information in the Kafka header. For any jobs that existed before the July 2025 release, automatic switchovers can't occur because checkpoint information is stored in the checkpoint file in the Secure Agent.
•Jobs that use the Query-based CDC method must have an Amazon Redshift, Databricks, Google BigQuery, Microsoft Azure Synapse Analytics, Oracle, PostgreSQL, Snowflake, or SQL Server target.
•Jobs that have an SAP HANA source cannot use the trigger-based CDC method.
•If you use a read-only replica for a PostgreSQL connection, you must set the readerPostgreSQLShouldDisableLocalPersistent custom property to false on the Task Details source page of the task configuration wizard.
Automatic switchover is enabled by default for new tasks created in the current release and later.
New target types for database ingestion and replication tasks
Database Ingestion and Replication adds support for the following targets for all load types:
•Microsoft Fabric Data Warehouse
•Open Table
•Salesforce Data 360
Support for Db2 for LUW 12.1 sources
Database Ingestion and Replication adds support for Db2 for LUW 12.1 sources for all load types.
Db2 for LUW Log-based CDC sources with additional target types
You can now use Db2 for LUW 11.x or 12.1 sources for Log-based CDC with these additional target types: Amazon Redshift, Databricks, Kafka, Oracle, PostgreSQL, or Azure SQL Server. Previously, only Snowflake targets were supported.
For more information, see the Database Ingestion and Replication documentation.
Db2 for LUW Log-based CDC sources on AIX
Database ingestion and replication tasks that use the Log-based CDC method can now process Db2 for LUW sources on AIX systems. You can use any supported Db2 for LUW version, the incremental load or combined load type, and the same CDC features that are available for Db2 sources on Linux or Windows, such as CDC staging groups.
For more information, see the Database Ingestion and Replication documentation.
Improved statistics messages for Db2 for z/OS sources
Database Ingestion and Replication introduces enhanced statistics messages that include additional details for jobs with Db2 for z/OS sources, to help in diagnosing latency issues. The messages are written to the job log.
MySQL driver upgrade
Database ingestion and replication jobs that have MySQL sources now require the Progress DataDirect MySQL JDBC driver version 5.1.4.000364. This driver upgrade is delivered with the April 2026 release of the product.
MySQL source columns with LOB data types
Database ingestion and replication jobs that have a MySQL source can now replicate data from source columns that have one of the following LOB data types: BLOB, LONGBLOB, MEDIUMBLOB, TINYBLOB, JSON, TEXT, LONGTEXT, MEDIUMTEXT, or TINYTEXT. The jobs can use any load type and any supported target type.
Note:
JSON columns are not fully supported. The DataDirect MySQL JDBC driver does not parse Unicode characters in JSON columns correctly, which might lead to illegible characters on the target.
To include LOB columns, when you create an ingestion and replication task, select the new Include LOBs advanced option on the Task Details - Source Details page of the configuration wizard.
For more information, see the Database Ingestion and Replication documentation.
BFILE access to Oracle TDE wallet files
Database Ingestion and Replication can now use BFILE directory objects for remote access to a file-based Transparent Data Encryption (TDE) wallet.
If you configured a file-based TDE wallet, Database Ingestion and Replication first checks if the ewallet.p12 file exists in the TDE wallet directory on the local machine. If the file exists, processing continues. If the file does not exist there, Database Ingestion and Replication queries the database for a directory object that has a path matching the TDE wallet directory or tries to create a directory object for BFILE access. After an appropriate directory object exists, the wallet can be read remotely. You do not need to copy the TDE wallet to the local machine.
For more information, see the Database Ingestion and Replication documentation.
Providing JDBC connection details in the tnsnames.ora file for Oracle unload processing
For database ingestion and replication unload processing and Log-based CDC, you can optionally enter the JDBC connection details for the Oracle source database in an Oracle tnsnames.ora file. If you need to change the connection later, you can then update the details in the tnsnames.ora file only, instead of in both the Oracle Database Ingestion connection properties and the tnsnames.ora file. To enable this feature for initial load jobs and the unload phase of combined jobs, set the useTnsNamesInJDBCUrl source custom property to true for the task. This feature does not apply to jobs that use the Query-based CDC method or to Oracle targets.
Note:
When you use the tnsnames.ora file only, you can't configure SSL encryption or Kerberos authentication.
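For reference, a tnsnames.ora entry uses the standard Oracle net service name format. In the following example, the alias, host, port, and service name are placeholders:

ORCLSRC =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orclpdb1)
    )
  )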
For more information, see "Providing JDBC connection details in the tnsnames.ora file in the Database Ingestion and Replication publication.
SAP HANA Log-based CDC processing of encrypted logs
Database ingestion and replication jobs that use Log-based CDC can now capture changes from encrypted SAP HANA online redo logs and archive logs. Currently, only the AES-256 CBC (Cipher Block Chaining) encryption algorithm is supported.
Note:
To use SAP HANA Log-based CDC, with or without encrypted logs, a feature flag must be enabled for your organization. If you do not have access to the SAP HANA Log-based CDC option, contact Informatica Global Customer Support.
First, ensure that your database administrator created an encryption root keys file that contains the keys needed to decrypt the source logs. This file must reside on the Secure Agent machine. In a Secure Agent group, the file must reside on every Secure Agent machine in the group. Also, check that the administrator created a backup password for accessing the key file.
To enable database ingestion and replication processing of encrypted logs, you must set both the Key Backup Password and Key Backup Data Path properties in the SAP HANA Database Ingestion connection properties. If you set only one of these properties, deployment of any job that uses the connection for SAP HANA Log-based CDC fails. If both properties are left empty, Database Ingestion and Replication assumes that the logs are not encrypted.
For more information, see the Data Ingestion and Replication Connectors and Connections publication and the Database Ingestion and Replication publication.
Row-level filtering for SAP HANA sources
You can now use row-level filtering for database ingestion and replication tasks that have an SAP HANA source. Configure row-level filtering rules to filter out data rows from selected source tables based on column conditions before the data is applied to the target. On the Transform Data page of the task wizard, you can create Basic or Advanced filtering rules for source tables.
For more information, see the Database Ingestion and Replication documentation.
Access Management policies for filtering and protecting data
Note:
Availability of the Access Management feature is controlled by an organization-level feature flag. If this functionality is not available for your organization and you want to use it, create a request with Informatica Global Customer Support.
For database ingestion and replication jobs that have a SQL Server source, you can now use Access Management transformations to protect sensitive data by applying filtering and data protections before your data reaches the target.
The Access Management transformation applies data access policies that you create on the Data Access Management page in Data Governance and Catalog according to the data requirements of your organization.
You can apply data access policies for a selected set of tables on the Transform page of the task configuration wizard.
You can enforce the following types of data access policies:
•Data filter policies. These policies control access to specific rows within data assets by filtering the data that users can interact with.
•Data de-identification policies. These policies remove or mask sensitive information from data elements based on their classifications, applying protections to make the data anonymous or less identifiable for specific use cases.
File Ingestion and Replication
The April 2026 release of File Ingestion and Replication includes the following new feature:
Checksum algorithms for Amazon S3
You can now select a checksum algorithm when loading files to Amazon S3. The checksum is stored with each file to efficiently detect changes without the need to recompute checksums, improving performance during ingestion and replication. Available algorithms include CRC64NVME, CRC32, CRC32C, SHA1, and SHA256, giving you the flexibility to pick the best option for your data integrity and performance requirements.
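The change-detection pattern that the stored checksum enables can be sketched with the AWS SDK for Python. This is a minimal illustration, assuming your own bucket and object key; boto3's put_object accepts a ChecksumAlgorithm parameter, and head_object can return the stored checksum without downloading or rehashing the object:

import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "my-bucket", "data/orders.parquet"  # placeholders

# Upload the file and have S3 compute and store a SHA-256 checksum with it.
with open("orders.parquet", "rb") as f:
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=f, ChecksumAlgorithm="SHA256")

# Later, read the stored checksum back instead of recomputing it.
head = s3.head_object(Bucket=BUCKET, Key=KEY, ChecksumMode="ENABLED")
print(head.get("ChecksumSHA256"))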
Streaming Ingestion and Replication
The April 2026 release of Streaming Ingestion and Replication includes the following new feature:
CreateEntity API
You can programmatically create streaming ingestion and replication tasks using a new REST API endpoint, CreateEntity.
Use a POST request to the following URI:
/sisvc/restapi/v1/CreateEntity/Documents
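A minimal sketch of the call with Python's requests library follows; the base URL, session header, and payload are assumptions rather than the documented request body:

import requests

BASE = "https://<pod>.informaticacloud.com"  # assumption: your org's API base URL
resp = requests.post(
    f"{BASE}/sisvc/restapi/v1/CreateEntity/Documents",
    headers={"INFA-SESSION-ID": "<session-id>",  # assumption: session ID from the IICS login API
             "Content-Type": "application/json"},
    json={"name": "clickstream_ingestion"},  # illustrative payload, not the documented schema
)
print(resp.status_code, resp.json())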
For more information, see the REST API Reference documentation.