New features and enhancements

The July 2025 release of Data Ingestion and Replication includes the following new features and enhancements.

Common

The July 2025 release of Data Ingestion and Replication includes the following new features that are common to multiple types of ingestion and replication tasks.

Ability to use serverless runtime environments

You can now use a serverless runtime environment to run application ingestion and replication jobs and database ingestion and replication jobs. Previously, you could use only a Secure Agent or Secure Agent group.
For information about serverless runtime environments, see Administrator > Runtime Environments > Serverless Runtime Environments and the Data Ingestion and Replication Connectors and Connections documentation for your connector.

Control the case of generated table and column names on Oracle targets

When you configure an application ingestion and replication task or a database ingestion and replication task that has an Oracle target, you can set advanced target properties in the task configuration wizard to control the case of letters in the names of the generated target tables and columns. Previously, for Oracle targets, the target names were always generated with the same case as the source names. Now, if you select Enable Case Transformation, you can select a Case Transformation Strategy option to use all uppercase, all lowercase, or the same case as the source.
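The strategies differ in how the generated identifiers behave on Oracle: uppercase names match Oracle's default identifier handling, while lowercase names are created as quoted, case-sensitive identifiers. The following sketch shows the kind of DDL you might expect for a hypothetical source table named Customer_Orders; the table and column names are illustrative only:
-- Case Transformation Strategy: Uppercase
CREATE TABLE CUSTOMER_ORDERS (ID NUMBER, ORDER_DATE DATE);

-- Case Transformation Strategy: Lowercase (lowercase identifiers on
-- Oracle are case sensitive and must be quoted in queries)
CREATE TABLE "customer_orders" ("id" NUMBER, "order_date" DATE);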
For more information, see Application Ingestion and Replication or Database Ingestion and Replication > "Configuring the target" > "Oracle target properties."

"Add Cycle ID" metadata column available for additional target types and load types

For application ingestion and replication tasks and database ingestion and replication tasks that use any load type and have an Amazon Redshift, Oracle, or SQL Server target, you can now select the Add Cycle ID target advanced property to add the Cycle ID metadata column to the target tables. The Cycle ID column identifies the cycle in which a row was updated. Previously, this option was available only for incremental load jobs with a Snowflake target that didn't use the Superpipe option. Now, this option is available for additional target types and for initial load and combined load jobs with a Snowflake target that doesn't use Superpipe.
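As a quick illustration, the cycle ID lets you isolate the rows that were applied in a particular cycle. The following generic SQL sketch assumes a hypothetical target table named orders and assumes the generated metadata column is named INFA_CYCLE_ID; check the generated target table for the actual column name:
-- Return the rows applied in the most recent cycle
SELECT *
FROM orders
WHERE INFA_CYCLE_ID = (SELECT MAX(INFA_CYCLE_ID) FROM orders);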
For more information, see the Data Ingestion and Replication Connectors and Connections documentation for the Amazon Redshift v2, Oracle Database Ingestion, and Microsoft SQL Server connections.

Compare asset versions

You can compare two versions of a source-controlled asset to find out what's changed between them. The asset can be of any data ingestion and replication task type. With this feature, you can easily identify changes when you share code updates for peer review or troubleshoot issues between versions.
You can compare the current uncommitted version with a version in the repository or compare any two committed versions. To do so, navigate to the asset on the Explore page, select Actions > Compare Asset Version, choose the versions to compare, and review the highlighted differences displayed side-by-side in text format.
For more information, see Data Ingestion and Replication > Asset Management > Comparing asset versions.

Application Ingestion and Replication

The July 2025 release of Application Ingestion and Replication includes the following new features and enhancements:

Use incremental and combined load jobs with SAP OData V2 Connector

You can use SAP OData V2 Connector in application ingestion and replication incremental load jobs and combined initial and incremental load jobs to transfer data from SAP S/4HANA to Snowflake.
To use these load types for SAP OData V2 Connector in your organization, contact Informatica Global Customer Support.
For more information about SAP OData V2 Connector, see Application Ingestion and Replication > Configuring an application ingestion and replication task > Configuring an SAP source with SAP OData V2 Connector.

Use Audit and Soft Deletes apply modes for Databricks targets

You can configure the Audit and Soft Deletes apply modes in application ingestion and replication incremental load and combined initial and incremental load jobs for Databricks targets. The staging files used to load data to Databricks can be in either CSV or Parquet format.
Audit
Use this mode to write a separate row for each DML operation on a source table to a dedicated audit table generated on the target. The application ingestion and replication job captures the type of operation performed and records it using flags "I" for insert, "E" for update, or "D" for delete in the INFA_OPERATION_TYPE column in the audit table. This mode is beneficial when you need a detailed record of changes to process the data downstream before loading it into the target or to analyze the metadata for the changes made to the source data.
Soft Delete
Use this mode to handle DML delete operations on the source as soft deletes on Databricks targets. Records removed from the source are not physically removed from the target. Instead, the application ingestion and replication job marks the soft-deleted records with a "D" in the INFA_OPERATION_TYPE column on the target without actually deleting them. This mode helps you track deletion events while preserving the underlying data.
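In both modes, the INFA_OPERATION_TYPE column can be queried directly on the Databricks target. The following sketch assumes hypothetical target tables named orders and orders_audit; the table names are illustrative only:
-- Audit mode: count the captured DML operations by type
SELECT INFA_OPERATION_TYPE, COUNT(*) AS op_count
FROM orders_audit
GROUP BY INFA_OPERATION_TYPE;

-- Soft Deletes mode: read only the rows that are not soft-deleted
SELECT *
FROM orders
WHERE INFA_OPERATION_TYPE IS NULL OR INFA_OPERATION_TYPE <> 'D';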
For more information, see Application Ingestion and Replication > Configuring an application ingestion and replication task > Configuring the target > Databricks target properties.

Database Ingestion and Replication

The July 2025 release of Database Ingestion and Replication includes the following new features and enhancements:

CDC_WITH_BACKLOG as default CDC transitioning logic for combined initial and incremental load jobs with Oracle sources

Newly created and newly deployed database ingestion and replication combined load jobs that have an Oracle source now use the CDC_WITH_BACKLOG logic as the default CDC transitioning technique. With CDC_WITH_BACKLOG, the change data capture processing does not stop while the initial unload phase is running. Any change data that is captured for a table that is being unloaded is stored in a backlog queue. After the initial unload phase is complete, the backlog of CDC data is applied to the target using fuzzy logic.
Previously, combined load jobs that had Oracle sources used the CDC_SUSPENDED transitioning logic, in which change data capture processing was suspended while the initial unload was running. Coordination between the initial unload phase and incremental change data processing used the Oracle SCN and Oracle Flashback Query to process the unload data and CDC data from the same SCN point, and required granting the Oracle flashback permissions:
GRANT EXECUTE ON DBMS_FLASHBACK TO <cmid_user>;
GRANT FLASHBACK ON table|ANY TABLE TO <cmid_user>;
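For context, that coordination relied on Oracle Flashback Query to read a table as it existed at a specific SCN, so that the unload and the CDC stream started from the same point. A minimal sketch, with a hypothetical table name and SCN value:
-- Read the table as it existed at the SCN where CDC capture began
SELECT * FROM orders AS OF SCN 1234567;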
Existing jobs continue to use the CDC_SUSPENDED technique for any resync, any new table added to the job, and any unload that was still pending before the upgrade to the July 2025 release.
You do not need to grant the Oracle flashback permissions for newly created combined load jobs.
For more information about the Oracle sources and privileges, see Database Ingestion and Replication > Database Ingestion and Replication sources - preparation and usage > Oracle sources.