The July 2025 release of Data Ingestion and Replication includes the following new features and enhancements.
Common
The July 2025 release of Data Ingestion and Replication includes the following new features that are common to multiple types of ingestion and replication tasks.
Control the case of generated table and column names on Oracle targets
When you create an application ingestion and replication task or a database ingestion and replication task that has an Oracle target, you can set advanced target properties in the task configuration wizard to control the case of letters in the names of the generated target tables and columns. Previously, for Oracle targets, the target names were always generated with the same case as the source names. Now, if you select Enable Case Transformation, you can select a Case Transformation Strategy option to use all uppercase, all lowercase, or the same case as the source.
For more information, see Application Ingestion and Replication or Database Ingestion and Replication > "Configuring the target" > "Oracle target properties."
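The case matters on Oracle because the database folds unquoted identifiers to uppercase, so lowercase generated names become quoted, case-sensitive identifiers. The following statements are a hypothetical illustration of the lowercase strategy; the table and column names are placeholders:

-- With the lowercase strategy, a source table ORDERS might be generated as:
CREATE TABLE "orders" ("order_id" NUMBER, "order_date" DATE);
-- Lowercase names are case-sensitive in Oracle and must be quoted in queries:
SELECT "order_id" FROM "orders";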
"Add Cycle ID" metadata column available for additional target types and load types
For application ingestion and replication tasks and database ingestion and replication tasks that use any load type and have an Amazon Redshift, Oracle, or SQL Server target, you can now select the Add Cycle ID advanced target property to add the Cycle ID metadata column to the target tables. The Cycle ID column identifies the cycle in which a row was last updated. Previously, this option was available only for incremental load jobs with a Snowflake target that didn't use the Superpipe option. Now, the option is also available for the additional target types and for initial load and combined load jobs with a Snowflake target that doesn't use Superpipe.
For more information, see the Data Ingestion and Replication Connectors and Connections documentation for the Amazon Redshift v2, Oracle Database Ingestion, and Microsoft SQL Server connections.
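For example, you can group target rows by cycle to see how many rows each replication cycle applied. The following query is a minimal sketch against a hypothetical SQL Server target table; the metadata column is assumed to surface as CYCLE_ID, so check the generated table DDL for the exact column name in your environment:

-- Count the rows that each replication cycle applied (illustrative names):
SELECT CYCLE_ID, COUNT(*) AS rows_in_cycle
FROM dbo.ORDERS
GROUP BY CYCLE_ID
ORDER BY CYCLE_ID DESC;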
Compare asset versions
You can compare two versions of a source-controlled asset to find out what's changed between them. The asset can be any data ingestion and replication task type. Comparing asset versions helps you to identify changes when you share code updates for peer reviews or to troubleshoot issues between versions. You can compare the current uncommitted version with a version in the repository or compare any two committed versions.
For more information, see Data Ingestion and Replication > Asset Management > Comparing asset versions.
Serverless runtime environments
For some types of ingestion and replication tasks and sources, you can use serverless runtime environments. More information is forthcoming.
Application Ingestion and Replication
The July 2025 release of Application Ingestion and Replication includes the following new features and enhancements:
Use incremental and combined load jobs with SAP OData V2 Connector
You can use SAP OData V2 Connector in application ingestion and replication incremental load jobs and combined initial and incremental load jobs to transfer data from SAP S/4HANA to Snowflake.
For more information about SAP OData V2 Connector, see Application Ingestion and Replication > Configuring an application ingestion and replication task > Configuring an SAP source with SAP OData V2 Connector.
Use Audit and Soft Deletes apply modes for Databricks targets
You can configure the Audit and Soft Deletes apply modes in application ingestion and replication incremental load and combined initial and incremental load jobs that have Databricks targets. The staging files that load data to Databricks can be in either CSV or Parquet format.
For more information, see Application Ingestion and Replication > Configuring an application ingestion and replication task > Configuring the target > Databricks target properties.
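In Soft Deletes mode, rows deleted on the source are retained on the target and flagged in a metadata column rather than physically removed. The following Databricks SQL query is a minimal sketch of finding such rows; the three-part table name and the INFA_OPERATION_TYPE column name are illustrative assumptions, so check the generated target tables for the actual metadata column:

-- Find rows that were deleted on the source but soft-deleted on the target
-- (all names here are placeholders):
SELECT *
FROM main.replicated.orders
WHERE INFA_OPERATION_TYPE = 'D';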
Database Ingestion and Replication
The July 2025 release of Database Ingestion and Replication includes the following new features and enhancements:
SAP HANA Log-based change data capture
For database ingestion and replication incremental load jobs that have SAP HANA sources, you now have the option of using the Log-based CDC method instead of the Trigger-based CDC method. To use the Log-based method, you must configure the database ingestion and replication task along with a CDC Staging Task in the new configuration wizard. You set the capture type and optional cache properties in the SAP HANA Database Ingestion connection properties. The Log-based method uses the SAP HANA transaction log together with an SAP HANA or Oracle cache database, which is separate from the source database, for change data capture. To create the cache database and the ROWCACHE and TRANSACTIONS tables within it, you can download the generated script from the CDC Script field when you configure the task.
The following guidelines apply to Log-based capture:
•You must use an on-premises SAP HANA source, not an SAP HANA Cloud source.
•The source database must run in log mode normal.
•Source tables must be COLUMN type tables. For example queries that check the log mode and table type, see the sketch after this list.
•Schema drift DDL changes are not supported.
•The source database can't use encryption or multi-node storage.
•Don't perform ALTER PARTITION operations, dynamic partitioning, or TRUNCATE operations on a table that's associated with a CDC Staging Task. These operations can result in job failures or data loss.
•Each CDC staging group requires a separate connection and cache database. All apply tasks within the group use the same connection and cache database.
•While the CDC Staging Task is running, any source table partitioning or truncate operation can cause job failures and data loss.
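The following queries are a minimal sketch of how you might verify the log mode and table type prerequisites, assuming the connecting user can read the standard SAP HANA system views; the schema name is a placeholder:

-- Verify that the source database runs in log mode normal:
SELECT FILE_NAME, SECTION, KEY, VALUE
FROM SYS.M_INIFILE_CONTENTS
WHERE FILE_NAME = 'global.ini'
  AND SECTION = 'persistence'
  AND KEY = 'log_mode';

-- Find any candidate source tables that are not COLUMN type:
SELECT SCHEMA_NAME, TABLE_NAME, TABLE_TYPE
FROM SYS.TABLES
WHERE SCHEMA_NAME = 'MY_SCHEMA'  -- replace with your source schema
  AND TABLE_TYPE <> 'COLUMN';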
For more information, see the documentation for the new Data Ingestion and Replication task wizard, which is currently available upon request from Informatica Global Customer Support. It can also be downloaded from the Informatica Documentation Portal as an H2L.
Backlog queue as the default CDC transitioning technique for combined load jobs with Oracle sources
Newly created and deployed database ingestion and replication combined initial and incremental load jobs that have an Oracle source now use a backlog queue as the default CDC transitioning technique. With the backlog queue, change data capture processing does not stop while the initial unload phase is running. Any change data that is captured for a table that is being unloaded is stored in the backlog queue. After the initial unload phase is complete, the backlog of CDC data is applied to the target using fuzzy logic.
Previously, combined load jobs that had Oracle sources used SCN-based transitioning logic, where change data capture processing was suspended while the initial unload was running. Coordination between the initial unload phase and incremental change data processing used the Oracle SCN and Oracle Flashback Query to process the unload data and CDC data from the same SCN point. This processing required the following Oracle Flashback permissions to be granted to the user:
GRANT EXECUTE ON DBMS_FLASHBACK TO <cmid_user>;
GRANT FLASHBACK ON <table>|ANY TABLE TO <cmid_user>;
Now, for newly created combined load jobs, you do not need to grant these Oracle Flashback permissions. However, existing jobs continue to use the SCN-based transitioning technique for any resync, any new table added to the job, and any unload that was pending before the July 2025 upgrade.
For more information about Oracle sources and privileges, see Database Ingestion and Replication > Database Ingestion and Replication sources - preparation and usage > Oracle sources.
Ability to select Db2 for i source columns for data replication
If you use rules to select Db2 for i source tables when defining a database ingestion and replication task, you can individually select or deselect the columns in each selected table from which to replicate data. This feature lets you replicate only the data you need, reducing the amount of data replicated and the associated cost and overhead.