Read about new features and enhancements in the July 2024 Data Ingestion and Replication release.
Common
The July 2024 release of Data Ingestion and Replication includes the following new features that are common to multiple types of ingestion and replication tasks.
Mass Ingestion has a new name!
The Mass Ingestion service is now called Data Ingestion and Replication to reflect its full range of data replication, synchronization, and CDC capabilities and to better align with market perceptions of its functionality. Correspondingly, the task names and solution names have also changed:
New solution names
• Application Ingestion and Replication
• Database Ingestion and Replication
• File Ingestion and Replication
• Streaming Ingestion and Replication
New task names
• Application ingestion and replication task
• Database ingestion and replication task
• File ingestion and replication task
• Streaming ingestion and replication task
These name changes have been made throughout most of the user interface and documentation set. For example, on the My Services page, you'll now see the Data Ingestion and Replication box instead of the Mass Ingestion box. In a few places, the original names have been maintained to avoid any disruption, such as for the Secure Agent's "Database Ingestion" service and in connector names that include "Mass Ingestion" or "Database Ingestion."
Unified Home page enhancement
The latest unified Home page is now available to all Data Ingestion and Replication users, including those who use the localized Japanese version of the user interface and new customers who onboarded to Data Ingestion and Replication after the May 2024 release.
Ability to add _OLD columns with before-images of Updates to Oracle targets when using Audit mode
For application ingestion and replication jobs and database ingestion and replication jobs that have an Oracle target and use Audit apply mode, you can add _OLD metadata columns that contain before-image data for Updates to the target tables. You can use these columns to compare the old and new column values.
To add the _OLD columns to the target tables, select the Add Before Images check box in the Advanced section on the Target page of the task wizard.
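The comparison that these before-image columns enable can be sketched with a small in-memory example. The table and column names below are hypothetical; only the _OLD suffix convention comes from the feature description above.

```python
import sqlite3

# Illustrative audit-mode table: each Update row carries the new value plus a
# before-image in a companion <column>_OLD column (names here are made up).
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE customer_audit (
        customer_id INTEGER,
        status      TEXT,
        status_OLD  TEXT   -- before-image column added by Add Before Images
    )
""")
# An Update that changed status from 'silver' to 'gold' would be recorded as:
con.execute("INSERT INTO customer_audit VALUES (101, 'gold', 'silver')")

# Compare the old and new column values to find rows whose status changed.
# IS NOT is used instead of <> so that NULL before-images compare safely.
rows = con.execute("""
    SELECT customer_id, status_OLD, status
    FROM customer_audit
    WHERE status IS NOT status_OLD
""").fetchall()
print(rows)  # [(101, 'silver', 'gold')]
```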
Resync option is now available for use in the Data Ingestion and Replication Command-Line Interface
You can now use the Data Ingestion and Replication Command-Line Interface (CLI) to resynchronize source and target objects for a combined initial and incremental load job or a subtask that is part of a running combined initial and incremental load job. This enhancement is available for application ingestion and replication jobs and database ingestion and replication jobs.
Previously, the Resync option was available only in the Informatica Intelligent Cloud Services user interface.
Schema drift support for SQL Server targets
Application Ingestion and Replication and Database Ingestion and Replication add support for automatic schema drift detection and handling for incremental load and combined initial and incremental load jobs that have SQL Server targets. When you create application ingestion and replication tasks or database ingestion and replication tasks, you can now set schema drift options on the Schedule and Runtime Options page. For pre-existing tasks, the schema drift options are set to Ignore.
Application Ingestion and Replication
The July 2024 release of Application Ingestion and Replication includes the following new features and enhancements:
Support for LOOKUP data type for Microsoft Dynamics 365 sources
Application Ingestion and Replication supports the LOOKUP data type for Microsoft Dynamics 365 sources.
Schema drift support for PostgreSQL targets
Application Ingestion and Replication adds support for automatic schema drift detection and handling for incremental load and combined initial and incremental load jobs that have a Salesforce source and a PostgreSQL target.
Database Ingestion and Replication
The July 2024 release of Database Ingestion and Replication includes the following new features and enhancements:
Support for Oracle source LOB columns in incremental load and combined load jobs
When you create a database ingestion and replication incremental load or combined initial and incremental load task that has an Oracle source with LOB columns and a target type other than Kafka, you can now select the Include LOBs option. This option causes change data to be captured from the LOB columns selected for replication. The supported LOB types are BLOB, CLOB, NCLOB, LONG, LONG RAW, and XML.
Note: Columns that have the LONG, LONG RAW, and XML data types are supported in incremental load and combined load jobs that use the Query-based CDC method. However, jobs that use the Log-based CDC method do not replicate data from these types of columns to the generated target table.
All of these LOB data types continue to be supported in initial load jobs.
Support for Oracle source LOB columns in jobs that have PostgreSQL targets
Database ingestion and replication jobs can now move data from Oracle source columns that have a LOB data type to Amazon Aurora PostgreSQL or RDS for PostgreSQL targets.
Support for Db2 for LUW source LOB columns
Database ingestion and replication initial load jobs and incremental load and combined initial and incremental load jobs that use the Query-based CDC method can replicate data from Db2 for LUW source columns that have a LOB data type to Microsoft Azure Data Lake Storage Gen 2, Microsoft Azure Synapse Analytics, or Snowflake targets. The supported LOB data types are: BLOB, CLOB, DBCLOB, LONG VARCHAR, LONG VARCHAR FOR BIT DATA, LONG VARGRAPHIC, and XML.
PostgreSQL sources with SQL Server targets
Database Ingestion and Replication now supports PostgreSQL sources with Microsoft SQL Server targets in initial load, incremental load, and combined load jobs.
Azure Database for MySQL sources
Database Ingestion and Replication now supports Azure Database for MySQL 8.0 sources with the following targets and load types:
• Snowflake targets in initial load, incremental load, and combined load jobs
• Confluent Kafka targets in incremental load jobs
Audit apply mode for Databricks targets
For database ingestion and replication incremental load and combined initial and incremental load jobs that have Databricks targets, you can configure tasks to use Audit apply mode, which writes a row for each DML operation on a source table to a generated audit table on the target. Optionally, you can add columns that contain metadata about the changes, such as the SQL operation type, timestamp, owner, transaction ID, sequence, and before image, to the audit table. This feature is useful when you need an audit trail of changes to perform downstream processing on the data before writing it to the target database, or when you need to examine the metadata for changes.
Note: The audit tables cannot have constraints other than indexes.
To enable the use of audit tables, select Audit in the Apply Mode field on the Target page when defining a task. This field is available for new or undeployed tasks. Under Advanced, optionally select the check boxes for adding metadata columns to the audit table.
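The audit-apply pattern described above can be sketched as follows. This is a minimal illustration of the concept, not the product's implementation; the metadata column names (OP_TYPE, OP_TIME, OP_SEQ) are assumptions.

```python
from datetime import datetime, timezone
from itertools import count

# Sketch of Audit apply mode: rather than applying each DML operation in
# place, every change is appended as a row to a generated audit table,
# together with optional metadata columns describing the change.
_seq = count(1)

def to_audit_row(op_type, row_data):
    """Turn one source DML operation into an audit-table row."""
    return {
        **row_data,                       # the source column values
        "OP_TYPE": op_type,               # INSERT / UPDATE / DELETE
        "OP_TIME": datetime.now(timezone.utc).isoformat(),
        "OP_SEQ": next(_seq),             # ordering of changes in the stream
    }

# Three operations against the same source row yield three audit rows,
# preserving the full change history for downstream processing.
audit_table = [
    to_audit_row("INSERT", {"id": 1, "amount": 10}),
    to_audit_row("UPDATE", {"id": 1, "amount": 25}),
    to_audit_row("DELETE", {"id": 1, "amount": 25}),
]
```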
Custom data-type mappings for SQL Server sources and Snowflake targets
For database ingestion and replication tasks that have SQL Server sources and Snowflake targets, you can optionally define custom data-type mapping rules, which will override the default mappings for this source and target.
In the task wizard, you can create data-type mapping rules in the Data Type Rules section on the Target page.
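The precedence between custom rules and default mappings can be sketched as below. The specific type mappings shown are illustrative examples, not the product's actual defaults.

```python
# Hypothetical default mappings from SQL Server source types to Snowflake
# target types; the real defaults are defined by the product.
DEFAULT_MAPPINGS = {
    "datetime2": "TIMESTAMP_NTZ",
    "money": "NUMBER(19,4)",
    "varchar": "VARCHAR",
}

def resolve_target_type(source_type, custom_rules):
    """A custom data-type rule for a source type overrides the default."""
    return custom_rules.get(source_type, DEFAULT_MAPPINGS.get(source_type))

# A custom rule, like one defined in the Data Type Rules section of the
# task wizard, replacing the default mapping for money columns:
custom_rules = {"money": "VARCHAR(30)"}

print(resolve_target_type("money", custom_rules))      # VARCHAR(30), overridden
print(resolve_target_type("datetime2", custom_rules))  # TIMESTAMP_NTZ, default
```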
Schema drift support for PostgreSQL targets
Database Ingestion and Replication adds support for automatic schema drift detection and handling for incremental load and combined initial and incremental load jobs that have a Db2 for i or Oracle source and an Amazon Aurora PostgreSQL or RDS for PostgreSQL target.
Support for the TIMESTAMP WITH LOCAL TIME ZONE data type in Oracle sources
Database Ingestion and Replication now supports Oracle source columns that have the TIMESTAMP WITH LOCAL TIME ZONE data type in jobs with any load type and any supported target type. To process columns that have the TIMESTAMP WITH LOCAL TIME ZONE data type, you must set the DBMI_ORACLE_SOURCE_ENABLE_TIMESTAMP_WITH_LOCAL_TZ environment variable to true for the Database Ingestion agent service. In Administrator, open your Secure Agent and click Edit. Under Custom Configuration Details, add the environment variable for the Database Ingestion service with the DBMI_AGENT_ENV type.
New Db2 for i journal receiver exit to prevent journal receiver deletion during CDC processing
Database Ingestion and Replication now provides a Db2 for i journal receiver exit to prevent the deletion of journal receivers while database ingestion and replication incremental load and combined load jobs are reading them for change data capture (CDC). The exit program causes the journal receivers to be locked while in use for CDC. To use the journal receiver exit, you must manually install the exit program and specify the pwx.cdcreader.iseries.option.useJournalReceiverExit and pwx.cdcreader.iseries.option.JournalReceiverExitJobToken custom properties on the Source page of the task wizard.
Streaming Ingestion and Replication
The July 2024 release of Streaming Ingestion and Replication includes the following new feature and enhancement:
Support for Business 360 Events Connector
Streaming Ingestion and Replication now supports the Business 360 Events connector as a source for transferring files.