New features and enhancements

The November 2024 release of Data Ingestion and Replication includes the following new features and enhancements.

New configuration wizard features

The November 2024 release of Database Ingestion and Replication includes the following new features that are available in the latest configuration wizard for application ingestion and replication tasks and database ingestion and replication tasks.
Note: The new wizard is available to existing organizations on a controlled basis and to all new user organizations. Existing organizations can request access to the new wizard from Informatica Global Customer Support or their Customer Success Manager.

Row-level filtering

Database Ingestion and Replication now supports row-level filtering for Oracle, Microsoft SQL Server, Db2 for i, Db2 for LUW, and Db2 for z/OS source tables in tasks of any load type. Row-level filtering allows you to filter data rows for selected source tables and columns before they're applied to the target. You can define Basic or Advanced filters for columns in a selected table when you create a database ingestion and replication task in the new configuration wizard.
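Conceptually, a row-level filter evaluates a condition against each source row and passes only the matching rows on to the target. The following Python sketch illustrates that idea only; the column names and the filter condition are hypothetical examples, not product filter syntax.

```python
# Illustrative sketch of row-level filtering: only rows that satisfy the
# predicate reach the target. Column names and the condition are
# hypothetical, not product syntax.

def apply_row_filter(rows, predicate):
    """Return only the rows for which the predicate is true."""
    return [row for row in rows if predicate(row)]

source_rows = [
    {"ORDER_ID": 1, "REGION": "EMEA", "AMOUNT": 250},
    {"ORDER_ID": 2, "REGION": "APAC", "AMOUNT": 75},
    {"ORDER_ID": 3, "REGION": "EMEA", "AMOUNT": 40},
]

# Example filter: replicate only EMEA orders worth 100 or more.
filtered = apply_row_filter(
    source_rows,
    lambda row: row["REGION"] == "EMEA" and row["AMOUNT"] >= 100,
)
```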

CDC staging groups for improved performance

Database ingestion and replication incremental load jobs and combined initial and incremental load jobs can use a CDC staging task to read data from the source database in a single pass and then write the data to cloud-based storage. This capability is available for jobs that have a Db2 for i or Db2 for z/OS source, an Oracle source that uses the Log-based CDC method, or a SQL Server source that uses either the Log-based or CDC Tables method. The cloud-based storage can be in Amazon S3, Google Cloud Storage, or Microsoft Azure Data Lake Storage Gen2 (with Shared Key Authentication). For log-based sources, the tasks in the group can process different tables and schemas in the same source database. For Db2 for i sources, the tasks must process the same journal. The tasks can apply the data to different targets. If a Secure Agent group is used, the tasks can run on different agents.
This feature improves performance and scalability by reading data from the source just once on behalf of multiple tasks that share the same source database.
For Db2 for z/OS sources, if you expect the DBID, OBID, and PSID information about the selected source tables in the group to exceed 16 KB, you must install a new Db2 stored procedure and create a table for storing that information.
Note: If you installed a previous version of the Db2 stored procedure, you can continue to use it only if you are sure that the DBID, OBID, and PSID information for the selected source tables will not exceed 16 KB, or if you do not plan to use CDC staging groups.
To get access to this feature for your organization, contact Informatica Global Customer Support. They'll set the cdir.cdc.group.enabled feature flag to true for your organization.
After access is granted, you can enable staging and specify a staging group name when you create a database ingestion and replication task. You can then add new tasks to the group. The staged data is retained in storage for 14 days by default, but you can adjust this retention period up to 365 days.
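The staging concept can be pictured as a single read of the source change stream whose output is then shared by every task in the group. The Python sketch below is purely illustrative; the function names, record shapes, and table names are hypothetical stand-ins for the product's internal behavior.

```python
# Illustrative sketch of a CDC staging group: the source change log is
# read once, the changes are staged, and each task in the group consumes
# the same staged data for its own tables and target. All names are
# hypothetical.

def stage_changes(source_log):
    """Single pass over the source change log; stands in for a write
    of staged change data to cloud-based storage."""
    return list(source_log)

def run_task(staged, tables):
    """Each task in the group applies only the tables it is configured
    to replicate."""
    return [rec for rec in staged if rec["table"] in tables]

source_log = [
    {"table": "ORDERS", "op": "I"},
    {"table": "CUSTOMERS", "op": "U"},
    {"table": "ORDERS", "op": "D"},
]

staged = stage_changes(source_log)        # one read of the source
task_a = run_task(staged, {"ORDERS"})     # task A replicates ORDERS
task_b = run_task(staged, {"CUSTOMERS"})  # task B replicates CUSTOMERS
```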

Application Ingestion and Replication

The November 2024 release of Application Ingestion and Replication includes the following new feature and enhancement:

Audit apply mode for Google BigQuery targets

You can configure application ingestion and replication incremental load tasks that have a Google BigQuery target and any source type to use Audit apply mode. Jobs in Audit apply mode write a row for each DML operation on a source table to a generated audit table on the target. Application Ingestion and Replication marks the INSERT and UPDATE records in the backlog with an "E" in the INFA_OPERATION_TYPE column on the target. Optionally, you can add columns that contain metadata about the changes, such as the SQL operation type, timestamp, owner, transaction ID, sequence, and before image, to the audit table. This feature is useful when you need an audit trail of changes to perform downstream processing on the data before writing it to the target database or when you need to examine the metadata for changes.
Note: The audit tables cannot have constraints other than indexes.
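In concept, Audit apply mode turns each captured DML operation into a new row appended to an audit table, rather than applying the change in place. The Python sketch below illustrates that shape only; the operation codes, the optional metadata column names, and the row values are illustrative, not the product's exact generated schema.

```python
# Illustrative sketch of an audit-table row built from one captured DML
# operation. INFA_OPERATION_TYPE appears in the generated audit table;
# the operation codes and the other metadata columns shown here are
# hypothetical examples of the optional metadata described above.
from datetime import datetime, timezone

def to_audit_row(op_type, row, seq):
    """Build one audit-table row for a captured DML operation."""
    return {
        **row,
        "INFA_OPERATION_TYPE": op_type,
        "INFA_SEQUENCE": seq,
        "INFA_TIMESTAMP": datetime.now(timezone.utc).isoformat(),
    }

# Every operation on the source row becomes a separate audit row.
audit_table = [
    to_audit_row("I", {"ID": 1, "NAME": "a"}, seq=1),
    to_audit_row("U", {"ID": 1, "NAME": "b"}, seq=2),
    to_audit_row("D", {"ID": 1, "NAME": "b"}, seq=3),
]
```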

Database Ingestion and Replication

The November 2024 release of Database Ingestion and Replication includes the following new features and enhancements:

PostgreSQL targets with PostgreSQL and SQL Server sources

You can use PostgreSQL targets in database ingestion and replication jobs that have a PostgreSQL or SQL Server source. To connect to the PostgreSQL target, use the PostgreSQL connector.

Audit and Soft Deletes apply modes for PostgreSQL and SQL Server targets

For database ingestion and replication incremental load and combined initial and incremental load jobs that have PostgreSQL or SQL Server targets, you can now configure tasks to use the Audit and Soft Deletes apply modes.
For PostgreSQL targets, the Audit and Soft Deletes apply modes are supported for jobs that have an Oracle source.
Use the Audit apply mode to write a row for each DML operation on a source table to a generated audit table on the target. This feature is useful when you need an audit trail of changes to perform downstream processing on the data before writing it to the target database or when you need to examine the metadata for changes.
Use the Soft Deletes mode to process DML delete operations on the source as soft deletes on SQL Server targets. Database Ingestion and Replication marks the soft-deleted records with a "D" in the INFA_OPERATION_TYPE column on the target without actually deleting the records.
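A soft delete can be pictured as an update that flags the row instead of removing it. The Python sketch below illustrates the behavior described above; the key and data column names are hypothetical, while INFA_OPERATION_TYPE and the "D" marker come from the description.

```python
# Illustrative sketch of a soft delete: the matching row is flagged with
# "D" in INFA_OPERATION_TYPE but stays on the target. Column names other
# than INFA_OPERATION_TYPE are hypothetical.

def soft_delete(target_table, key):
    """Mark the matching row as deleted instead of removing it."""
    for row in target_table:
        if row["ID"] == key:
            row["INFA_OPERATION_TYPE"] = "D"  # row is not removed
    return target_table

target = [
    {"ID": 1, "NAME": "a", "INFA_OPERATION_TYPE": None},
    {"ID": 2, "NAME": "b", "INFA_OPERATION_TYPE": None},
]

soft_delete(target, 1)  # process a source DELETE of row ID 1
```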
For more information, see Database Ingestion and Replication > Configuring a database ingestion and replication task > Configuring the target.

New authentication type in MongoDB Mass Ingestion source connections

You can now configure MongoDB Mass Ingestion source connections for database ingestion and replication initial load and incremental load jobs to use X.509 authentication. Select X.509 in the Authentication field and then specify the SSL KeyStore file path and password. Optionally, you can specify the TrustStore file path and password. Previously, only user name and password authentication was available.
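As a rough sketch, the X.509 settings can be pictured as the following set of connection fields. The field names mirror the description above; the file paths and passwords are placeholders, and this dict is an illustration, not a product configuration file format.

```python
# Sketch of MongoDB Mass Ingestion connection settings for X.509
# authentication. Field names follow the description above; paths and
# passwords are placeholders.
mongodb_connection = {
    "Authentication": "X.509",
    "SSL KeyStore File Path": "/secure/certs/client.keystore",
    "SSL KeyStore Password": "<keystore-password>",
    # Optional TrustStore settings:
    "TrustStore File Path": "/secure/certs/mongo.truststore",
    "TrustStore Password": "<truststore-password>",
}
```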
For more information, see Connectors and connections > Data Ingestion and Replication connection properties > MongoDB Mass Ingestion connection properties.

SSL encryption support for Oracle Database Ingestion target connections

Database ingestion and replication jobs that have an Oracle source and an Oracle target can use SSL encryption for the target Oracle Database Ingestion connection. This feature is supported for jobs that use any load type. The connection's authentication mode can be either Oracle Database Authentication or Kerberos.
When you configure Oracle Database Ingestion connection properties, set the Encryption Method property to SSL. Then set the related properties, including Crypto Protocol Version, Validate Server Certificate=True, Trust Store, Trust Store Password, and Host Name in Certificate.
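Taken together, the SSL-related settings can be sketched as the following set of connection properties. The property names follow the list above; the protocol version, paths, password, and host name are placeholders, and this dict is an illustration, not a product configuration file format.

```python
# Sketch of Oracle Database Ingestion connection properties for SSL
# encryption, using the property names listed above. Values are
# placeholders.
oracle_connection = {
    "Encryption Method": "SSL",
    "Crypto Protocol Version": "TLSv1.2",  # placeholder version
    "Validate Server Certificate": True,
    "Trust Store": "/secure/certs/oracle.truststore",
    "Trust Store Password": "<truststore-password>",
    "Host Name in Certificate": "db.example.com",
}
```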
For more information, see Connectors and Connections > Oracle Database Ingestion connection properties.