New features and enhancements

Read about new features and enhancements in the February 2024 Mass Ingestion release.

Important notices

The following notices identify preview features and any changes to support levels for existing features.

Previews initiated

Effective in the February 2024 release, you can preview the following new functionality in advance of its general release:
Note: Preview functionality is supported for evaluation purposes but is unwarranted and is not supported in production environments or any environment that you plan to push to production. Informatica intends to include the preview functionality in an upcoming release for production use, but might choose not to in accordance with changing market or technical circumstances. For more information, contact Informatica Global Customer Support.

Common

The February 2024 release of Informatica Intelligent Cloud Services Data Ingestion and Replication service includes the following new features that are common to application ingestion and database ingestion tasks.

"Add Last Replicated Time" metadata column records timestamp of last DML operation applied to a Google BigQuery or Snowflake target table

For application ingestion and database ingestion jobs that have a Google BigQuery or Snowflake target and use any load type and any apply mode, you can add a metadata column to the target tables that records the date and time at which the last DML operation was applied to the target table. To add the column, select the Add Last Replicated Time check box in the Advanced section on the Target page of the task wizard. You can optionally add a prefix to the name of the metadata column to easily identify it and to prevent conflicts with the names of existing columns.
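Downstream jobs can use this metadata column as a watermark to pick up only recently replicated rows. The sketch below simulates that pattern with an in-memory SQLite table; the column name INFA_LAST_REPLICATED_TIME (with the default-style INFA_ prefix) and the table layout are assumptions for illustration, not the exact names the service generates.

```python
import sqlite3

# Simulate a replicated target table. The metadata column name
# INFA_LAST_REPLICATED_TIME is an assumption for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (order_id INTEGER, amount REAL, "
    "INFA_LAST_REPLICATED_TIME TEXT)"
)
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [
        (1, 10.00, "2024-02-01 09:00:00"),
        (2, 25.50, "2024-02-15 12:30:00"),
        (3, 7.25, "2024-02-20 08:45:00"),
    ],
)

# Find rows whose last applied DML is newer than a watermark,
# e.g. for incremental downstream processing.
watermark = "2024-02-10 00:00:00"
rows = conn.execute(
    "SELECT order_id FROM orders WHERE INFA_LAST_REPLICATED_TIME > ? "
    "ORDER BY order_id",
    (watermark,),
).fetchall()
print([r[0] for r in rows])  # order IDs replicated after the watermark
```

The same watermark query works against a real Snowflake or Google BigQuery target once you substitute the actual generated column name, including any prefix you configured.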

Support for generating Databricks Delta unmanaged tables on the target

When you create an application ingestion or database ingestion task, you can optionally select the Create Unmanaged Tables check box on the Target page of the task wizard to generate Databricks Delta target tables as unmanaged tables instead of managed tables. If you do so, you must also specify a parent directory that exists in Amazon S3 or Microsoft Azure Data Lake Storage to hold the Parquet files that are generated for each target table when captured DML records are processed.

Ability to control the case of letters in the names of generated target objects on Amazon Redshift, Google BigQuery, and Snowflake

When you create an application ingestion or database ingestion task that has an Amazon Redshift, Google BigQuery, or Snowflake target, you can set options on the Target page of the task wizard to control the case of letters in the names of generated target tables (or objects) and columns (or fields). Previously, the target names were always generated using the same case as the source names unless overridden by a cluster-level or session-level property on the target. Now, if you select Enable Case Transformation, you can select a Case Transformation Strategy option to use all uppercase, all lowercase, or the case of the source object names.
Note: For Snowflake targets, the Enable Case Transformation check box is unavailable if you select the Superpipe option.
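The three strategies behave as sketched below. This is an illustrative model only; the strategy identifiers are placeholders, not the exact values shown in the task wizard.

```python
def transform_name(source_name: str, strategy: str) -> str:
    """Apply a case transformation strategy to a generated target
    table or column name. Strategy names here are illustrative."""
    if strategy == "UPPERCASE":
        return source_name.upper()
    if strategy == "LOWERCASE":
        return source_name.lower()
    if strategy == "SAME_AS_SOURCE":
        return source_name
    raise ValueError(f"unknown strategy: {strategy}")

print(transform_name("CustOrders", "UPPERCASE"))       # CUSTORDERS
print(transform_name("CustOrders", "LOWERCASE"))       # custorders
print(transform_name("CustOrders", "SAME_AS_SOURCE"))  # CustOrders
```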

Mass Ingestion Applications

The February 2024 release of Mass Ingestion Applications includes the following new features and enhancements:

Support for new targets in initial load jobs with Oracle Fusion sources

Mass Ingestion Applications supports initial load jobs for the following new targets with Oracle Fusion sources using the BICC replication approach:

Child object support for Oracle Fusion sources in incremental load and combined load jobs

When you create an incremental load or combined initial and incremental load job with an Oracle Fusion source, you can choose to include child object data. This feature applies only if you use the REST replication approach and a Google BigQuery target.

Audit apply mode for Snowflake targets

For application ingestion incremental load and combined initial and incremental load jobs with Snowflake targets, you can configure Audit apply mode instead of the default Standard apply mode. In Audit mode, tasks write a row to the generated target table for each DML operation on a source table. You can optionally add metadata columns that describe the changes by selecting the Add Last Replicated Time, Add Operation Type (selected by default), Add Operation Time, Add Operation Sequence, and Add Before Images check boxes, and you can set a Prefix for Metadata Columns (INFA_ by default). The Superpipe (selected by default), Merge Frequency, Enable Case Transformation (selected by default), and Case Transformation Strategy options are also available for these jobs.
This feature is useful when you need an audit trail of changes to perform downstream processing on the data before writing it to the target database or when you need to examine the metadata for the changes. The target tables with the audit information cannot have constraints other than indexes.
To enable the use of audit tables, select Audit in the Apply Mode field on the Target page when defining a task. This field is available for new or undeployed tasks. Under Advanced, optionally select the check boxes for the metadata columns.
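The shape of an audit-mode target can be sketched as follows: each source DML operation appends a row instead of updating in place, so the table preserves the full change history. The SQLite simulation below assumes metadata columns named with the default INFA_ prefix; the exact generated column names are assumptions for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Audit-mode target: one row per source DML operation, plus metadata
# columns. Names use the default INFA_ prefix; exact generated names
# are assumptions for illustration.
conn.execute(
    "CREATE TABLE customer_audit ("
    "  cust_id INTEGER, name TEXT,"
    "  INFA_OPERATION_TYPE TEXT,"       # e.g. I, U, or D
    "  INFA_OPERATION_SEQUENCE INTEGER)"
)

# Source activity: insert, then update, then delete the same row.
ops = [
    (101, "Ann",  "I", 1),
    (101, "Anne", "U", 2),
    (101, None,   "D", 3),
]
conn.executemany("INSERT INTO customer_audit VALUES (?, ?, ?, ?)", ops)

# Downstream processing can replay the change history in order.
history = conn.execute(
    "SELECT INFA_OPERATION_TYPE FROM customer_audit "
    "ORDER BY INFA_OPERATION_SEQUENCE"
).fetchall()
print([t[0] for t in history])  # ['I', 'U', 'D']
```

Because every change survives as its own row, the audit table supports the use cases noted above, such as examining change metadata or transforming the data before it reaches a downstream database.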

Mass Ingestion Databases

The February 2024 release of Database Ingestion and Replication includes the following new features and enhancements:

Amazon Aurora PostgreSQL targets with Db2 for i sources in initial load and combined load jobs

You can use Amazon Aurora PostgreSQL targets in database ingestion initial load and combined initial and incremental load jobs that have a Db2 for i source. To connect to the Aurora PostgreSQL target, use the PostgreSQL ODBC driver and the PostgreSQL connector.
Previously, you could use Amazon Aurora PostgreSQL as a target only in incremental load jobs that had a Db2 for i source.