New features and enhancements

The April 2025 release of Data Ingestion and Replication includes the following new features and enhancements.

Common

The April 2025 release of Data Ingestion and Replication includes the following new features that are common to multiple types of ingestion and replication tasks.

Integration of Data Ingestion and Replication tasks with Data Integration taskflows

You can configure application ingestion and replication tasks and database ingestion and replication tasks to trigger Data Integration taskflows that process and transform the ingested data. This feature is available for initial load and incremental load tasks that have any supported source type and a Snowflake target.

When you define an ingestion and replication task, select the Execute in Taskflow option on the Let's Go page to make the task available to add to taskflows in Data Integration. For incremental load jobs, you can also select the Add Cycle ID option on the Task Details page for the target to include cycle ID metadata in the target table.

When you configure the taskflow in Data Integration, you can select a deployed ingestion and replication task as an event source and include any appropriate transformation type to transform the ingested data. The taskflow starts automatically when the initial load task successfully completes or after each CDC cycle in an incremental load operation. If a CDC cycle ends while the previous taskflow run is still in progress, the data is queued until that run completes.
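
The trigger-and-queue behavior after each CDC cycle can be sketched as follows. This is an illustrative Python model, not Informatica code; the class and method names are hypothetical.

```python
# Illustrative sketch: a taskflow is triggered after each CDC cycle, and a
# cycle that completes while a taskflow run is still active is queued until
# that run finishes. All names here are hypothetical.

class TaskflowTrigger:
    def __init__(self):
        self.running = False
        self.pending_cycles = []   # cycle IDs waiting for the current run to finish
        self.processed = []        # cycle IDs that have started a taskflow run

    def on_cycle_complete(self, cycle_id):
        """Called at the end of each CDC cycle."""
        if self.running:
            self.pending_cycles.append(cycle_id)   # queue: previous run still active
        else:
            self._start_run(cycle_id)

    def _start_run(self, cycle_id):
        self.running = True
        self.processed.append(cycle_id)

    def on_run_complete(self):
        """Called when a taskflow run finishes; drain the queue in order."""
        self.running = False
        if self.pending_cycles:
            self._start_run(self.pending_cycles.pop(0))
```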

CSV staging file format introduced for Databricks targets

The default staging file format for Databricks targets in new application ingestion and replication tasks and database ingestion and replication tasks is now CSV. Using the CSV format can significantly enhance performance during the staging process. When you define a new task in the configuration wizard, you can select CSV or Parquet in the new Staging File Format field under Advanced Target Properties.
Existing tasks continue to use Parquet by default. If you change the format, redeploy the task.

Encrypted passwords in the Data Ingestion and Replication CLI

In the Data Ingestion and Replication CLI, you can use the encryptText command to create an encrypted password from a clear-text password. You can then use the encrypted password in other CLI commands, such as those for creating application or database ingestion and replication tasks, deploying tasks, and viewing job status. The encrypted password key is stored in the key-store.txt file, which is created in the dbmicli directory that contains the .jar files.

Application Ingestion and Replication

The April 2025 release of Application Ingestion and Replication includes the following new features and enhancements:

Introducing SAP OData V2 Connector as a source

You can use SAP OData V2 Connector in application ingestion and replication initial load jobs to transfer data from OData V2-compliant SAP applications to compatible data warehouse and data lake targets. The connector uses the OData V2 API to ensure seamless data transfer between SAP systems and your target endpoints.
You can configure basic authentication to connect to specific SAP services or the catalog service on the SAP Gateway.

Column selection for NetSuite and SAP sources

When you configure an application ingestion and replication task of any load type that uses NetSuite Mass Ingestion Connector or SAP Mass Ingestion Connector to transfer data from a NetSuite or SAP source to a target, you can select only the columns that you need from the NetSuite or SAP source.

Enable soft deletes for Google BigQuery targets

You can use the Soft Deletes apply mode with Google BigQuery targets in application ingestion and replication incremental load and combined initial and incremental load jobs. This mode causes hard delete operations on the source to process as soft deletes on Google BigQuery targets. Application Ingestion and Replication marks the soft-deleted records with a "D" in the INFA_OPERATION_TYPE column on the target without deleting the records.
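The soft-delete apply semantics can be illustrated with a short sketch. This is hypothetical Python, not product code; only the INFA_OPERATION_TYPE column name and the "D" marker come from the feature description, and the in-memory "table" stands in for the BigQuery target.

```python
# Illustrative sketch of Soft Deletes apply mode: a source DELETE does not
# remove the target record; it marks the record with "D" in the
# INFA_OPERATION_TYPE column. The dict keyed by primary key is a stand-in
# for the target table.

def apply_change(target, key, operation, row=None):
    """Apply one CDC event to an in-memory target table."""
    if operation == "DELETE":
        # Soft delete: keep the record, mark it instead of removing it.
        if key in target:
            target[key] = dict(target[key], INFA_OPERATION_TYPE="D")
    else:
        # INSERT or UPDATE: upsert the row as usual.
        target[key] = dict(row or {})
```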

Bulk API version 1.0 support for Salesforce sources

You can use Bulk API 1.0 in application ingestion and replication initial load jobs and combined initial and incremental load jobs to handle large-scale data from Salesforce. Bulk API 1.0 uses primary-key chunking to achieve parallel processing in Salesforce, which optimizes the performance and speed of the task.
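
Primary-key chunking itself is performed by Salesforce, but the underlying idea can be sketched as splitting a key range into fixed-size chunks that can be extracted in parallel. The function below is an illustrative Python sketch with hypothetical parameters, not the connector's implementation.

```python
# Illustrative sketch of primary-key chunking: split a numeric key range into
# fixed-size chunks so that each chunk can be queried independently in
# parallel. The chunk size and key range are hypothetical, not Salesforce
# defaults.

def pk_chunks(min_id, max_id, chunk_size):
    """Yield (start, end) key ranges covering [min_id, max_id] inclusive."""
    start = min_id
    while start <= max_id:
        end = min(start + chunk_size - 1, max_id)
        yield (start, end)
        start = end + 1
```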

Row-level filtering for Salesforce sources

You can configure row-level filtering for a Salesforce source in an application ingestion and replication task of any load type that uses Salesforce Mass Ingestion Connector. Row-level filtering filters data rows for selected source tables and columns before the data is applied to the target. You can create Basic filters or more complex Advanced filters for columns in a selected table.
Note: This feature is available only for application ingestion and replication tasks that you configure in the new configuration wizard.
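
Conceptually, row-level filtering drops non-matching rows before they are applied to the target. The following Python sketch is illustrative only; in the product, Basic and Advanced filters are defined in the configuration wizard, not as code.

```python
# Illustrative sketch of row-level filtering: keep only the rows that satisfy
# every filter condition before the data reaches the target. The predicate
# style and AND semantics here are hypothetical.

def filter_rows(rows, predicates):
    """Return the rows for which all predicates are true."""
    return [row for row in rows if all(p(row) for p in predicates)]
```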

Database Ingestion and Replication

The April 2025 release of Database Ingestion and Replication includes the following new features and enhancements:

SAP HANA source table partitioning to improve the performance of initial load jobs

For database ingestion and replication initial load jobs that have an SAP HANA source, you can now enable partitioning of the source tables to distribute rows read from the source across multiple partitions for parallel processing. Database Ingestion and Replication determines the range of partitions by using the ROWID as the partition key. Use this feature to improve the performance of jobs with large source tables. You can enable partitioning and configure the number of partitions when you configure a database ingestion and replication task.
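
Range partitioning of this kind can be sketched as splitting the key space into roughly equal contiguous ranges, one per partition. The function below is an illustrative Python sketch using integer keys; the actual ROWID-based partition boundaries are determined internally by Database Ingestion and Replication.

```python
# Illustrative sketch: split a key range into N roughly equal contiguous
# ranges so that each partition can be read in parallel. Integer keys stand
# in for ROWID values.

def partition_ranges(min_key, max_key, partitions):
    """Return a list of (start, end) ranges covering [min_key, max_key]."""
    total = max_key - min_key + 1
    base, extra = divmod(total, partitions)
    ranges, start = [], min_key
    for i in range(partitions):
        size = base + (1 if i < extra else 0)  # spread the remainder evenly
        ranges.append((start, start + size - 1))
        start += size
    return ranges
```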

Oracle Database Ingestion connection property for selecting the JDBC driver type

When you define an Oracle Database Ingestion connection to use in database ingestion and replication tasks for connecting to an Oracle source or target, you can now select the Oracle JDBC Thin driver instead of the DataDirect JDBC driver. You do not need to install the JDBC Thin driver because it is packaged with the product. The Oracle JDBC Thin driver has demonstrated better performance when reading data from the source. However, you cannot yet use it with SSL encryption. The DataDirect JDBC driver remains the default driver for backward compatibility. After you create a connection with the Thin driver, use the connection only in new tasks that have not previously run with the DataDirect driver.

Standby database as alternate server for Db2 for LUW sources

For database ingestion and replication jobs that have a Db2 for LUW source, you can now use a standby database in read-only mode as an alternate server in case of a failed connection to the primary database.
To specify one or more standby databases, enter the AlternateServers parameter in the Advanced Connection Properties field of the Db2 for LUW Database Ingestion connection properties, as in the following example:
AlternateServers=(server2:50000;DatabaseName=TEST2,server3:50000;DatabaseName=TEST3)
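
The AlternateServers value packs each server as host:port followed by a DatabaseName property. The following Python sketch parses that format for illustration only; the Secure Agent, not user code, performs the actual failover.

```python
# Illustrative sketch: parse an AlternateServers=(...) value of the form
# host:port;DatabaseName=NAME entries separated by commas. This parser is
# hypothetical helper code, not part of the product.

def parse_alternate_servers(value):
    """Return a list of (host, port, database) tuples."""
    inner = value.strip()
    if inner.startswith("AlternateServers=(") and inner.endswith(")"):
        inner = inner[len("AlternateServers=("):-1]
    servers = []
    for entry in inner.split(","):
        hostport, _, props = entry.partition(";")
        host, _, port = hostport.partition(":")
        # Remaining properties are KEY=VALUE pairs separated by semicolons.
        prop_map = dict(p.split("=", 1) for p in props.split(";") if "=" in p)
        servers.append((host, int(port), prop_map.get("DatabaseName")))
    return servers
```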

Automatic switchover to another Secure Agent in a Secure Agent group for jobs with Db2 for i and Db2 for z/OS sources

For database ingestion and replication incremental load and combined initial and incremental load jobs that have a Db2 for i or Db2 for z/OS source, if the active Secure Agent on which the job is running goes down unexpectedly, the job can now automatically switch over to another available agent in the group after the 15-minute heartbeat interval elapses.
For Db2 for i and Db2 for z/OS sources, the switchover to another Secure Agent was previously enabled in initial load jobs only.

Undeploy command available for CDC Staging Tasks

If you configured CDC Staging Tasks for database ingestion and replication jobs in the latest configuration wizard, the monitoring interfaces show the status of each task. On the My Jobs page and Monitor > All Jobs page, the Undeploy command is now available in the Actions menu for CDC Staging Tasks. You can undeploy a CDC Staging Task that has the status of Aborted, Deployed, Failed, or Stopped.

Streaming Ingestion and Replication

The April 2025 release of Streaming Ingestion and Replication includes the following new feature and enhancement:

JMS version 3.0 support for Oracle Weblogic JMS provider

For streaming ingestion and replication tasks with a JMS source, you can use JMS version 3.0 with the Oracle Weblogic JMS provider. Specify the JMS version in the JMS source advanced properties in the task. The default version remains 2.0, which is applicable to other compatible JMS providers.