The October 2025 release of Data Ingestion and Replication includes the following changed behavior.
Oracle LOB data resolved as "Not Available" if it cannot be fetched from the database
For database ingestion and replication jobs that have an Oracle source, LOB data is now replicated as "Not Available" if the data is incompletely logged or cannot be interpreted, and it also cannot be fetched directly from the database because the row has been deleted or moved.
Previously, if LOB data could not be fetched from the database because the original row was no longer available, the task failed.
Simplified job details to improve clarity and usability
For streaming ingestion and replication jobs, the Performance tab is no longer included in the Job Details page. This tab previously displayed throughput graphs for the source and target, along with job performance metrics such as the total number of messages and the total kilobits of messages streamed per second.
You can refer to the Overview tab in the Job Details page to view the total number of events and related performance insights.
Change to the Data Ingestion and Replication CLI installation path
The Data Ingestion and Replication Command-Line Interface (CLI) files are now installed in the <Cloud Secure Agent installation>\apps\dbmicli directory. The directory contains .jar files, sample and customized YAML input files, default and customized dbmiclienv properties files, and any key-store.txt file that is generated when encryption is enabled. Previously, the CLI files were installed in the version-dependent path <Cloud Secure Agent installation>\apps\Database_Ingestion\<latest_version>\dbmicli.
The installation also now delivers the dbmiclienv_default.properties file instead of the dbmiclienv.properties file. You can copy the default properties file to create a customized one. Your customized file is not overwritten by subsequent installations.
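For example, a customized properties file can be created by copying the delivered default. This is an illustrative sketch, not a product utility: the Secure Agent installation root and the customized file name (dbmiclienv.properties) are assumptions, so adjust both for your environment.

```python
import shutil
from pathlib import Path


def ensure_custom_properties(cli_dir: Path) -> Path:
    """Create a customized properties file from the delivered default.

    Illustrative sketch only. The customized file name
    (dbmiclienv.properties) is an assumption; the install delivers
    dbmiclienv_default.properties in <Cloud Secure Agent installation>\\apps\\dbmicli.
    """
    default_file = cli_dir / "dbmiclienv_default.properties"
    custom_file = cli_dir / "dbmiclienv.properties"
    # Copy only when no customized file exists yet, so a later run
    # never overwrites your customizations.
    if not custom_file.exists():
        shutil.copy(default_file, custom_file)
    return custom_file
```

Because the copy is skipped when the customized file already exists, re-running this after a new install leaves your customizations intact, mirroring the behavior described above.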
Source NULL data gets loaded as NULL to Amazon Redshift targets
For application ingestion and replication tasks and database ingestion and replication tasks that have an Amazon Redshift target, if a source column that has a string data type contains NULL data, the data is replicated as infa_null to the CSV file in the staging location. Data Ingestion and Replication uses the NULL AS 'infa_null' clause to interpret any occurrence of 'infa_null' in the CSV file as an actual NULL in the target table.
Previously, when input data contained NULL values for string data type columns, the values were written as empty strings ("") to the Amazon Redshift target.
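The sentinel approach can be sketched as follows. This is an illustrative example of how a NULL marker keeps NULLs distinguishable from empty strings in a staged CSV file, not the product's implementation; the infa_null marker and the NULL AS 'infa_null' clause are taken from the description above.

```python
import csv
import io


def stage_rows(rows, sentinel="infa_null"):
    """Write rows to CSV text, replacing None with a NULL sentinel.

    Illustrative sketch only: the sentinel keeps NULL values and empty
    strings distinguishable in the staged file.
    """
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in rows:
        writer.writerow(sentinel if value is None else value for value in row)
    return buf.getvalue()


# None becomes "infa_null" in the staged CSV, while "" stays an empty field.
staged = stage_rows([("a", None), ("", "b")])
```

Loading the staged file with a Redshift COPY that specifies NULL AS 'infa_null' then turns every occurrence of the sentinel into an actual NULL in the target table, while empty strings remain empty strings.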
Output format selection for Amazon S3 targets affected by new Open Table Format options
With the Open Table Format now available for loading data to Amazon S3, the way you select output formats for Amazon S3 in application ingestion and replication and database ingestion and replication task properties has changed. Previously, you chose the output file format directly as CSV, Avro, or Parquet. Now, you first select either Apache Iceberg or None in the Open Table Format field. If you select None, you can then select CSV, Avro, or Parquet as before. If you select Apache Iceberg, the output format is set to Parquet by default.
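The new selection rule can be summarized in a short sketch. The function and its names are illustrative, not part of the product:

```python
def resolve_output_format(open_table_format, file_format="CSV"):
    """Illustrative sketch of the new output-format selection rule
    for Amazon S3 targets."""
    if open_table_format == "Apache Iceberg":
        # Apache Iceberg sets the output format to Parquet by default.
        return "Parquet"
    if open_table_format == "None":
        # Without an open table format, choose CSV, Avro, or Parquet as before.
        return file_format
    raise ValueError("Open Table Format must be 'Apache Iceberg' or 'None'")
```

The point of the sketch is that the file-format choice is now subordinate to the Open Table Format choice: it only applies when Open Table Format is None.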
Teradata INTEGER and SMALLINT data types are loaded as INTEGER to Amazon S3 targets
Database ingestion and replication initial load tasks that have a Teradata source now map the INTEGER and SMALLINT data types to the INTEGER data type for an Amazon S3 target when the encoding format is configured as Avro or Parquet.
Previously, the INTEGER and SMALLINT data types were loaded to an Amazon S3 target as the STRING data type.
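The change in the default data-type mapping can be expressed as a simple lookup. This is an illustrative sketch of the rule stated above, not product code; the pass-through for other source types is an assumption.

```python
def map_teradata_integer_types(source_type):
    """Sketch of the new default mapping for Teradata initial load tasks
    to Amazon S3 when the encoding format is Avro or Parquet."""
    if source_type in ("INTEGER", "SMALLINT"):
        return "INTEGER"  # previously these were loaded as STRING
    # Other source types: assumed unchanged by this release.
    return source_type
```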
New log file names for the Database Ingestion agent service
The names of the log files that the Database Ingestion service of the Secure Agent generates for its sub-services have been changed to include the application package version. This change applies to each sub-service that runs under the Database Ingestion service. For example:
For the DBMI Agent:
• agent_start.log is renamed to agent_start_<package_version>.log
• agent_stop.log is renamed to agent_stop_<package_version>.log
• agent_status.log is renamed to agent_status_<package_version>.log
For the task container:
• container_service.log is renamed to container_service_<package_version>.log
For Metadata Manager:
• metadata_manager.log is renamed to metadata_manager_<package_version>.log
Including the package version in the log file names enables you to easily differentiate the logs generated by different versions of the application.
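The renaming pattern in the examples above can be sketched as a small helper. The helper and the sample version string are illustrative; the actual package version format is whatever your Secure Agent reports.

```python
def versioned_log_name(base_name, package_version):
    """Insert the application package version before the .log extension,
    following the renaming pattern shown above (illustrative sketch)."""
    stem, ext = base_name.rsplit(".", 1)
    return f"{stem}_{package_version}.{ext}"


# With a hypothetical package version string "1.0":
# versioned_log_name("agent_start.log", "1.0") -> "agent_start_1.0.log"
```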