Issue | Description |
|---|---|
DBMI-25680 | After the July 2025 release, application and database ingestion and replication jobs with Kafka targets might fail with the following error, even after they are undeployed and deployed again: Error: Not authorized to access group: infaGroup The problem occurs because the default consumer group, infaGroup, which Database Ingestion and Replication uses for high availability, is not authorized. |
DBMI-25542 | After the July 2025 release, database ingestion and replication jobs and application ingestion and replication jobs that use the combined load type and that have a data lake target, such as Microsoft Azure Data Lake Store, use asynchronous uploads of output files for the unload phase by default. If the Use Cycle Partitioning for Data Directory option is selected for these jobs, the output files from the unload phase are incorrectly stored under the data directory instead of under a timestamp subdirectory. |
DBMI-25460 | Database ingestion and replication jobs and application ingestion and replication jobs might encounter the following error: Purging of illegal message failed for topic CONTAINER_SERVICE-task-status-topic |
DBMI-25425 | In rare cases, the DBMI Agent service does not shut down cleanly after prolonged failures of communication with the Informatica cloud, which interrupts the agent restart process and eventually results in an ERROR state. |
DBMI-19633 | When you create a source table selection rule in the new task configuration wizard, if you define an Exclude rule that excludes a primary key column, the column is deselected. The primary key columns should remain selected for proper CDC processing. |
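For DBMI-25680 above, one possible remedy on the Kafka side is to grant the default consumer group read access with Kafka's standard ACL tool. This is a sketch only: the broker address, principal name, and admin client configuration file below are illustrative assumptions; only the group name infaGroup comes from the issue description.

```shell
# Hypothetical sketch: authorize the default consumer group "infaGroup"
# that Database Ingestion and Replication uses for high availability.
# Broker, principal, and admin-client config are placeholder values.
CMD="kafka-acls.sh --bootstrap-server broker.example.com:9092 \
  --command-config admin-client.properties \
  --add --allow-principal User:dbmi-agent \
  --operation Read --group infaGroup"

# Printed rather than executed here, since running it requires a live broker.
echo "$CMD"
```

Whether an ACL grant is sufficient depends on the cluster's authorizer configuration; consult your Kafka administrator for the correct principal.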
Issue | Description |
|---|---|
DBMI-23005 | When an application ingestion and replication job or database ingestion and replication job has persistent storage enabled, the following error can be reported if change stream corruption occurs: [CDCPUB_10066] TRACE: [Unexpected error while examining local storage: java.io.StreamCorruptedException: invalid stream header: 00000000]. |
Issue | Description |
|---|---|
DBMI-24057 | Application ingestion and replication jobs and database ingestion and replication jobs that have the DBMI_CDC_READER_CFG_OPTIONS environment variable set might fail with the following message: "tableRenameHandler" is null |
DBMI-23950 | For application ingestion and replication jobs and database ingestion and replication jobs that are configured to check if the count of DML records from the source has increased since the last check, the DML check might not be performed. |
DBMI-23592 | If you add multiple custom properties or table renaming rules when configuring an application ingestion and replication task or database ingestion and replication task in the latest task wizard, the Back, Next, Save, and Deploy buttons are no longer displayed. |
DBMI-23368 | If you set the writerTruncateTargetOnStartup custom property to false for an application ingestion and replication job or database ingestion and replication job that uses the combined load type and Audit apply mode, the job might truncate old audit data in the target table when it runs the second time after redeployment. |
Issue | Description |
|---|---|
DBMI-26372 | When you view details of an application ingestion and replication job or a database ingestion and replication job from Monitor, the overall job status might be missing from the job details page. (October 2025) |
DBMI-20617 | If you select a primary cloud data warehouse from the Data Integration Home page and then start the Data Ingestion and Replication wizard from the New option in the navigation bar to create a task, the primary cloud data warehouse is not automatically selected on the Destination page. You have to select it again on that page. Workaround: None. (July 2025) |
Issue | Description |
|---|---|
DBMI-25116 | For application ingestion and replication jobs and database ingestion and replication jobs that have an Amazon Redshift target, if a source column of a string data type contains null values, the values are written as empty strings ("") to the target. (October 2025) |
Issue | Description |
|---|---|
DBMI-26101, DBMI-26373 | For application ingestion and replication tasks or database ingestion and replication tasks that use a Databricks target, the volume that is generated automatically as the staging environment is not deleted along with the "temp_jobid" directory when the task is in a stopped or failed state. (October 2025) |
Issue | Description |
|---|---|
DBMI-24787 | If you configured a proxy server but then tried to bypass it for Secure Agent communication with Google BigQuery targets by setting the task_container.jvm.http.nonProxyHosts property at the agent level, ingestion and replication jobs that have Google BigQuery targets might fail. The jobs fail because communication still goes through the proxy, and communications from the proxy are blocked by a firewall. (October 2025) |
Issue | Description |
|---|---|
DBMI-25753 | For application ingestion and replication tasks and database ingestion and replication tasks that use a single topic for all writes to the Kafka target, the restart logic examines all Kafka topics instead of only the required topic, which might cause the task to fail if access to all topics is not allowed. (October 2025) |
DBMI-23577 | If you enter a password in the Additional Security Properties field of the Kafka connection properties, the password might not be properly masked. (May 2025) |
Issue | Description |
|---|---|
DBMI-19405 | Tests of a Microsoft Azure Synapse Analytics Database Ingestion connection for an application ingestion and replication task or database ingestion and replication task fail with the following error if the user name specified in the connection properties includes a special character such as @ and the target database uses AAD authentication: Cannot instantiate datasource because of error: Failed to initialize pool: Cannot open server 'noconline.onmicrosoft.com' requested by the login. The login failed. ClientConnectionId: <identifier> This problem occurs because the Synapse Analytics connector package is missing some .jar files that are required to connect to the target database in this situation. Workaround: Copy the following .jar files to the connector package: The package location is: package-AzureDWGen2MI.xxx\package\dw\thirdparty\informatica.azuredwgen2mi (August 2024) |
Issue | Description |
|---|---|
DBMI-25163 | For application ingestion and replication jobs and database ingestion and replication jobs that have a SQL Server target, if more than three DML operations are performed on primary key column values of the same row in a CDC cycle, the job might fail with a primary key constraint error. (October 2025) |
Issue | Description |
|---|---|
DBMI-23264 | Application ingestion and replication jobs and database ingestion and replication jobs that have an Oracle target might encounter the error 'OutOfMemoryError: GC overhead limit exceeded' when processing large source transactions. (May 2025) |
Issue | Description |
|---|---|
DBMI-26132 | Application ingestion and replication and database ingestion and replication combined load jobs might fail with the following error: BacklogReader does not implement insertMarkerWithAnnotations() (October 2025) |
DBMI-26085 | If multiple application ingestion and replication tasks or database ingestion and replication tasks that have differently configured Snowflake target properties are deployed nearly simultaneously, the Data Ingestion and Replication writer might use the incorrect target properties from the other task. In this situation, the deployment might fail or unexpected behavior related to the target table definitions can occur. (October 2025) |
DBMI-25090 | After the May 2025 release, application or database ingestion and replication jobs that have a Snowflake Data Cloud target and use Authorization Code authorization might fail with the following error when the access token that's generated from the user interface expires: caused by: net.snowflake.client.jdbc.SnowflakeSQLLoggedException: JDBC driver internal error: Unsupported authenticator type: OAUTH The error occurs because of the JDBC driver upgrade in the May release. (October 2025) |
DBMI-24147 | Application ingestion and replication jobs and database ingestion and replication jobs that use the combined load type and process empty source tables might fail while executing the Remove query that cleans up staging files on the Snowflake target. (May 2025) |