Data Ingestion and Replication Release Notes > Database Ingestion and Replication > Connector and connectivity issues
  

Connector and connectivity issues

Read the following pages to learn about fixed issues, known limitations, and third-party limitations that apply to connectors that you can use in your service.

Amazon S3 V2 Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-28771
Database ingestion and replication incremental load or combined load jobs that have an Amazon S3 target might create a very large cache file that consumes excessive on-disk space if a mass update occurs on the source. With this fix, on-disk space usage for the cached data has been reduced to accommodate this situation.
(April 2026)
DBMI-26870
Database ingestion and replication apply jobs in a CDC staging group that have an Amazon S3 target fail with the following error if the Region Name connection property is set to a value such as AWS GovCloud (US-West) that is not supported in the older connector package version that is still being used by existing jobs:
Caused by: java.lang.IllegalArgumentException: Cannot create enum from region description AWS GovCloud (US-West) value
(April 2026)
DBMI-26170
When you run a database ingestion and replication job with CDC Staging hosted on an Amazon S3 target, the task might fail if the accessKey and secretKey fields in the S3 connection properties are left empty.
(April 2026)

Known issues

The following table describes known issues:
Issue
Description
DBMI-11732
If database ingestion and replication incremental load or combined initial and incremental load jobs replicate LOB source data to an Amazon S3 target and use the CSV format for the target output file, the LOB data appears as empty strings in the target file.
(November 2022)
DBMI-2297
Although the Amazon S3 connection properties allow users to specify an IAM role, you cannot use temporary security credentials generated by the AssumeRole method of the AWS Security Token Service API to authorize user access to Amazon S3 resources.
(April 2020)

Db2 for i Database Ingestion Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-28584
Database ingestion and replication combined load jobs that have a Db2 for i source and Snowflake target load duplicate records to the target after an insert followed by a rollback operation occurs on the source. In this case, the insert is processed but the rollback operation is not.
(April 2026)
DBMI-25956
A database ingestion and replication incremental load or combined load job that has a Db2 for i source and Snowflake target fails if you add or drop a source table while the job and journaling are stopped and then try to resume the job. A way to resume the job without losing data or re-reading a large amount of source data is needed.
(April 2026)
DBMI-25518
Database ingestion and replication incremental load and combined load jobs that have a Db2 for i source might fail repeatedly with cache container errors after you add a table for CDC processing, even if you resume or redeploy the job.
(October 2025)
DBMI-25516
Database ingestion and replication jobs that have a Db2 for i source might fail when generated key values sent to the cloud cache contain special characters.
(October 2025)
DBMI-25513
Database ingestion and replication jobs that have a Db2 for i source might have degraded performance caused by a large workload on non-transactional Units of Work (UOW).
(October 2025)
DBMI-25497
Database ingestion and replication incremental load and combined load jobs that have a Db2 for z/OS source might encounter degraded performance during CDC processing because of extraneous processing that occurs on every stored procedure call.
(October 2025)
DBMI-25335
When database ingestion and replication incremental load or combined load jobs that have a Db2 for i source are restarted, a different sequence token might be generated for the same record, which can cause the job to write the record twice to the target.
(October 2025)
DBMI-25248
Database ingestion and replication incremental load and combined load jobs that have a Db2 for z/OS source might fail with the following error after a Db2 catalog entry that has no table objects is changed, for example, an index update in the SYSINDEXES catalog:
Cannot invoke "String.equals(Object)" because the return value of "com.infa.rat.pwxdbmi.pwxjava.zos.zosSchemaDriftESD.getOwner()" is null
In this case, the catalog entry should be ignored so that the job can continue processing.
(October 2025)
DBMI-21838
A database ingestion and replication incremental load job that has a Db2 for i source fails when you first attempt to start it if you set the Initial Start Point for Incremental Load property to a specific position or date and time and if the log collector is using only a sequence number and not a log timestamp to access the data.
(April 2026)

Db2 for LUW Database Ingestion Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-28869
Database ingestion and replication incremental load or combined load jobs that have a Db2 for LUW source and use the Log-based CDC method might acquire and hold a persistent lock on the capture catalog table until the job ends.
(April 2026)
DBMI-28857
Database ingestion and replication incremental load or combined load jobs that have a Db2 for LUW source and use the Log-based CDC method might end abnormally when a large number of DML events are waiting to be published and a timer triggers a request for a capture progress marker.
(April 2026)
DBMI-28605
Database ingestion and replication jobs that have a Db2 for LUW source might remove trailing spaces from fixed-length data type columns, such as CHAR and GRAPHIC, resulting in duplicate values and violations of unique constraints.
(April 2026)
DBMI-28382
For database ingestion and replication jobs that have a Db2 for LUW source and an Oracle target, the deployment process might not incorporate the source character encoding, resulting in Oracle target columns being created with an incorrect VARCHAR2 size, which causes issues with multibyte characters and job failures.
(April 2026)
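The trailing-space problem in DBMI-28605 above can be sketched in a few lines. This is an illustrative sketch, not Informatica code: it only simulates how trimming the padding of fixed-length CHAR values can collapse distinct stored values into duplicates.

```python
# Illustrative sketch (not Informatica code): in a fixed-length CHAR(4)
# column, 'AB', 'AB ', and 'AB  ' are distinct stored values.
raw_values = ["AB  ", "AB ", "AB"]

# Behavior described in DBMI-28605: trailing spaces removed during replication.
trimmed = [v.rstrip(" ") for v in raw_values]
assert trimmed == ["AB", "AB", "AB"]   # three identical values

# A unique constraint on the target now sees duplicates.
assert len(set(trimmed)) == 1

# Correct behavior: preserve the fixed-length padding.
assert len(set(raw_values)) == 3
```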

Known issues

The following table describes known issues:
Issue
Description
DBMI-29270
Database ingestion and replication jobs with a Db2 for LUW source fail after a restart attempt if the source database name is less than eight characters in length.
(April 2026)
DBMI-26309
Database ingestion and replication incremental load and combined load jobs that have a Db2 for LUW source on Linux or Windows and use the Log-based CDC method fail if transient ODBC connection errors, such as network errors or database restarts, occur during communication with Db2.
Workaround: Resume the job.
(October 2025)

Db2 for z/OS Database Ingestion Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-28872
Database ingestion and replication incremental load jobs that have a Db2 for z/OS source might stop capturing change data and hang indefinitely because the log parser consumer thread can’t acquire a lock on a parser queue.
(April 2026)
DBMI-28561, DBMI-28989
Database ingestion and replication tasks that have a Db2 for z/OS source generate log timestamps from the Db2 WLM stored procedure, which might contain calculation errors for sub-second values below 0.1 second.
(April 2026)
DBMI-28143
For database ingestion and replication incremental load or combined load jobs that have a Db2 for z/OS source, the latency of writing changes to the target can't be calculated as the difference between the values in the Operation Time and Last Replicated Time metadata columns because the Operation Time records the log time of the DML operation. Instead, for latency reporting, the Operation Time should report the UOW commit time.
(April 2026)
DBMI-28092
Performance statistics for database ingestion and replication incremental load and combined load jobs might indicate a delay in processing in an environment with many sub-second requests for Db2 log data that return few records. Each request creates a record queue. After processing a queue, the Db2 reader waits 1 second by default for the next queue. With this fix, the wait time is reduced to 100 milliseconds to get the log data from the queues faster and reduce latency.
(April 2026)
DBMI-27718
Database ingestion and replication incremental load and combined load jobs that have a Db2 for z/OS source need to write enhanced statistics messages to the job log to help with diagnosing CDC performance issues.
(April 2026)
DBMI-27717
If database ingestion and replication incremental load and combined load jobs that have a Db2 for z/OS source encounter Db2 compensation records in the log, a slowdown in CDC processing might occur. The processing of compensation records needs to be improved.
(April 2026)
DBMI-27554
Database ingestion and replication incremental load or combined load jobs might generate multiple inactive connections on a Db2 for z/OS source system if the generation of capture progress markers is enabled and the job disconnects from the subsystem and then retries the connection multiple times. While the job is retrying the connection, each capture progress marker request creates an inactive connection thread on the Db2 subsystem.
DBMI-26380
For database ingestion and replication jobs that have a Db2 for z/OS source, if network connectivity is dropped and then restored, the connection retry attempts fail to restore the log collector connection to the source with the following error:
WARN com.informatica.msglogger - [CDCPUB_10066] TRACE: [zosLogCollector getRBARange(), Log request failed. [informatica][DB2 JDBC Driver]Object has been closed., connection attempts <0>, waiting <10000> ms for retry of error code <0> and error state <HY000>.].
(October 2025)
DBMI-20084
Database ingestion and replication incremental load jobs that have a Db2 for z/OS source might fail if the sequences in the change data read from the source are not in ascending order or have other issues.
(October 2025)
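The queue-polling change in DBMI-28092 above can be illustrated with a minimal sketch. This is not Informatica code; `record_queues` and `wait_for_next_queue` are hypothetical names that only model a reader waiting a bounded time for the next record queue, where a shorter wait reduces idle latency between queues.

```python
import queue
import time

# Illustrative sketch (not Informatica code): the log reader waits a bounded
# time for the next record queue before checking again.
record_queues = queue.Queue()

def wait_for_next_queue(timeout_s):
    """Return the next record queue, or None after waiting timeout_s seconds."""
    try:
        return record_queues.get(timeout=timeout_s)
    except queue.Empty:
        return None

start = time.monotonic()
assert wait_for_next_queue(0.1) is None   # nothing queued: give up after ~100 ms
elapsed = time.monotonic() - start
assert elapsed < 1.0                      # far shorter than the old 1-second wait

record_queues.put(["log record 1", "log record 2"])
assert wait_for_next_queue(0.1) == ["log record 1", "log record 2"]
```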

Known issues

The following table describes known issues:
Issue
Description
DBMI-29271
Database ingestion and replication jobs with a Db2 for z/OS source fail after a restart attempt if the source database name is less than eight characters in length.
(April 2026)

Google BigQuery V2 Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-28558
Database ingestion and replication jobs that have a Google BigQuery target run temporary table creation and merge queries that have significantly higher compute costs when using Soft Deletes apply mode, as compared to the costs when using Standard apply mode. This fix adds the CLUSTER BY clause in the DDL for new tables to optimize the queries run on the target when using Soft Deletes mode. This solution requires the source to have a primary key or unique index.
(April 2026)

Google Cloud Storage V2 Connector

Known issues

The following table describes known issues:
Issue
Description
DBMI-11732
If database ingestion and replication incremental load or combined initial and incremental load jobs replicate LOB source data to a Google Cloud Storage target and use the CSV format for the target output file, the LOB data appears as empty strings in the target file.
(November 2022)

Microsoft Azure Data Lake Storage Gen2 Connector

Known issues

The following table describes known issues:
Issue
Description
DBMI-11732
If Database ingestion and replication incremental load or combined initial and incremental load jobs replicate LOB source data to a Microsoft Azure Data Lake Storage Gen2 target and use the CSV format for the target output file, the LOB data appears as empty strings in the target file.
(November 2022)

Microsoft Fabric Data Warehouse Connector

Known issues

The following table describes known issues:
Issue
Description
DBMI-28927
In a combined load database ingestion and replication task that uses the Audit apply mode, the INFA_OPERATION_TYPE columns are not populated during the initial unload phase, which results in NULL values in the operational columns.
(April 2026)
DBMI-27827
If you enable the Add Operation Sequence option for a combined load database ingestion and replication task with the Audit apply mode to load data from Oracle to Microsoft Fabric Data Warehouse, the task fails.
(April 2026)

Microsoft SQL Server Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-28492
Database ingestion and replication incremental and combined load jobs that have a SQL Server source might assign duplicate sequence values to change records, potentially causing inconsistencies or data corruption in the target tables.
(April 2026)
DBMI-27721
Database ingestion and replication combined load jobs with a SQL Server source that uses Query-based CDC and a Snowflake target might fail with the following type of error after running for a few hours:
Writer <CDC-UNLOAD-data_store.dbo.employee Async Object Distributor 1> failed to process <END_OF_TABLE_DATA> marker event received for table <data_store.dbo.fx_rate> with stream id <1> as it is not applicable to flush strategy <TimeoutOrRowCount>
(April 2026)
DBMI-27160
During CDC processing, database ingestion and replication jobs that have a SQL Server source might hang and stop sending change data to the target because the SQL Server DataDirect ODBC driver fails to recognize the timeout set for SQL query execution.
(April 2026)
DBMI-26235
Database ingestion and replication jobs that have an Oracle source and SQL Server target incorrectly map source CLOB columns to target VARCHAR columns by default. Because SQL Server VARCHAR columns support only ASCII data, any non-ASCII characters from the source are not stored correctly in the target. You can change the mapping to a target data type such as NVARCHAR(MAX) to avoid the problem and achieve better data consistency.
(October 2025)
DBMI-26134
Database ingestion and replication incremental load or combined load jobs with a Snowflake target that uses Snowpipe Streaming (the Superpipe option) might fail to restart after a channel invalidation error.
(April 2026)
DBMI-26127
When a deadlock occurs during the FETCH call for a database ingestion and replication job that has a SQL Server source, the job might fail without attempting any retries.
(April 2026)
DBMI-25825
Database ingestion and replication jobs that have an Oracle source and SQL Server target incorrectly map source VARCHAR2 columns to target VARCHAR columns by default. Because SQL Server VARCHAR columns support only ASCII data, any non-ASCII characters from the source are not stored correctly in the target. Change the default mapping to a target data type such as NVARCHAR(MAX) to avoid the problem and achieve better data consistency.
(October 2025)
DBMI-25520
Database ingestion and replication incremental load and combined load jobs that have a SQL Server source with XML columns and a SQL Server target might fail with the following error:
Conversion of one or more characters from XML to target collation impossible. Error code: 6355
The error occurs when the job tries to convert the source XML data to the intermediate varchar(max) data type in staging, before writing the data to the target XML column.
(October 2025)
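The character-loss issue behind DBMI-26235 and DBMI-25825 above can be demonstrated with a short sketch. This is an illustrative sketch, not Informatica code: it uses ASCII and UTF-16 round trips in Python to model why an ASCII-only VARCHAR column mangles non-ASCII source text while a Unicode NVARCHAR column preserves it.

```python
# Illustrative sketch (not Informatica code): non-ASCII source characters
# cannot survive an ASCII-only (VARCHAR-like) round trip, but a Unicode
# UTF-16 (NVARCHAR-like) round trip preserves them.
source_value = "Zürich Ω"

# VARCHAR-like round trip: characters outside ASCII are replaced.
varchar_value = source_value.encode("ascii", errors="replace").decode("ascii")
assert varchar_value == "Z?rich ?"

# NVARCHAR-like round trip: UTF-16 preserves every character.
nvarchar_value = source_value.encode("utf-16-le").decode("utf-16-le")
assert nvarchar_value == source_value
```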

MongoDB Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-27212
For database ingestion and replication combined load jobs that have a MongoDB source and use the Audit apply mode with the Oracle target, the order of audit and non-audit columns in the source schema might not match the order of values on the target side, causing values to be inserted into incorrect columns.
(April 2026)

Known issues

The following table describes known issues:
Issue
Description
DBMI-26342
Database ingestion and replication combined load tasks that have a MongoDB source and use the MongoDB Atlas instance might fail.
(October 2025)

MySQL Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-27622
Database ingestion and replication jobs that have a MySQL source and Snowflake target might fail with the following error after insert or update operations occur on the Sequence column in the INFORMATICA_CDC_RECOVERY table causing unexpectedly long restart token strings:
[Failed to execute target queries. Error: DML operation to table xxx.INFORMATICA_CDC_RECOVERY failed on column SEQUENCE with error: String '<long_string>' is too long and would be truncated].
(April 2026)

Oracle Database Ingestion Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-28939
Database ingestion and replication combined initial and incremental load jobs with an Oracle source that contains CLOB columns with the NOT NULL constraint might fail when processing an insert operation that writes an empty LOB to the table followed by an update operation that populates the LOB column with data. In this situation, the following error is issued:
PWX-36200 ORAD: Column conversion error: [36202] NULL value found for column that is not nullable for Table name=<table_name>, column[25] segment[25] RO_TERMS_AND_CONDITIONS. Detail Type = CLOB, Oracle Type = CLOB.
(April 2026)
DBMI-28770
If process ID memory usage on Linux reaches 60%, Secure Agent services might stop, causing database ingestion and replication jobs to fail. This problem occurs because of a memory leak in the Oracle Client.
(April 2026)
DBMI-28382
Deployment of database ingestion and replication jobs that have a Db2 for LUW source and an Oracle target might not incorporate the source character encoding. In this case, Oracle target columns might be created with an incorrect VARCHAR2 size, which can cause multibyte-character issues and the job to fail.
(April 2026)
DBMI-27796
If database ingestion and replication jobs capture change data from Oracle CLOB columns that use BASICFILE storage, unreadable or garbled characters might be replicated to the corresponding target columns.
(April 2026)
DBMI-27212
For database ingestion and replication combined load jobs that have a MongoDB source and use the Audit apply mode with the Oracle target, the order of audit and non-audit columns in the source schema might not match the order of values on the target side, causing values to be inserted into incorrect columns.
(April 2026)
DBMI-27063
After Oracle performs a QMI-array insert operation on an Oracle index-organized table (IOT) that contains only key data, database ingestion and replication jobs that capture change data from that table might fail with a segmentation fault (SIGSEGV) in the PWXORL code.
(April 2026)
DBMI-27055
Database ingestion and replication jobs that have an Oracle physical standby source might time out with the following message:
PWX-36564 ORAD: LRdr for member 2 has been unresponsive for more than the timeout threshold of 600. Last response time was 2025/09/30 06:54:09.
The problem occurs because an incomplete LWN was written to the standby logs. Log transport abandons the standby log in favor of an archive log, which causes a code loop.
(April 2026)
DBMI-25346
Database ingestion and replication incremental load and combined load jobs that have an Oracle source and use the BFILE log access method might report stale buffer files when the files are in an Oracle server file system on Linux. If the default settings for the status check interval and archive wait time are too high, this cache storage problem might not be detected or detection might be delayed.
(April 2026)
DBMI-26235
Database ingestion and replication jobs that have an Oracle source and SQL Server target incorrectly map source CLOB columns to target VARCHAR columns by default. Because SQL Server VARCHAR columns support only ASCII data, any non-ASCII characters from the source are not stored correctly in the target. Change the default mapping to a target data type such as NVARCHAR(MAX) to avoid the problem and achieve better data consistency.
(October 2025)
DBMI-26002
Database ingestion and replication incremental load or combined load jobs that have an Oracle source might fail with the following error:
PWX-36465 ORAD Info Mbr 1: DM sequence error: Unsupported operation: multi-block (MBU) sequence in process and not kDE_DML_MULTIBLK
(October 2025)
DBMI-25848
Database ingestion and replication combined load jobs that have a source table with a primary key and an Oracle target might not write all change records to the target if DML operations occur in the backlog of captured changes before change apply processing starts.
(October 2025)
DBMI-25825
Database ingestion and replication jobs that have an Oracle source and SQL Server target incorrectly map source VARCHAR2 columns to target VARCHAR columns by default. Because SQL Server VARCHAR columns support only ASCII data, any non-ASCII characters from the source are not stored correctly in the target. Change the default mapping to a target data type such as NVARCHAR(MAX) to avoid the problem and achieve better data consistency.
(October 2025)
DBMI-25720
Database ingestion and replication combined load jobs that have an Oracle source and use the ARCHIVEONLY or ARCHIVECOPY reader mode might fail with the following error:
[CDCPUB_10066] TRACE: [Unexpected error while creating local storage :Internal logic error! CDC-UNLOAD-PEIPROD.F0901 InputEndpoint Helper. Backlog StorageQueue doesn't exist with ID:<identifier>. Wait minutes:5]
When data is read only from Oracle archive logs, the next archive log to consume might not be available before the default check for backlog storage queue creation occurs.
(October 2025)
DBMI-25582
Database ingestion and replication incremental load or combined load jobs that have an Oracle source and connect to an Oracle Automatic Storage Management (ASM) system to read redo logs might fail with the following timeout error:
36558 ORAD Warn Mbr 2: ASM read timed out after 6 seconds
(October 2025)
DBMI-25489
Database ingestion and replication incremental load and combined load jobs that have an Oracle source and use BFILE access to the redo logs might fail with the following error if the database uses a softlink pointer to the archive log directory:
PWX-36075 OCI Error: ORA-22288: file or LOB operation FILEOPEN failed].
… [CDCPUB_10066] TRACE: [Error message[6] soft link in path].
(October 2025)
DBMI-25482
Database ingestion and replication jobs that have an Oracle source and use the CDC_WITH_BACKLOG CDC transitioning method might fail if the Oracle Flashback permissions are missing.
(October 2025)
DBMI-25391
Database ingestion and replication jobs that have an Oracle source might fail if LOB data is incompletely logged or cannot be interpreted and if the data also cannot be fetched directly from the database because the row has been deleted or removed.
(October 2025)
DBMI-25321
For database ingestion and replication jobs that have an Oracle source and any target type, a DML operation from an old cycle might be mistakenly processed during job restart because its table's recovery data wasn't fetched. This behavior might result in a duplicate key error on the target.
(October 2025)
DBMI-25005
Database ingestion and replication combined load jobs that have an Oracle target might fail with the error ORA-00001: unique constraint violated when processing an Update operation if incorrect merge apply logic results in attempts to insert duplicate records into the target.
(October 2025)
DBMI-24965
Database ingestion and replication jobs that have an Oracle source and use a connection with the RAC Members property configured might fail if one of the RAC members is down.
(October 2025)
DBMI-24588
Database ingestion and replication jobs that have an Oracle source might fail without an error if the required SELECT privilege on the gv$transaction view is missing. This fix avoids the job failures by logging an error and continuing processing without the gv$transaction view.
(October 2025)
DBMI-23048, DBMI-21130
If you configure an Oracle Database Ingestion connection to use both the Oracle JDBC Thin driver and SSL encryption, when you try to test the connection, the test fails.
(April 2026)

Known issues

The following table describes known issues:
Issue
Description
DBMI-24444
For database ingestion and replication tasks that have an Oracle source and use the Audit apply mode, LOB data might not be correctly replicated to the target. When non-LOB data is updated, the remaining columns are replicated as NULLs.
Workaround: None.
(May 2025)
DBMI-23360
Database ingestion and replication incremental load or combined load jobs that have an Oracle source with XML columns and an Oracle target might fail with the following Oracle error when processing a DML operation:
[Oracle JDBC Driver][Oracle]ORA-00932: inconsistent datatypes: expected - got CLOB. Error code: 932
Workaround: None.
(April 2025)
DBMI-19145
Database ingestion and replication jobs that use the Log-based CDC method and have an Oracle source with only LOB columns selected and no primary key do not create subtasks on the tables. As a result, the jobs cannot capture change data from the tables. Messages such as the following are written to the log:
[DBMIP_23026] The process [CDC-LOBS] with the thread ID [9708] encountered new table [lob_table_name]. The new table is excluded from processing.
[CDCPUB_10066] TRACE: [PwxCDCRequestProcessor.askSchema() returned: Don't capture.].
Workaround: In the task, select some non-LOB source columns, in addition to the LOB columns, for replication if you want to continue using the table without a primary key.
(July 2024)
DBMI-14767
If a database ingestion and replication job uses an Oracle Database Ingestion source connection that's configured to use the Oracle JDBC Thin driver, the job can replicate up to 39 digits from source columns that have a numeric data type to the target. If a source numeric value has 40 or more digits, the fortieth and all subsequent digits are replaced with zeroes (0) on the target.
Workaround: None.
(September 2023)
DBMI-13605
The Oracle Database Ingestion connection properties page includes no property for entering JDBC connection properties, such as EncryptionLevel, when they're needed.
Workaround: In the Service Name field, you can add JDBC connection properties after the Oracle SID value, using a semicolon (;) as the separator.
(April 2023)
DBMI-10794
Oracle source columns with the TIMESTAMP WITH TIME ZONE data type are supported only for initial load jobs.
Workaround: To enable database ingestion and replication incremental load and combined load jobs to process change data from TIMESTAMP WITH TIME ZONE columns, set the source custom property pwx.cdcreader.oracle.option.additional ENABLETIMSTAMPWITHTZ to Y.
(July 2022)
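The digit-truncation behavior described in DBMI-14767 above can be sketched as follows. This is an illustrative sketch, not Informatica code; `replicate_numeric` is a hypothetical helper that only simulates the documented 39-digit limit.

```python
# Illustrative sketch (not Informatica code): simulate the behavior described
# in DBMI-14767, where digits past the 39th arrive as zeroes on the target.
def replicate_numeric(value: str, max_digits: int = 39) -> str:
    """Keep the first max_digits digits; replace the rest with zeroes."""
    return value[:max_digits] + "0" * max(0, len(value) - max_digits)

ok = "9" * 39
assert replicate_numeric(ok) == ok   # 39 digits pass through intact

long_value = "1234567890" * 4        # 40 digits
assert replicate_numeric(long_value) == long_value[:39] + "0"
```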

PostgreSQL Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-27474
Database ingestion and replication incremental load jobs that have a PostgreSQL source and Snowflake target try to re-create some target tables during redeployment, even if no DDL changes occurred on the corresponding source tables. In this case, HTTP 403 errors appear in the Metadata Manager logs, but the job does not fail as expected. Instead, it continues and drops the target tables.
(April 2026)
DBMI-26959
Database ingestion and replication tasks that have a PostgreSQL source might display error messages if not all sub-partitions are included in the publication.
(April 2026)
DBMI-25323
Database ingestion and replication tasks that have a PostgreSQL source might not correctly process CDC changes for partitioned tables.
(October 2025)

Known issues

The following table describes known issues:
Issue
Description
DBMI-26256
For database ingestion and replication incremental load and combined initial and incremental load tasks that have a PostgreSQL source and use the pgoutput replication plug-in, the generated CDC script does not include partitioned tables in the publication along with the primary tables.
Workaround: Edit the CDC script manually to add the partitions to the publication for any desired partitioned tables and execute the script. The CDC script might need to be altered if adding a table to the publication results in an error.
(October 2025)

SAP HANA Database Ingestion Connector

Known issues

The following table describes known issues:
Issue
Description
DBMI-24971
If a CDC Staging Task is running with associated apply jobs to perform SAP HANA Log-based CDC and a schema drift DDL change occurs on a source table, the schema drift change is ignored with a warning. The apply job that processes the table might fail with the following error, even if the job is restarted:
[DBMIP_23002] The process [CDC-CDC_STAGING InputEndpoint Helper] with the thread ID [HanaLogCollector pwxHanaCDCApi.readNextData() failure. null] received an unexpected error from the PowerExchange Capture Service. Error:

Caused by: java.nio.BufferUnderflowException
Workaround: None.
(July 2025)

Snowflake Data Cloud Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-27721
Database ingestion and replication combined load jobs with a SQL Server source that uses Query-based CDC and a Snowflake target might fail with the following type of error after running for a few hours:
Writer <CDC-UNLOAD-data_store.dbo.employee Async Object Distributor 1> failed to process <END_OF_TABLE_DATA> marker event received for table <data_store.dbo.fx_rate> with stream id <1> as it is not applicable to flush strategy <TimeoutOrRowCount>
(April 2026)
DBMI-27622
Database ingestion and replication jobs that have a MySQL source and Snowflake target might fail with the following error after insert or update operations occur on the Sequence column in the INFORMATICA_CDC_RECOVERY table causing unexpectedly long restart token strings:
[Failed to execute target queries. Error: DML operation to table xxx.INFORMATICA_CDC_RECOVERY failed on column SEQUENCE with error: String '<long_string>' is too long and would be truncated].
(April 2026)
DBMI-27474
Database ingestion and replication incremental load jobs that have a PostgreSQL source and Snowflake target try to re-create some target tables during redeployment, even if no DDL changes occurred on the corresponding source tables. In this case, HTTP 403 errors appear in the Metadata Manager logs, but the job does not fail as expected. Instead, it continues and drops the target tables.
(April 2026)
DBMI-27118
You might encounter out-of-memory issues, particularly when using Snowpipe Streaming (the Superpipe option) for a Snowflake target. Now you can use the Shenandoah garbage collector to avoid out-of-memory issues and promptly release memory back to the operating system if a job fails or is aborted.
(April 2026)
DBMI-26387
Database ingestion and replication apply jobs in a CDC staging group might have degraded performance during purge processing of expired files in storage.
(October 2025)
DBMI-26336
Database ingestion and replication incremental load or combined load jobs that process SQL Server source tables without a primary key might write duplicate records to a Snowflake target that uses the Superpipe option if columns with null values are not properly handled by the Snowflake COALESCE function. As a result, the count of records written to the target might be greater than the count of source records read.
(October 2025)
DBMI-26134
Database ingestion and replication incremental load or combined load jobs with a Snowflake target that uses Snowpipe Streaming (the Superpipe option) might fail to restart after a channel invalidation error.
(April 2026)
DBMI-25601
For database ingestion and replication incremental load and combined load jobs that have a Snowflake target with the Superpipe option set and that use Audit apply mode, if you redeploy the job after some Delete operations are recorded in the Snowflake STREAM table but before merge apply processing has completed, the Delete records are not written to the target.
(October 2025)
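The null-key matching problem behind DBMI-26336 above can be modeled with a short sketch. This is an illustrative sketch, not Informatica or Snowflake code; `match_key` and `SENTINEL` are hypothetical names that only show why null key columns must be coalesced to a placeholder before records are matched, since SQL NULL never compares equal to NULL.

```python
# Illustrative sketch (not Informatica code): without coalescing, null key
# columns never match, so a re-sent record looks new and is written twice.
SENTINEL = "\x00NULL\x00"   # hypothetical placeholder standing in for COALESCE

def match_key(row, key_cols, coalesce=True):
    """Build a comparable key; with coalesce=False, None behaves like SQL NULL."""
    if coalesce:
        return tuple(SENTINEL if row[c] is None else row[c] for c in key_cols)
    # SQL semantics: NULL never equals NULL, so give each null a unique identity.
    return tuple(object() if row[c] is None else row[c] for c in key_cols)

existing = {"id": None, "region": "EU"}
incoming = {"id": None, "region": "EU"}   # same logical record, re-sent

# Without coalescing, the keys never match and the record is duplicated.
assert match_key(existing, ["id", "region"], coalesce=False) != \
       match_key(incoming, ["id", "region"], coalesce=False)

# With coalescing, the duplicate is recognized.
assert match_key(existing, ["id", "region"]) == match_key(incoming, ["id", "region"])
```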