Connector and connectivity issues

Read the following pages to learn about fixed issues, known limitations, and third-party limitations that apply to the connectors that you can use in your service.

Amazon S3 V2 Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-28771
Database ingestion and replication incremental load or combined load jobs that have an Amazon S3 target might create a very large cache file that consumes excessive on-disk space if a mass update occurs on the source. With this fix, cache handling has been improved to reduce on-disk space consumption in this situation.
(April 2026)

Known issues

The following table describes known issues:
Issue
Description
DBMI-11732
If database ingestion and replication incremental load or combined initial and incremental load jobs replicate LOB source data to an Amazon S3 target and use the CSV format for the target output file, the LOB data appears as empty strings in the target file.
(November 2022)
DBMI-2297
Although the Amazon S3 connection properties allow users to specify an IAM role, you cannot use temporary security credentials generated by the AssumeRole method of the AWS Security Token Service API to authorize user access to Amazon S3 resources.
(April 2020)

Db2 for i Database Ingestion Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-25518
Database ingestion and replication incremental load and combined load jobs that have a Db2 for i source might fail repeatedly with cache container errors after you add a table for CDC processing, even if you resume or redeploy the job.
(October 2025)
DBMI-25516
Database ingestion and replication jobs that have a Db2 for i source might fail when generated key values sent to the cloud cache contain special characters.
(October 2025)
DBMI-25513
Database ingestion and replication jobs that have a Db2 for i source might have degraded performance caused by a large workload on non-transactional Units of Work (UOW).
(October 2025)
DBMI-25497
Database ingestion and replication incremental load and combined load jobs that have a Db2 for z/OS source might encounter degraded performance during CDC processing because of extraneous processing that occurs on every stored procedure call.
(October 2025)
DBMI-25335
When database ingestion and replication incremental load or combined load jobs that have a Db2 for i source are restarted, a different sequence token might be generated for the same record, which can cause the job to write the record twice to the target.
(October 2025)
DBMI-25248
Database ingestion and replication incremental load and combined load jobs that have a Db2 for z/OS source might fail with the following error after a Db2 catalog entry that has no table objects is changed, for example, an index update in the SYSINDEXES catalog:
Cannot invoke "String.equals(Object)" because the return value of "com.infa.rat.pwxdbmi.pwxjava.zos.zosSchemaDriftESD.getOwner()" is null>
In this case, the catalog entry should be ignored so that the job can continue processing.
(October 2025)

Db2 for LUW Database Ingestion Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-28869
Database ingestion and replication incremental load or combined load jobs that have a Db2 for LUW source and use the Log-based CDC method might acquire and hold a persistent lock on the capture catalog table until the job ends.
(April 2026)
DBMI-28605
Database ingestion and replication jobs that have a Db2 for LUW source might remove trailing spaces from fixed-length data type columns, such as CHAR and GRAPHIC, resulting in duplicate values and violations of unique constraints.
(April 2026)
DBMI-28382
For database ingestion and replication jobs that have a Db2 for LUW source and an Oracle target, the deployment process might not incorporate the source character encoding. As a result, Oracle target columns might be created with an incorrect VARCHAR2 size, which causes issues with multibyte characters and can cause the job to fail.
(April 2026)

Known issues

The following table describes known issues:
Issue
Description
DBMI-26309
Database ingestion and replication incremental load and combined load jobs that have a Db2 for LUW source on Linux or Windows and use the Log-based CDC method fail if transient ODBC connection errors, such as network errors or database restarts, occur during communication with Db2.
Workaround: Resume the job.
(October 2025)

Db2 for z/OS Database Ingestion Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-28872
Database ingestion and replication incremental load jobs that have a Db2 for z/OS source might stop capturing change data and hang indefinitely because the log parser consumer thread cannot acquire a lock on a parser queue.
(April 2026)
DBMI-28561, DBMI-28989
Database ingestion and replication tasks that have a Db2 for z/OS source generate log timestamps from the Db2 WLM stored procedure, which might contain calculation errors for sub-second values below 0.1 seconds.
(April 2026)
DBMI-28092
Performance statistics for database ingestion and replication incremental load and combined load jobs might indicate a delay in processing in an environment with many sub-second requests for Db2 log data that return few records. Each request creates a record queue. After processing a queue, the Db2 reader waits 1 second by default for the next queue. With this fix, the wait time is reduced to 100 milliseconds so that log data is retrieved from the queues faster, which reduces latency.
(April 2026)
DBMI-27717
If database ingestion and replication incremental load and combined load jobs that have a Db2 for z/OS source encounter Db2 compensation records in the log, a slowdown in CDC processing might occur. With this fix, the processing of compensation records has been improved.
(April 2026)
DBMI-26380
For database ingestion and replication jobs that have a Db2 for z/OS source, if network connectivity is dropped and then restored, the connection retry attempts fail to restore the log collector connection to the source with the following error:
WARN com.informatica.msglogger - [CDCPUB_10066] TRACE: [zosLogCollector getRBARange(), Log request failed. [informatica][DB2 JDBC Driver]Object has been closed., connection attempts <0>, waiting <10000> ms for retry of error code <0> and error state <HY000>.].
(October 2025)
DBMI-20084
Database ingestion and replication incremental load jobs that have a Db2 for z/OS source might fail if the sequences in the change data read from the source are not in ascending order or have other issues.
(October 2025)

Google BigQuery V2 Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-28558
Database ingestion and replication jobs that have a Google BigQuery target run temporary table creation and merge queries that have significantly higher compute costs when using Soft Deletes apply mode, as compared to the costs when using Standard apply mode. This fix adds the CLUSTER BY clause in the DDL for new tables to optimize the queries run on the target when using Soft Deletes mode. This solution requires the source to have a primary key or unique index.
(April 2026)
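The clustering optimization that this fix introduces can be sketched with a minimal BigQuery DDL example. This is an illustration only; the dataset, table, and column names are hypothetical, and the actual DDL is generated during deployment:

```sql
-- Hypothetical sketch: clustering the target table on the primary key
-- column so that the temporary-table and merge queries that Soft Deletes
-- apply mode runs scan fewer blocks.
CREATE TABLE my_dataset.orders (
  order_id INT64 NOT NULL,
  status STRING,
  updated_at TIMESTAMP
)
CLUSTER BY order_id;
```

Because the clustering columns come from the primary key or unique index, this optimization applies only to sources that have one.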

Google Cloud Storage V2 Connector

Known issues

The following table describes known issues:
Issue
Description
DBMI-11732
If database ingestion and replication incremental load or combined initial and incremental load jobs replicate LOB source data to a Google Cloud Storage target and use the CSV format for the target output file, the LOB data appears as empty strings in the target file.
(November 2022)

Microsoft Azure Data Lake Storage Gen2 Connector

Known issues

The following table describes known issues:
Issue
Description
DBMI-11732
If database ingestion and replication incremental load or combined initial and incremental load jobs replicate LOB source data to a Microsoft Azure Data Lake Storage Gen2 target and use the CSV format for the target output file, the LOB data appears as empty strings in the target file.
(November 2022)

Microsoft Fabric Data Warehouse Connector

Known issues

The following table describes known issues:
Issue
Description
DBMI-28927
In a combined load database ingestion and replication task that uses Audit apply mode, the INFA_OPERATION_TYPE column is not populated during the initial unload phase, which results in NULL values in the operational columns.
(April 2026)
DBMI-27827
If you enable the Add Operation Sequence option for a combined load database ingestion and replication task with the Audit apply mode to load data from Oracle to Microsoft Fabric Data Warehouse, the task fails.
(April 2026)

Microsoft SQL Server Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-28492
Database ingestion and replication incremental and combined load jobs that have a SQL Server source might assign duplicate sequence values to change records, potentially causing inconsistencies or data corruption in the target tables.
(April 2026)
DBMI-26235
Database ingestion and replication jobs that have an Oracle source and a SQL Server target incorrectly map source CLOB columns to target VARCHAR columns by default. Because SQL Server VARCHAR columns support only ASCII data, any non-ASCII characters from the source are not stored correctly in the target. With this fix, the default mapping is changed to a target data type, such as NVARCHAR(MAX), that avoids the problem and achieves better data consistency.
(October 2025)
DBMI-26127
When a deadlock occurs during the FETCH call for a database ingestion and replication job that has a SQL Server source, the job might fail without attempting any retries.
(April 2026)
DBMI-25825
Database ingestion and replication jobs that have an Oracle source and a SQL Server target incorrectly map source VARCHAR2 columns to target VARCHAR columns by default. Because SQL Server VARCHAR columns support only ASCII data, any non-ASCII characters from the source are not stored correctly in the target. With this fix, the default mapping is changed to a target data type, such as NVARCHAR(MAX), that avoids the problem and achieves better data consistency.
(October 2025)
DBMI-25520
Database ingestion and replication incremental load and combined load jobs that have a SQL Server source with XML columns and a SQL Server target might fail with the following error:
Conversion of one or more characters from XML to target collation impossible. Error code: 6355
The error occurs when the job tries to convert the source XML data to the intermediate varchar(max) data type in staging, before writing the data to the target XML column.
(October 2025)

MongoDB Connector

Known issues

The following table describes known issues:
Issue
Description
DBMI-26342
Database ingestion and replication combined load tasks that have a MongoDB source and use the MongoDB Atlas instance might fail.
(October 2025)

Oracle Database Ingestion Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-28939
Database ingestion and replication combined initial and incremental load jobs with an Oracle source that contains CLOB columns with the NOT NULL constraint might fail when processing an insert operation that writes an empty LOB to the table followed by an update operation that populates the LOB column with data. In this situation, the following error is issued:
PWX-36200 ORAD: Column conversion error: [36202] NULL value found for column that is not nullable for Table name=<table_name>, column[25] segment[25] RO_TERMS_AND_CONDITIONS. Detail Type = CLOB, Oracle Type = CLOB.
(April 2026)
DBMI-28382
For database ingestion and replication jobs that have a Db2 for LUW source and an Oracle target, the deployment process might not incorporate the source character encoding. As a result, Oracle target columns might be created with an incorrect VARCHAR2 size, which causes issues with multibyte characters and can cause the job to fail.
(April 2026)
DBMI-25346
Database ingestion and replication incremental load and combined load jobs that have an Oracle source and use the BFILE log access method might report stale buffer files when the files are in an Oracle server file system on Linux. If the default settings for the status check interval and archive wait time are too high, this cache storage problem might not be detected or detection might be delayed.
(April 2026)
DBMI-26235
Database ingestion and replication jobs that have an Oracle source and a SQL Server target incorrectly map source CLOB columns to target VARCHAR columns by default. Because SQL Server VARCHAR columns support only ASCII data, any non-ASCII characters from the source are not stored correctly in the target. With this fix, the default mapping is changed to a target data type, such as NVARCHAR(MAX), that avoids the problem and achieves better data consistency.
(October 2025)
DBMI-26002
Database ingestion and replication incremental load or combined load jobs that have an Oracle source might fail with the following error:
PWX-36465 ORAD Info Mbr 1: DM sequence error: Unsupported operation: multi-block (MBU) sequence in process and not kDE_DML_MULTIBLK
(October 2025)
DBMI-25848
Database ingestion and replication combined load jobs that have a source table with a primary key and an Oracle target might not write all change records to the target if DML operations occur in the backlog of captured changes before change apply processing starts.
(October 2025)
DBMI-25825
Database ingestion and replication jobs that have an Oracle source and a SQL Server target incorrectly map source VARCHAR2 columns to target VARCHAR columns by default. Because SQL Server VARCHAR columns support only ASCII data, any non-ASCII characters from the source are not stored correctly in the target. With this fix, the default mapping is changed to a target data type, such as NVARCHAR(MAX), that avoids the problem and achieves better data consistency.
(October 2025)
DBMI-25720
Database ingestion and replication combined load jobs that have an Oracle source and use the ARCHIVEONLY or ARCHIVECOPY reader mode might fail with the following error:
[CDCPUB_10066] TRACE: [Unexpected error while creating local storage :Internal logic error! CDC-UNLOAD-PEIPROD.F0901 InputEndpoint Helper. Backlog StorageQueue doesn't exist with ID:<identifier>. Wait minutes:5]
When data is read only from Oracle archive logs, the next archive log to consume might not be available before the default check for backlog storage queue creation occurs.
(October 2025)
DBMI-25582
Database ingestion and replication incremental load or combined load jobs that have an Oracle source and connect to an Oracle Automatic Storage Management (ASM) system to read redo logs might fail with the following timeout error:
36558 ORAD Warn Mbr 2: ASM read timed out after 6 seconds
(October 2025)
DBMI-25489
Database ingestion and replication incremental load and combined load jobs that have an Oracle source and use BFILE access to the redo logs might fail with the following error if the database uses a softlink pointer to the archive log directory:
PWX-36075 OCI Error: ORA-22288: file or LOB operation FILEOPEN failed].
… [CDCPUB_10066] TRACE: [Error message[6] soft link in path].
(October 2025)
DBMI-25482
Database ingestion and replication jobs that have an Oracle source and use the CDC_WITH_BACKLOG CDC transitioning method might fail if the Oracle Flashback permissions are missing.
(October 2025)
DBMI-25391
Database ingestion and replication jobs that have an Oracle source might fail if LOB data is incompletely logged or cannot be interpreted and the data also cannot be fetched directly from the database because the row has been deleted or removed.
(October 2025)
DBMI-25321
For database ingestion and replication jobs that have an Oracle source and any target type, a DML operation from an old cycle might be mistakenly processed during a job restart because the recovery data for its table was not fetched. This behavior might result in a duplicate key error on the target.
(October 2025)
DBMI-25005
Database ingestion and replication combined load jobs that have an Oracle target might fail with the error ORA-00001: unique constraint violated when processing an Update operation if incorrect merge apply logic results in attempts to insert duplicate records into the target.
(October 2025)
DBMI-24965
Database ingestion and replication jobs that have an Oracle source and use a connection with the RAC Members property configured might fail if one of the RAC members is down.
(October 2025)
DBMI-24588
Database ingestion and replication jobs that have an Oracle source might fail if the privilege for the GV$TRANSACTION view is missing.
(October 2025)
DBMI-23048, DBMI-21130
If you configure an Oracle Database Ingestion connection to use both the Oracle JDBC Thin driver and SSL encryption, the connection test fails.
(April 2026)

Known issues

The following table describes known issues:
Issue
Description
DBMI-24444
For database ingestion and replication tasks that have an Oracle source and use the Audit apply mode, LOB data might not be correctly replicated to the target. When non-LOB data is updated, the remaining columns are replicated as NULLs.
Workaround: None.
(May 2025)
DBMI-23360
Database ingestion and replication incremental load or combined load jobs that have an Oracle source with XML columns and an Oracle target might fail with the following Oracle error when processing a DML operation:
[Oracle JDBC Driver][Oracle]ORA-00932: inconsistent datatypes: expected - got CLOB. Error code: 932
Workaround: None.
(April 2025)
DBMI-19145
Database ingestion and replication jobs that use the Log-based CDC method and have an Oracle source with only LOB columns selected and no primary key do not create subtasks on the tables. As a result, the jobs cannot capture change data from the tables. Messages such as the following are written to the log:
[DBMIP_23026] The process [CDC-LOBS] with the thread ID [9708] encountered new table [lob_table_name]. The new table is excluded from processing.
[CDCPUB_10066] TRACE: [PwxCDCRequestProcessor.askSchema() returned: Don't capture.].
Workaround: In the task, select some non-LOB source columns, in addition to the LOB columns, for replication if you want to continue using the table without a primary key.
(July 2024)
DBMI-14767
If a database ingestion and replication job uses an Oracle Database Ingestion source connection that is configured to use the Oracle JDBC Thin driver, the job can replicate up to 39 digits from source columns that have a numeric data type to the target. If a source numeric value has 40 or more digits, the fortieth digit and all additional digits are replaced by zeroes (0) on the target.
Workaround: None.
(September 2023)
DBMI-13605
The Oracle Database Ingestion connection properties page includes no property for entering JDBC connection properties, such as EncryptionLevel, when they're needed.
Workaround: In the Service Name field, you can add JDBC connection properties after the Oracle SID value, using a semicolon (;) as the separator.
(April 2023)
DBMI-10794
Oracle source columns with the TIMESTAMP WITH TIME ZONE data type are supported only for initial load jobs.
Workaround: To enable database ingestion and replication incremental load and combined load jobs to process change data from TIMESTAMP WITH TIME ZONE columns, set the source custom property pwx.cdcreader.oracle.option.additional to ENABLETIMESTAMPWITHTZ=Y.
(July 2022)

PostgreSQL Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-26959
Database ingestion and replication tasks that have a PostgreSQL source might display error messages if not all sub-partitions are included in the publication.
(April 2026)
DBMI-25323
Database ingestion and replication tasks that have a PostgreSQL source might not correctly process CDC changes for partitioned tables.
(October 2025)

Known issues

The following table describes known issues:
Issue
Description
DBMI-26256
For database ingestion and replication incremental load and combined initial and incremental load tasks that have a PostgreSQL source and use the pgoutput replication plug-in, the generated CDC script does not include partitioned tables in the publication along with the primary tables.
Workaround: Edit the CDC script manually to add the partitions of any desired partitioned tables to the publication, and then execute the script. You might need to alter the CDC script further if adding a table to the publication results in an error.
(October 2025)
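As a sketch of this workaround, the publication that the generated CDC script creates can be extended with ALTER PUBLICATION statements. The publication, schema, and table names below are hypothetical:

```sql
-- Hypothetical sketch: add a partitioned parent table to the publication
-- that the generated CDC script created for the pgoutput plug-in.
ALTER PUBLICATION infa_publication ADD TABLE sales.orders;

-- If adding the parent table returns an error, add the individual
-- partitions instead.
ALTER PUBLICATION infa_publication
  ADD TABLE sales.orders_2025q1, sales.orders_2025q2;
```

Run the edited script with a database role that owns the publication or has sufficient privileges to alter it.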

SAP HANA Database Ingestion Connector

Known issues

The following table describes known issues:
Issue
Description
DBMI-24971
If a CDC Staging Task is running with associated apply jobs to perform SAP HANA Log-based CDC and a schema drift DDL change occurs on a source table, the schema drift change is ignored with a warning. The apply job that processes the table might fail with the following error, even if the job is restarted:
[DBMIP_23002] The process [CDC-CDC_STAGING InputEndpoint Helper] with the thread ID [HanaLogCollector pwxHanaCDCApi.readNextData() failure. null] received an unexpected error from the PowerExchange Capture Service. Error:

Caused by: java.nio.BufferUnderflowException
Workaround: None.
(July 2025)

Snowflake Data Cloud Connector

Fixed issues

The following table describes fixed issues:
Issue
Description
DBMI-26387
Database ingestion and replication apply jobs in a CDC staging group might have degraded performance during purge processing of expired files in storage.
(October 2025)
DBMI-26336
Database ingestion and replication incremental load or combined load jobs that process SQL Server source tables without a primary key might write duplicate records to a Snowflake target that uses the Superpipe option if columns with null values are not properly handled by the Snowflake COALESCE function. As a result, the count of records written to the target might be greater than the count of source records read.
(October 2025)
DBMI-25601
For database ingestion and replication incremental load and combined load jobs that have a Snowflake target with the Superpipe option set and that use Audit apply mode, if you redeploy the job after some Delete operations are recorded in the Snowflake STREAM table but before merge apply processing has completed, the Delete records are not written to the target.
(October 2025)