Issue | Description |
|---|---|
DBMI-11732 | If database ingestion and replication incremental load or combined initial and incremental load jobs replicate LOB source data to an Amazon S3 target and use the CSV format for the target output file, the LOB data appears as empty strings in the target file. (November 2022) |
DBMI-2297 | Although the Amazon S3 connection properties allow users to specify an IAM role, you cannot use temporary security credentials generated by the AssumeRole method of the AWS Security Token Service API to authorize user access to AWS Amazon S3 resources. (April 2020) |
Issue | Description |
|---|---|
DBMI-25518 | Database ingestion and replication incremental load and combined load jobs that have a Db2 for i source might fail repeatedly with cache container errors after you add a table for CDC processing, even if you resume or redeploy the job. (October 2025) |
DBMI-25516 | Database ingestion and replication jobs that have a Db2 for i source might fail when generated key values sent to the cloud cache contain special characters. (October 2025) |
DBMI-25513 | Database ingestion and replication jobs that have a Db2 for i source might have degraded performance caused by a large workload on non-transactional Units of Work (UOW). (October 2025) |
DBMI-25497 | Database ingestion and replication incremental load and combined load jobs that have a Db2 for z/OS source might encounter degraded performance during CDC processing because of extraneous processing that occurs on every stored procedure call. (October 2025) |
DBMI-25335 | When database ingestion and replication incremental load or combined load jobs that have a Db2 for i source are restarted, a different sequence token might be generated for the same record, which can cause the job to write the record twice to the target. (October 2025) |
DBMI-25248 | Database ingestion and replication incremental load and combined load jobs that have a Db2 for z/OS source might fail with the following error after a Db2 catalog entry that has no table objects is changed, for example, an index update in the SYSINDEXES catalog: Cannot invoke "String.equals(Object)" because the return value of "com.infa.rat.pwxdbmi.pwxjava.zos.zosSchemaDriftESD.getOwner()" is null. In this case, the catalog entry should be ignored so that the job can continue processing. (October 2025) |
DBMI-23703 | If database ingestion and replication jobs read data from a Db2 for i source physical file that’s defined in DDS with a unique index but no primary key, the unique index won’t be used when generating the corresponding target object. After this fix, if no primary key is available, unique indexes are fetched from the SYSPARTITIONINDEXES view and used to update the schema used for target generation. (May 2025) |
DBMI-23291 | For database ingestion and replication jobs that have a Db2 for i source, restart advance might not occur. (May 2025) |
Issue | Description |
|---|---|
DBMI-26309 | Database ingestion and replication incremental load and combined load jobs that have a Db2 for LUW source on Linux or Windows and use the Log-based CDC method fail if transient ODBC connection errors, such as network errors or database restarts, occur during communication with Db2. Workaround: Resume the job. (October 2025) |
Issue | Description |
|---|---|
DBMI-26380 | For database ingestion and replication jobs that have a Db2 for z/OS source, if network connectivity is dropped and then restored, the connection retry attempts fail to restore the log collector connection to the source with the following error: WARN com.informatica.msglogger - [CDCPUB_10066] TRACE: [zosLogCollector getRBARange(), Log request failed. [informatica][DB2 JDBC Driver]Object has been closed., connection attempts <0>, waiting <10000> ms for retry of error code <0> and error state <HY000>.]. (October 2025) |
DBMI-20084 | Database ingestion and replication incremental load jobs that have a Db2 for z/OS source might fail if the sequences in the change data read from the source are not in ascending order or have other issues. (October 2025) |
Issue | Description |
|---|---|
DBMI-11732 | If database ingestion and replication incremental load or combined initial and incremental load jobs replicate LOB source data to a Google Cloud Storage target and use the CSV format for the target output file, the LOB data appears as empty strings in the target file. (November 2022) |
Issue | Description |
|---|---|
DBMI-11732 | If database ingestion and replication incremental load or combined initial and incremental load jobs replicate LOB source data to a Microsoft Azure Data Lake Storage Gen2 target and use the CSV format for the target output file, the LOB data appears as empty strings in the target file. (November 2022) |
Issue | Description |
|---|---|
DBMI-26235 | Database ingestion and replication jobs that have an Oracle source and SQL Server target incorrectly map source CLOB columns to target VARCHAR columns by default. Because SQL Server VARCHAR columns support only ASCII data, any non-ASCII characters from the source are not stored correctly in the target. The default mapping has been changed to a target data type, such as NVARCHAR(MAX), that avoids the problem and improves data consistency. (October 2025) |
DBMI-25825 | Database ingestion and replication jobs that have an Oracle source and SQL Server target incorrectly map source VARCHAR2 columns to target VARCHAR columns by default. Because SQL Server VARCHAR columns support only ASCII data, any non-ASCII characters from the source are not stored correctly in the target. The default mapping has been changed to a target data type, such as NVARCHAR(MAX), that avoids the problem and improves data consistency. (October 2025) |
DBMI-25520 | Database ingestion and replication incremental load and combined load jobs that have a SQL Server source with XML columns and a SQL Server target might fail with the following error: Conversion of one or more characters from XML to target collation impossible. Error code: 6355. The error occurs when the job tries to convert the source XML data to the intermediate varchar(max) data type in staging, before writing the data to the target XML column. (October 2025) |
DBMI-24657 | For database ingestion and replication incremental load or combined load jobs that have a SQL Server source, if an Update operation writes a non-null value of more than 100 characters to a source TEXT column, that data is truncated in the corresponding SQL Server target table. However, the target log tables contain the full, non-truncated data value. (July 2025) |
DBMI-24544 | After database ingestion and replication combined load jobs that have a SQL Server source and Oracle target run using Audit apply mode, if you add a source column and then perform a Resync (refresh) operation, the job fails. The problem occurs because the Add Column schema drift operation adds the column at the end of the target table. However, the Resync (refresh) operation causes the added column to appear before the audit columns in the source schema. This discrepancy leads to a mismatch between the order of the target columns and the data received. (July 2025) |
DBMI-24125 | Database ingestion and replication incremental load or combined load jobs that have a SQL Server source and use the Log-based CDC method might incorrectly process an update not involving the primary key as a pair of delete and insert operations that have the same sequence value. After restarting the job, the insert operation might have a different sequence value. This discrepancy occurs because the restart temporarily switches to CDC Tables mode, which doesn’t handle the update in the same manner as Log-based mode. (May 2025) |
DBMI-23957 | Database ingestion and replication jobs that have a Microsoft SQL Server source and process an update in a primary key and a non-primary key column in a single transaction might produce an incorrect timestamp in the DATETIME column on the Microsoft SQL Server target. (May 2025) |
DBMI-23610 | Database ingestion and replication jobs that have a Microsoft SQL Server source might fail to read the CDC capture instance tables because of a deadlock error. (May 2025) |
DBMI-23467 | A database ingestion and replication job that has a Microsoft SQL Server source and is configured to pause between reconnection attempts might retry the connection to the database without waiting for the requested delay. (May 2025) |
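The CDC capture instance tables referenced in the entries above are standard SQL Server CDC metadata objects. As an illustration only, the capture instances configured on the source database, along with their change tables and captured columns, can be listed with the built-in CDC procedure (assumes CDC is enabled on the database):

```sql
-- List all CDC capture instances, their change tables, and captured
-- columns on the current source database (built-in SQL Server CDC proc).
EXEC sys.sp_cdc_help_change_data_capture;
```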
Issue | Description |
|---|---|
DBMI-26342 | Database ingestion and replication combined load tasks that have a MongoDB source and use a MongoDB Atlas instance might fail. (October 2025) |
Issue | Description |
|---|---|
DBMI-26235 | Database ingestion and replication jobs that have an Oracle source and SQL Server target incorrectly map source CLOB columns to target VARCHAR columns by default. Because SQL Server VARCHAR columns support only ASCII data, any non-ASCII characters from the source are not stored correctly in the target. The default mapping has been changed to a target data type, such as NVARCHAR(MAX), that avoids the problem and improves data consistency. (October 2025) |
DBMI-26002 | Database ingestion and replication incremental load or combined load jobs that have an Oracle source might fail with the following error: PWX-36465 ORAD Info Mbr 1: DM sequence error: Unsupported operation: multi-block (MBU) sequence in process and not kDE_DML_MULTIBLK (October 2025) |
DBMI-25848 | Database ingestion and replication combined load jobs that have a source table with a primary key and an Oracle target might not write all change records to the target if DML operations occur in the backlog of captured changes before change apply processing starts. (October 2025) |
DBMI-25825 | Database ingestion and replication jobs that have an Oracle source and SQL Server target incorrectly map source VARCHAR2 columns to target VARCHAR columns by default. Because SQL Server VARCHAR columns support only ASCII data, any non-ASCII characters from the source are not stored correctly in the target. The default mapping has been changed to a target data type, such as NVARCHAR(MAX), that avoids the problem and improves data consistency. (October 2025) |
DBMI-25720 | Database ingestion and replication combined load jobs that have an Oracle source and use the ARCHIVEONLY or ARCHIVECOPY reader mode might fail with the following error: [CDCPUB_10066] TRACE: [Unexpected error while creating local storage :Internal logic error! CDC-UNLOAD-PEIPROD.F0901 InputEndpoint Helper. Backlog StorageQueue doesn't exist with ID:<identifier>. Wait minutes:5] When data is read only from Oracle archive logs, the next archive log to consume might not be available before the default check for backlog storage queue creation occurs. (October 2025) |
DBMI-25582 | Database ingestion and replication incremental load or combined load jobs that have an Oracle source and connect to an Oracle Automatic Storage Management (ASM) system to read redo logs might fail with the following timeout error: 36558 ORAD Warn Mbr 2: ASM read timed out after 6 seconds (October 2025) |
DBMI-25489 | Database ingestion and replication incremental load and combined load jobs that have an Oracle source and use BFILE access to the redo logs might fail with the following error if the database uses a softlink pointer to the archive log directory: PWX-36075 OCI Error: ORA-22288: file or LOB operation FILEOPEN failed]. … [CDCPUB_10066] TRACE: [Error message[6] soft link in path]. (October 2025) |
DBMI-25482 | Database ingestion and replication jobs that have an Oracle source and use the CDC_WITH_BACKLOG CDC transitioning method might fail if the Oracle Flashback permissions are missing. (October 2025) |
DBMI-25391 | Database ingestion and replication jobs that have an Oracle source might fail if LOB data is incompletely logged or cannot be interpreted and if the data also cannot be fetched directly from the database because the row has been deleted or removed. (October 2025) |
DBMI-25321 | For database ingestion and replication jobs that have an Oracle source and any target type, a DML operation from an old cycle might be mistakenly processed during job restart because its table's recovery data wasn't fetched. This behavior might result in a duplicate key error on the target. (October 2025) |
DBMI-25005 | Database ingestion and replication combined load jobs that have an Oracle target might fail with the error ORA-00001: unique constraint violated when processing an Update operation if incorrect merge apply logic results in attempts to insert duplicate records to the target. (October 2025) |
DBMI-24965 | Database ingestion and replication jobs that have an Oracle source and use a connection with the RAC Members property configured might fail if one of the RAC members is down. (October 2025) |
DBMI-24820 | Database ingestion and replication combined load jobs that have an Oracle source and Oracle target might fail with the following error when processing an Update operation if incorrect merge apply logic results in attempts to insert duplicate records to the target: ORA-00001: unique constraint violated (July 2025) |
DBMI-24786 | Database ingestion and replication incremental load jobs that have an Oracle source and Oracle target might fail with the following error because of incorrect merge apply processing: ORA-30926: unable to get a stable set of rows in the source tables. Error code: 30926. (July 2025) |
DBMI-24710 | Database ingestion and replication jobs that have an Oracle source with compressed SECUREFILE LOB columns can write incorrect data to the target. (July 2025) |
DBMI-24588 | Database ingestion and replication jobs that have an Oracle source might fail if the privilege for the GV$TRANSACTION view is missing. (October 2025) |
DBMI-24562 | Database ingestion and replication incremental load jobs fail with the following error when an Oracle database session that is updating tables with LOB columns is killed and an unknown type of backout operation occurs: PWX-36046 ORAD Mbr 2: Log parser found unexpected error. non-KDO_XTYPE_XR pCVRedo for DML Rollback [PwxOrlRrpRedoEntryParser:1962]. (July 2025) |
DBMI-24183 | After the April 2025 release, database ingestion and replication initial load jobs with Oracle sources might fail with the following ODBC driver error: Caused by: java.lang.ExceptionInInitializerError: Exception java.lang.SecurityException: sealing violation: package oracle.jdbc is sealed [in thread "CDC-UNLOAD-CNS_CUST.CIS_OE_ORDER_HEADERS_DTLS Object Distributor (632)"] The problem occurs because the February 2025 release required the odbc8.jar file to be in the <Secure Agent>/apps/Database_Ingestion/ext directory. However, in the April 2025 release, the driver isn’t expected at that location. To prevent the error, the driver has been removed from the ext directory. (May 2025) |
DBMI-24177 | Database ingestion and replication jobs that have an Oracle source might produce single-digit values in some fields on the target, whereas the corresponding source fields contain multiple digits. This discrepancy is caused by an internal error, which results in random truncation of character strings. (May 2025) |
DBMI-24059 | After the April 2025 release, database ingestion and replication incremental load or combined load jobs that have an Oracle source without a primary key and a Snowflake target write duplicate rows to the target when processing an Update operation. (May 2025) |
DBMI-24113 | A database ingestion and replication job that has an Oracle source running on AIX, HP-UX, or Solaris platforms might fail to convert LOB data that was set to NULL on updates to the correct endian data format. (May 2025) |
DBMI-23758 | If database ingestion and replication initial load jobs that have an Oracle source with XMLTYPE columns use the Oracle JDBC Thin driver, as configured in the Oracle Database Ingestion connection, the jobs might write nulls to the target. (July 2025) |
DBMI-23727 | A database ingestion and replication incremental load or combined load job that has an Oracle source might fail with the following message: PWX-36196 ORAD: No logs with Start SCN 5944410182514 and resetlogs Id 855675717 are available for THREAD# 1. (May 2025) |
DBMI-23704 | After a switch to Daylight Saving Time (DST), running database ingestion and replication jobs that have an Oracle source do not detect the time zone offset change on the database server and consequently display capture progress messages that report an incorrect capture time. (July 2025) |
DBMI-23611 | Database ingestion and replication incremental load or combined load jobs that have an Oracle source fail if they can’t parse redo log records that don’t have a transaction ID (XID) when processing DML changes. (May 2025) |
DBMI-23299 | Database ingestion and replication incremental load and combined load jobs with Oracle sources use the Oracle V$TRANSACTION view to check for open transactions. However, this view provides results for one active RAC node, not all RAC nodes in the cluster. The global GV$TRANSACTION view should be used instead. (May 2025) |
DBMI-22984 | If database ingestion and replication combined load jobs that have an Oracle source and use the ARCHIVEONLY reader mode encounter a long-running open transaction with a begin SCN that is older than the archive log start position, data loss might occur. (July 2025) |
DBMI-22685 | A database ingestion combined load job that has an Oracle source and uses the query-based CDC method might fail if you do not have the EXECUTE ON DBMS_FLASHBACK permission. (July 2025) |
DBMI-20142 | When the source custom property transitionSchedulerType is set to CDC_WITH_BACKLOG, the EXECUTE ON DBMS_FLASHBACK privilege should not be required for combined load jobs with Oracle sources. Any change data that accumulates in the backlog queue is applied by using fuzzy logic instead of a FLASHBACK query that uses a SELECT AS OF scn statement. However, deployment of the task fails with an error stating the FLASHBACK privilege is required. (May 2025) |
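Several of the Oracle entries above (DBMI-22685, DBMI-25482, DBMI-20142) concern the Flashback mechanism used when transitioning from initial load to CDC. A minimal sketch of what is involved, with a hypothetical user name, hypothetical table name, and an example SCN value:

```sql
-- Privilege required when the Flashback-based transition is used
-- (per DBMI-20142, intended to be unnecessary with CDC_WITH_BACKLOG).
GRANT EXECUTE ON DBMS_FLASHBACK TO cdc_user;

-- A flashback query of the SELECT ... AS OF SCN form: reads the table
-- as of a specific system change number. Names and SCN are placeholders.
SELECT * FROM app_schema.orders AS OF SCN 5944410182514;
```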
Issue | Description |
|---|---|
DBMI-24444 | For database ingestion and replication tasks that have an Oracle source and use the Audit apply mode, LOB data might not be correctly replicated to the target. When non-LOB data is updated, the remaining columns are replicated as NULLs. Workaround: None. (May 2025) |
DBMI-23360 | Database ingestion and replication incremental load or combined load jobs that have an Oracle source with XML columns and an Oracle target might fail with the following Oracle error when processing a DML operation: [Oracle JDBC Driver][Oracle]ORA-00932: inconsistent datatypes: expected - got CLOB. Error code: 932 Workaround: None. (April 2025) |
DBMI-21130 | If you configure an Oracle Database Ingestion connection to use both the Oracle JDBC Thin driver and SSL encryption, when you try to test the connection, the test fails. Workaround: Do not use the JDBC Thin driver with SSL encryption. (April 2025) |
DBMI-19145 | Database ingestion and replication jobs that use the Log-based CDC method and have an Oracle source with only LOB columns selected and no primary key do not create subtasks on the tables. As a result, the jobs cannot capture change data from the tables. Messages such as the following are written to the log: [DBMIP_23026] The process [CDC-LOBS] with the thread ID [9708] encountered new table [lob_table_name]. The new table is excluded from processing. [CDCPUB_10066] TRACE: [PwxCDCRequestProcessor.askSchema() returned: Don't capture.]. Workaround: In the task, select some non-LOB source columns, in addition to the LOB columns, for replication if you want to continue using the table without a primary key. (July 2024) |
DBMI-14767 | If a database ingestion and replication job uses an Oracle Database Ingestion source connection that's configured to use the Oracle JDBC Thin driver, the job can replicate up to 39 digits from source columns that have a numeric data type to the target. If a source numeric value has 40 or more digits, the fortieth digit and all additional digits are replaced by zeroes (0) on the target. Workaround: None. (September 2023) |
DBMI-13605 | The Oracle Database Ingestion connection properties page includes no property for entering JDBC connection properties, such as EncryptionLevel, when they're needed. Workaround: In the Service Name field, you can add JDBC connection properties after the Oracle SID value, using a semicolon (;) as the separator. (April 2023) |
DBMI-10794 | Oracle source columns with the TIMESTAMP WITH TIME ZONE data type are supported only for initial load jobs. Workaround: To enable database ingestion and replication incremental load and combined load jobs to process change data from TIMESTAMP WITH TIME ZONE columns, set the source custom property pwx.cdcreader.oracle.option.additional ENABLETIMSTAMPWITHTZ to Y. (July 2022) |
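As an illustration of the DBMI-13605 workaround, a Service Name field value that appends a JDBC connection property after the Oracle SID, separated by a semicolon, might look like the following (the SID and the property value are placeholders, not values from your environment):

```
ORCL;EncryptionLevel=requested
```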
Issue | Description |
|---|---|
DBMI-25323 | Database ingestion and replication tasks that have a PostgreSQL source might not correctly process CDC changes for partitioned tables. (October 2025) |
Issue | Description |
|---|---|
DBMI-26256 | For database ingestion and replication incremental load and combined initial and incremental load tasks that have a PostgreSQL source and use the pgoutput replication plug-in, the generated CDC script does not include partitioned tables in the publication along with the primary tables. Workaround: Edit the CDC script manually to add the partitions to the publication for any desired partitioned tables and execute the script. The CDC script might need to be altered if adding a table to the publication results in an error. (October 2025) |
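For DBMI-26256, the manual edit amounts to adding the partitions (and the partitioned parent table, if it is missing) to the publication that the generated CDC script creates. A hedged sketch, with hypothetical publication, table, and partition names:

```sql
-- Add the partitioned parent table and each of its partitions to the
-- publication created by the generated CDC script. All names below
-- are placeholders.
ALTER PUBLICATION dbmi_pub ADD TABLE public.orders;
ALTER PUBLICATION dbmi_pub ADD TABLE public.orders_2025_q1;
ALTER PUBLICATION dbmi_pub ADD TABLE public.orders_2025_q2;
```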
Issue | Description |
|---|---|
DBMI-24971 | If a CDC Staging Task is running with associated apply jobs to perform SAP HANA Log-based CDC and a schema drift DDL change occurs on a source table, the schema drift change is ignored with a warning. The apply job that processes the table might fail with the following error, even if the job is restarted: [DBMIP_23002] The process [CDC-CDC_STAGING InputEndpoint Helper] with the thread ID [HanaLogCollector pwxHanaCDCApi.readNextData() failure. null] received an unexpected error from the PowerExchange Capture Service. Error: … Caused by: java.nio.BufferUnderflowException Workaround: None. (July 2025) |
Issue | Description |
|---|---|
DBMI-26387 | Database ingestion and replication apply jobs in a CDC staging group might have degraded performance during purge processing of expired files in storage. (October 2025) |
DBMI-26336 | Database ingestion and replication incremental load or combined load jobs that process SQL Server source tables without a primary key might write duplicate records to a Snowflake target that uses the Superpipe option if columns with null values are not properly handled by the Snowflake COALESCE function. As a result, the count of records written to the target might be greater than the count of source records read. (October 2025) |
DBMI-25601 | For database ingestion and replication incremental load and combined load jobs that have a Snowflake target with the Superpipe option set and that use Audit apply mode, if you redeploy the job after some Delete operations are recorded in the Snowflake STREAM table but before merge apply processing has completed, the Delete records are not written to the target. (October 2025) |
DBMI-24531 | Database ingestion and replication combined load jobs that have a Snowflake target fail with the following error if some Snowflake tables that were previously included in the job are removed: [CDCPUB_10066] TRACE: [Error occurred while creating the channel. Exception: java.lang.UnsupportedOperationException: net.snowflake.ingest.utils.SFException: Open channel request failed: HTTP Status: 400 ErrorBody:{ "status_code" : 4, "message" : "The supplied table does not exist or is not authorized." } The problem occurs because the job still tries to initialize streaming channels and fetch metadata for the removed tables in Snowflake. (July 2025) |
DBMI-24059 | After the April 2025 release, database ingestion and replication incremental load or combined load jobs that have an Oracle source without a primary key and a Snowflake target write duplicate rows to the target when processing an Update operation. (May 2025) |
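The null-handling gap described in DBMI-26336 arises because, without a primary key, every column participates in the merge join, so NULLs must be made to compare as equal. A hedged sketch of the kind of Snowflake MERGE condition involved (table name, column names, and the sentinel value are placeholders, and string columns are assumed):

```sql
-- Without a primary key, all columns form the join key; wrapping each
-- side in COALESCE lets NULLs match NULLs. If a nullable column is not
-- wrapped, the same source row can fail to match and be inserted again,
-- producing the duplicate target records described above.
MERGE INTO tgt USING stg
  ON  COALESCE(tgt.col1, '') = COALESCE(stg.col1, '')
  AND COALESCE(tgt.col2, '') = COALESCE(stg.col2, '')
WHEN MATCHED THEN UPDATE SET tgt.col2 = stg.col2
WHEN NOT MATCHED THEN INSERT (col1, col2) VALUES (stg.col1, stg.col2);
```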
Issue | Description |
|---|---|
DBMI-23323 | When configuring a database ingestion and replication job that has a Teradata source, if you attempt to fetch the entire schema either by applying rules or downloading include or exclude rules, an error might occur if any views have inaccessible definitions. (May 2025) |