Issue | Description |
---|---|
DBMI-18774 | Database ingestion and replication jobs that have an Amazon Redshift target might fail when creating target tables that have names in mixed case or uppercase. The failure occurs if the Amazon Redshift enable_case_sensitive_identifier cluster configuration parameter is set to false and the Enable Case Transformation target property is not selected for the job. (July 2024) |
Issue | Description |
---|---|
DBMI-11732 | If database ingestion and replication incremental load or combined initial and incremental load jobs replicate LOB source data to an Amazon S3 target and use the CSV format for the target output file, the LOB data appears as empty strings in the target file. (November 2022) |
DBMI-2297 | Although the Amazon S3 connection properties allow users to specify an IAM role, you cannot use temporary security credentials generated by the AssumeRole method of the AWS Security Token Service API to authorize user access to AWS Amazon S3 resources. (April 2020) |
Issue | Description |
---|---|
DBMI-20170 | A database ingestion and replication combined initial and incremental load job that has a Databricks target, contains LOB columns, and has no primary key defined might fail with an error similar to: Statement Builder Helper(214)] ERROR com.informatica.msglogger - [CDCPUB_10066] TRACE: [TableContextCalculator failed with unexpected exception <Index: 68, Size: 68>] (October 2024) |
DBMI-19874 | Database ingestion and replication combined load jobs that have a Databricks target and use the Audit apply mode might fail if the source tables contain decimal columns with a precision greater than 9 digits. (October 2024) |
DBMI-19011 | Database ingestion and replication jobs that have Databricks targets might intermittently fail during apply processing because Databricks fails to retry processing after the following error occurs: [INTERNAL_ERROR] Query could not be scheduled: HTTP Response code: 503. Please try again later. SQLSTATE: XX000 (August 2024) |
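Transient 503 responses of the kind shown in DBMI-19011 are normally handled with client-side retry and exponential backoff. The following self-contained Python sketch illustrates the general pattern only; the function and exception names are hypothetical and are not part of the Databricks client or of Database Ingestion and Replication.

```python
import random
import time


class TransientError(Exception):
    """Stand-in for a retryable failure such as an HTTP 503 response."""


def with_retries(fn, retries=5, base=0.05):
    """Call fn, retrying on TransientError with exponential backoff and jitter."""
    for attempt in range(retries):
        try:
            return fn()
        except TransientError:
            if attempt == retries - 1:
                raise  # out of attempts: surface the error to the caller
            time.sleep(base * (2 ** attempt) + random.uniform(0, base))


# Demo: simulate two 503 failures followed by success.
calls = {"count": 0}

def flaky_query():
    calls["count"] += 1
    if calls["count"] < 3:
        raise TransientError("HTTP Response code: 503. Please try again later.")
    return "ok"

result = with_retries(flaky_query)
```

The backoff base and retry count are illustrative; a production apply layer would also cap the total wait and distinguish retryable from fatal SQLSTATEs.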
Issue | Description |
---|---|
DBMI-20755 | Database ingestion and replication incremental load or combined initial and incremental load jobs that have a Db2 for i source might fail when retrying a connection for a SQL GetLogMinMax request: [CDCPUB_10066] TRACE: [IBMiLogCollector connectionRetryChecksAndWait(), Attempts <1> from caller Info error state <58004>>]. (November 2024) |
DBMI-20754 | Database ingestion and replication incremental load jobs that have a Db2 for i source at version 7.3 or earlier might intermittently fail with the following error because of a code fix in the October release: [CDCPUB_10066] TRACE: [IBMiClient System Version <V7R3M000> does not support IBM Internal Log Reads ] (November 2024) |
DBMI-20427 | When you create a database ingestion and replication task that has a Db2 for i source in the new task configuration wizard, if you add a table selection rule that excludes all tables, the associated job still fetches the metadata for the excluded tables, which degrades job performance. (November 2024) |
DBMI-20396 | When you create a database ingestion and replication task that has a Db2 for i source in the new task configuration wizard, if you add a table selection rule that excludes all tables, the associated job still fetches the metadata for the excluded tables, which degrades job performance. |
DBMI-18233 | Database ingestion and replication incremental load jobs that have a Db2 for i source fail repeatedly before issuing the following error message: [CDCPUB_10066] TRACE: [IBMiLogCollector failed creating SQL IBMiLogCollector createNewLogSQLStatement(), current log start sequence <sequence_number> is greater than ending log sequence <sequence_number> for the journal receiver being processed.] (July 2024) |
DBMI-13489 | Database ingestion and replication incremental load and combined load jobs that have a Db2 for i source need a way to prevent the deletion of source journal receivers during CDC processing. (July 2024) |
Issue | Description |
---|---|
DBMI-20985 | Database ingestion and replication incremental load or combined initial and incremental load jobs that have a Db2 for z/OS source might fail with a log parser error like the following one because of incorrect sequence token checking: Caused by: com.infa.rat.pwxdbmi.pwxcdc.intf.CDCReaderException: zosLogParser run(), Error - zosLogParser processLogRecord(), Sequence number error for data sharing environment, 1536 Current Sequence VRS <002> HCSeq UOWSeq LogSeq <00DFB54159EB15B0A800> Last Sequence VRS <002> HCSeq UOWSeq LogSeq. You can resume the job successfully. (November 2024) |
DBMI-20340, DBMI-20232 | A resync request on a Db2 for z/OS source table might cause database ingestion and replication jobs to fail. The problem occurs because Db2 for z/OS cannot get the end-of-log RBA for the connection retry. (October 2024) |
DBMI-20186 | Database ingestion and replication incremental load or combined load jobs that have the Modify Column schema drift option set to Stop Job fail after the precision of a Db2 for z/OS column with the DECIMAL data type changes as the result of an internal code change rather than a customer-initiated DDL change. (October 2024) |
DBMI-19172 | Database ingestion and replication incremental load or combined load jobs that have Db2 for z/OS sources might fail with the following error message when the source pwx.cdcreader.ZOS.UOWManagerMarkerControl custom property is set to True and the readerLogCaptureProgressMarker and readerMinutesRestartAdvanceMarkers custom properties are set to positive numbers: com.infa.rat.pwxdbmi.pwxcdc.intf.CDCReaderException: Previous sequence <sequence_value> is not less than current sequence <sequence_value> (August 2024) |
DBMI-18742 | Database ingestion and replication incremental load or combined load jobs that have Db2 for z/OS source columns with the LONG VARCHAR data type, which is no longer supported in current Db2 versions, fail with the following error: DB2 z/OS source CDC: Insert, Update, Delete parse failed with <null>. With this fix, database ingestion and replication jobs support the LONG VARCHAR data type. (August 2024) |
DBMI-16129 | When you create a database ingestion and replication incremental load task that has a Db2 for z/OS source in the new task configuration wizard, if you add a table selection rule that excludes all tables, the associated job might fail with a misleading message that reports DATA CAPTURE CHANGES is not enabled on the excluded tables. Workaround: Use Include rules instead. (December 2023) |
Issue | Description |
---|---|
DBMI-19320 | Database ingestion and replication combined initial and incremental load jobs that have a source without a primary key and a Google BigQuery target replicate empty values as nulls when processing Updates, which can cause duplicate values on the target. The Updates are treated as Inserts, so the old data is not deleted before the new data is inserted. (August 2024) |
DBMI-18629 | In a database ingestion and replication task that has an Oracle source and Google BigQuery target, if you add a custom data type mapping that maps the Oracle DATE type to the Google BigQuery TIMESTAMP type, the custom mapping is ignored when you run the job. Instead, the default mapping of Oracle DATE to Google BigQuery DATETIME is used. (July 2024) |
Issue | Description |
---|---|
DBMI-19760 | Database ingestion and replication initial load jobs that have an Oracle source and a Google BigQuery target might fail intermittently with the following error: uploadLocalFile has been failed because of error: Remote host terminated the handshake. Workaround: None. (July 2024) |
Issue | Description |
---|---|
DBMI-11732 | If database ingestion and replication incremental load or combined initial and incremental load jobs replicate LOB source data to a Google Cloud Storage target and use the CSV format for the target output file, the LOB data appears as empty strings in the target file. (November 2022) |
Issue | Description |
---|---|
DBMI-18169 | A database ingestion and replication job that has a Kafka target configured with one-way SSL connection mode does not start because the SSL KeyStore Password connection property is missing, even though that property is not required for this mode. (July 2024) |
Issue | Description |
---|---|
DBMI-19022 | When you configure database ingestion and replication initial load jobs that have an Oracle source and Microsoft Azure Data Lake Storage target, if you select Add Operation Type on the Target page, that setting is cleared when the task is saved. As a result, the infa_operation_type metadata column is not added to the target. (August 2024) |
DBMI-18670 | Database ingestion and replication jobs that use a Microsoft Azure Data Lake Storage Gen2 target connection to connect to an Azure Government cloud fail because the connection uses an incorrect default host name. With this fix, the connection to the Azure Government cloud succeeds provided that you use Azure Service Principal authentication. (July 2024) |
DBMI-18098 | A database ingestion and replication initial load job that has a Microsoft Azure Data Lake Storage Gen2 target does not attempt a retry if the upload to the target fails. (July 2024) |
Issue | Description |
---|---|
DBMI-11732 | If database ingestion and replication incremental load or combined initial and incremental load jobs replicate LOB source data to a Microsoft Azure Data Lake Storage Gen2 target and use the CSV format for the target output file, the LOB data appears as empty strings in the target file. (November 2022) |
Issue | Description |
---|---|
DBMI-21051 | Database ingestion and replication incremental load and combined initial and incremental load jobs that have a SQL Server source and use the CDC Tables capture method apply duplicate rows to the target if multiple Update operations occur in a single commit on LOB columns in a source table that has no primary key. (November 2024) |
DBMI-20877 | Database ingestion and replication combined initial and incremental load jobs that have a SQL Server source fail during incremental processing with the following error: com.infa.rat.task.plugin.api.basic.TaskException: Helper <CDC_COMBINED InputEndpoint Helper> failed with exception: Helper.isDone get() threw PwxSessionException with StopSession() failed. Caused by: Failed to open a socket. Error: Connection refused: connect. The ODBC driver returns an error and connection retries fail with the socket error. (October 2024) |
DBMI-20220 | Database ingestion and replication jobs that use a SQL Server source connection with Windows v2 authentication enabled might fail with a JDBC connection problem. (October 2024) |
Issue | Description |
---|---|
DBMI-20354 | A database ingestion and replication job that has a MongoDB source might fail if a source collection contains a document that refers to another document from a different collection. In this case, the following error can occur: Can't find a codec for class com.mongodb.DBRef (October 2024) |
DBMI-19072 | Database ingestion and replication jobs that have a MongoDB source and Snowflake target insert a backslash (\) into JSON object data while loading it to a VARIANT column on the target. In some cases, users want to use another character in the VARIANT column that better reflects the source data and omit the backslash character. (August 2024) |
Issue | Description |
---|---|
DBMI-21014 | Database ingestion and replication initial load jobs that have a MySQL source fail if the source contains invalid date, datetime, time, or timestamp data that the reader or target writer cannot process. For example, the following data is invalid: “0000-00-00 00:00:00”, “02-31-2024 00:00:00” (November 2024) |
DBMI-20874 | For database ingestion and replication jobs that have a MySQL source, two CDC records recorded at the same time but recorded in two different bin logs may cause the job to fail with the following error: "Previous sequence is not less than current sequence" (November 2024) |
DBMI-19300 | When you configure a database ingestion and replication task that has a MySQL source, the following error might occur on the Source page of the task wizard when the wizard tries to retrieve the list of source tables: Failed to obtain table counts. Error code: b443f108-6ef9-4503-9480-37da1e643bb7. The DBMI agent could not process the request for metadata from org <organization> because of error: HTTP response status code is 400, Cannot find ICU specification of native encoding 'utf16' for vendor 'MYSQL'. This problem occurs when the source uses UTF-16 encoding but the UTF-16 character set is missing from ICU repository for MySQL. (August 2024) |
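As DBMI-21014 notes for MySQL sources, values such as "0000-00-00 00:00:00" cannot be processed by the reader or target writer. Suspect datetime strings can be pre-checked before replication; the following Python helper is a hypothetical sketch, not part of the product, and assumes "YYYY-MM-DD HH:MM:SS" data.

```python
from datetime import datetime


def is_valid_datetime(value, fmt="%Y-%m-%d %H:%M:%S"):
    """Return True if value parses as a real datetime in the given format."""
    try:
        datetime.strptime(value, fmt)
        return True
    except ValueError:
        return False


samples = [
    "0000-00-00 00:00:00",  # MySQL zero date: month 00 / year 0 are invalid
    "2024-02-31 00:00:00",  # February 31 does not exist
    "2024-02-29 12:00:00",  # valid: 2024 is a leap year
]
results = [is_valid_datetime(s) for s in samples]
```

An equivalent check can be run in SQL against the source before deploying the job, so that zero dates are corrected or excluded up front.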
Issue | Description |
---|---|
DBMI-20171 | Database ingestion and replication incremental load or combined load jobs that have an Oracle source might appear to be Up and Running even if they have lost connectivity to the source while reading a redo log sequence. In this case, no data is written to the target. The jobs should issue an appropriate alert and fail. (November 2024) |
DBMI-19568 | If you perform a Resync (retain) operation on a source table in a database ingestion and replication combined load job that has an Oracle source after a column is dropped, the target table is re-created, instead of retaining the table's current schema and resynchronizing the column data as expected. This behavior occurs even though the job's Drop Column schema drift option is set to Ignore. (August 2024) |
DBMI-19315 | In a database ingestion and replication combined load job that includes an Oracle source, a subtask on a source table that includes a TIMESTAMP WITH LOCAL TIME ZONE column fails after you rename that column and perform the Resync (refresh) operation on the table subtask. This problem occurs even though the Rename Column schema drift option is set to Ignore and the Database Ingestion agent service's DBMI_ORACLE_SOURCE_ENABLE_TIMESTAMP_WITH_LOCAL_TZ environment variable is set to true. (August 2024) |
DBMI-19297 | Database ingestion and replication jobs that have a highly encrypted Oracle source in a Real Application Clusters (RAC) Oracle Cloud Infrastructure (OCI) environment might fail with the following error: PWX-36000 ORAD Mbr 2: Internal error Transaction already exists (transaction_ID) in module PwxOrlCtxTMgr:4737 (August 2024) |
DBMI-19126 | Database ingestion and replication tasks that are configured to not include LOBs in capture processing but have source tables with unsupported Oracle BFILE or LOB columns incorrectly add table-level Exclude rules that specify conditions with the unsupported column names on the Source page of the task wizard. (August 2024) |
DBMI-19124 | Database ingestion and replication incremental load or combined load jobs that have an Oracle source might fail intermittently with the following error if the source database uses Create Table As Select (CTAS) statements or direct-path loads with a dblink: Unexpected PWX fatal error: PWX-36475 ORAD: PowerExchange encountered a redo log assembly error in a committed transaction. The CTAS or direct load operations can cause a different pattern of redo logs to be produced. (October 2024) |
DBMI-19022 | When you configure database ingestion and replication initial load jobs that have an Oracle source and Microsoft Azure Data Lake Storage target, if you select Add Operation Type on the Target page, that setting is cleared when the task is saved. As a result, the infa_operation_type metadata column is not added to the target. (August 2024) |
DBMI-18629 | In a database ingestion and replication task that has an Oracle source and Google BigQuery target, if you add a custom data type mapping that maps the Oracle DATE type to the Google BigQuery TIMESTAMP type, the custom mapping is ignored when you run the job. Instead, the default mapping of Oracle DATE to Google BigQuery DATETIME is used. (July 2024) |
DBMI-18230 | In database ingestion and replication tasks that have an Oracle source, selected tables might be automatically deselected in certain situations. (July 2024) |
Issue | Description |
---|---|
DBMI-19760 | Database ingestion and replication initial load jobs that have an Oracle source and a Google BigQuery target might fail intermittently with the following error: uploadLocalFile has been failed because of error: Remote host terminated the handshake. Workaround: None. (July 2024) |
DBMI-19145 | Database ingestion and replication jobs that use the Log-based CDC method and have an Oracle source with only LOB columns selected and no primary key do not create subtasks on the tables. As a result, the jobs cannot capture change data from the tables. Messages such as the following are written to the log: [DBMIP_23026] The process [CDC-LOBS] with the thread ID [9708] encountered new table [lob_table_name]. The new table is excluded from processing. [CDCPUB_10066] TRACE: [PwxCDCRequestProcessor.askSchema() returned: Don't capture.]. Workaround: In the task, select some non-LOB source columns, in addition to the LOB columns, for replication if you want to continue using the table without a primary key. (July 2024) |
DBMI-13605 | The Oracle Database Ingestion connection properties page includes no property for entering JDBC connection properties, such as EncryptionLevel, when they're needed. Workaround: In the Service Name field, you can add JDBC connection properties after the Oracle SID value, using a semicolon (;) as the separator. (April 2023) |
DBMI-12331 | If you create custom data type mappings for Oracle source binary_double and binary_float columns in a database ingestion task, the custom mappings are ignored. Instead, the target table is generated using the default mappings of binary_double > float and binary_float > real. When the database ingestion and replication job runs, nulls are written to the target float and real columns. Workaround: None. (February 2023) |
DBMI-10794 | Oracle source columns with the TIMESTAMP WITH TIME ZONE data type are supported only for initial load jobs. (July 2022) |
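For the DBMI-13605 workaround above, JDBC connection properties are appended to the Service Name field after the Oracle SID, separated by semicolons. A hypothetical example follows; ORCL is a placeholder SID, and the property names shown are Progress DataDirect driver options that should be verified against the driver documentation:

```
ORCL;EncryptionLevel=required;EncryptionTypes=AES256
```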
Issue | Description |
---|---|
DBMI-21041 | Database ingestion and replication jobs that have a PostgreSQL source and use a view as the source object might list NUMERIC data type columns that have a precision and scale as unsupported. (November 2024) |
DBMI-19989 | Database ingestion and replication incremental load and combined load jobs that have a PostgreSQL source might fail with the following error if some source columns have the MONEY data type and the target output uses the Avro format: [CDCPUB_10066] TRACE: [<CDC-DBMICDC>. createInfaAvroRecordForIEvent() failed with unexpected exception while obtaining the Avro schema. Error: <Internal Logic Error - avroDecimalFromString() received a decimal value <-92233720368547758.08> but the scale provided <0> does not match the scale of the value!>]. (October 2024) |
DBMI-19319 | Database ingestion and replication combined initial and incremental load jobs that have a PostgreSQL source table with a primary key do not always recognize the primary key, which can lead to data inconsistencies on the target. (August 2024) |
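The scale mismatch in DBMI-19989 above is easier to see with a concrete value: the MONEY amount in the error message has two digits after the decimal point (scale 2), while the Avro schema declared scale 0. A minimal illustration using Python's standard decimal module:

```python
from decimal import Decimal

# The value quoted in the DBMI-19989 error message.
value = Decimal("-92233720368547758.08")

# For a Decimal, the scale is the negated exponent of its tuple form.
scale = -value.as_tuple().exponent

# An Avro decimal logical type declared with scale 0 cannot represent
# this value without loss, which is why the writer rejects it.
matches_declared_scale = (scale == 0)
```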
Issue | Description |
---|---|
DBMI-15910 | Database ingestion and replication jobs that have a Db2 for i source and Amazon Aurora PostgreSQL target might fail when copying data from the local .csv file to the target table or to the LOG table (for an incremental load) if the default Secure Agent installation directory path contains spaces. This problem is caused by a known issue in the Progress JDBC driver for PostgreSQL. The driver does not preserve spaces in the directory path, so the database ingestion job cannot find the .csv file. Workaround: Configure the Secure Agent installation directory path without spaces. (February 2024) |
Issue | Description |
---|---|
DBMI-19098 | Database ingestion and replication incremental load jobs that have SAP HANA sources might fail if you add a new column to the existing source tables. (August 2024) |
DBMI-18243 | If you deselect the Add Operation Type metadata column for database ingestion and replication incremental load or combined load tasks that have an SAP HANA source and Snowflake target and that use Audit apply mode, the following exception occurs when you try to run the associated job: java.lang.IndexOutOfBoundsException: Index: 139, Size: 139 (July 2024) |
Issue | Description |
---|---|
DBMI-15247 | Database ingestion and replication jobs that have an SAP HANA source table with a multiple-column primary key and that use a custom data-type mapping to map a source TIMESTAMP column to a Snowflake target VARCHAR column replicate data to the target incorrectly, which results in invalid data in the target column. Workaround: None. (November 2023) |
DBMI-14370 | Database ingestion and replication incremental load jobs that have an SAP HANA Cloud source with a DECIMAL column that has no precision fail with the following error message: Error executing query job. Message: Query error: Value of type BIGNUMERIC cannot be assigned to column, which has type INT64 at [number]. Error code: 100032 Workaround: None. (August 2023) |
DBMI-12571 | Database ingestion and replication jobs with an SAP HANA source might replicate data from REAL columns with 16 or more significant digits with a loss of precision on the target, causing data corruption. Workaround: None. (April 2023) |
Issue | Description |
---|---|
DBMI-19072 | Database ingestion and replication jobs that have a MongoDB source and Snowflake target insert a backslash (\) into JSON object data while loading it to a VARIANT column on the target. In some cases, users want to use another character in the VARIANT column that better reflects the source data and omit the backslash character. (August 2024) |
Issue | Description |
---|---|
DBMI-20002 | Database ingestion and replication jobs that have a SQL Server source with IMAGE columns that contain nulls and a SQL Server target might fail if you create a custom data-type mapping that maps the IMAGE data type to the VARBINARY data type. In this case, the following error is issued: [Informatica][SQLServer JDBC Driver][SQLServer]Explicit conversion from data type image to varchar is not allowed. Error code: 529 (October 2024) |
DBMI-19988 | A database ingestion and replication incremental load job that has a SQL Server source and contains a table for which a complete column set is not captured, for example because LOB or computed columns are excluded, might produce corrupted data when an Update operation is performed. The job might continue to run without failing. (October 2024) |
DBMI-19743 | A database ingestion and replication incremental load or combined load job that has a SQL Server target might not roll back the target transaction if the transaction fails to commit a recovery checkpoint. (October 2024) |
DBMI-19630 | Database ingestion and replication initial load jobs that have a SQL Server target and process a large amount of data fail if the target connection drops. The jobs cannot retry the target connection. In this case, you have to restart the jobs from the earliest available position in the log, which is inefficient. (August 2024) |
DBMI-19614 | Database ingestion and replication combined load jobs that have a SQL Server source and SQL Server target fail to clean up temporary data files after the initial load phase completes and the data has been loaded to the target. These data files can consume gigabytes of storage unnecessarily. (August 2024) |
DBMI-19122 | If you configure a SQL Server connection to use the Active Directory Password authentication mode and then use that connection for database ingestion and replication incremental load jobs that connect to an Azure SQL Database source, the jobs use another authentication mode, which causes the connection to fail. (August 2024) |
DBMI-18097 | Database ingestion and replication jobs that have a SQL Server source and SQL Server target might fail with the following error when processing large queries to the database: Caused by: java.sql.SQLSyntaxErrorException: [informatica] [SQLServer JDBC Driver] [SQLServer] There is already an object named '<temp_table_name>_VER' in the database. The problem persists even after the job is undeployed, the tables on the target are dropped, and the job is deployed again. (July 2024) |
Issue | Description |
---|---|
DBMI-11552 | Database ingestion and replication initial load jobs that use the Informatica-supplied Progress DataDirect JDBC driver for SQL Server to connect to a SQL Server source fail. Workaround: Download the Microsoft JDBC Driver for SQL Server. (September 2022) |
DBMI-10272 | Database ingestion and replication incremental load or combined initial and incremental load jobs that have a SQL Server source with binary, decimal, or datetimeoffset columns and an Oracle target fail if a DML update operation is followed by an insert. Workaround: None. (May 2022) |
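For the DBMI-11552 workaround above, after you download the Microsoft JDBC Driver for SQL Server, a typical connection URL has the following shape. The host, port, and database name are placeholders, and the property values should be adjusted to your environment and checked against the Microsoft driver documentation:

```
jdbc:sqlserver://myhost:1433;databaseName=mydb;encrypt=true;trustServerCertificate=false
```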