Issue | Description |
---|---|
DBMI-22285 | After you run a database ingestion and replication task that is part of a CDC staging group, if you add another combined load task to the staging group, deploy it, and then redeploy its job, the first job might fail with the following error: Cannot invoke "java.util.concurrent.ThreadPoolExecutor.awaitTermination(long, java.util.concurrent.TimeUnit)" (see the sketch after this table). The problem occurs when the first job is processing a large amount of data. Note: The CDC staging group feature is available in the new task configuration wizard only. |
DBMI-22172 | If database ingestion and replication combined load jobs in a CDC staging group have not yet written newly captured change data to the staging area, backlog apply processing of change data might be incomplete, causing duplicate inserts on the target. |
DBMI-21877 | A database ingestion and replication combined initial and incremental load job that uses the Audit mode might generate before-image records when CDC data exists during the backlog phase of the task, even though the Add Before Images option is not selected in the advanced properties on the Target page of the task wizard. The job might fail with the following error: The given row cannot be converted to the internal format due to invalid value |
DBMI-21521 | If you define a CDC staging group in the new task configuration wizard, the CDC staging task cannot run on a Windows agent. |
DBMI-21485 | If you define a CDC staging task in the new task configuration wizard and then try to export the task, the export operation fails. |
DBMI-21427 | In Operational Insights, if you select CDC Staging as the Job Type filter, the page is blank even if CDC staging tasks exist. |
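
The error text in DBMI-22285 follows the JVM's helpful NullPointerException format (JEP 358, enabled by default since JDK 15), which names the method that could not be invoked on a null reference. The following Java sketch only illustrates how a message of that form is produced; the class and field names are hypothetical and do not represent Informatica code.

```java
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Hypothetical reproduction of the DBMI-22285 error text. This is not
// Informatica code; it only shows how the JVM reports
// Cannot invoke "java.util.concurrent.ThreadPoolExecutor.awaitTermination(long, java.util.concurrent.TimeUnit)"
// when awaitTermination() is called on a reference that is still null.
public class AwaitTerminationNpeDemo {

    // Assumed stand-in for a worker pool that was never created or was
    // already cleared before shutdown handling ran.
    private static ThreadPoolExecutor workerPool = null;

    public static void main(String[] args) throws InterruptedException {
        // Invoking a method on the null field throws a NullPointerException
        // whose helpful message names the method that could not be invoked,
        // matching the error reported for the first job in the staging group.
        workerPool.awaitTermination(60, TimeUnit.SECONDS);
    }
}
```
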
Issue | Description |
---|---|
DBMI-21703 | Database ingestion and replication incremental load jobs that have a Db2 for i source might fail if there are two journal receivers in the Attached state in the journal receiver list or if there are no Attached journal receivers in the list. One of the following error messages might be issued: [CDCPUB_10066] TRACE: [IBMiLogCollector getLogMinMax(), Receiver <OMS85N9515> is not the last Journal Receiver in the list. No query needed.]. [CDCPUB_10066] TRACE: [PwxCDCReaderRunHandler encountered error :IBMiLogCollector newEndOfLogChecks() exception, IBMiLogCollector getJournalReceiverInfoByOption(), did not find journal information for receiver name .. Caused by: IBMiLogCollector getJournalReceiverInfoByOption(), did not find journal information for receiver name .]. |
DBMI-21124 | After you upgrade to a new release, if the Secure Agent is running multiple database ingestion and replication jobs, a deadlock can occur. In this case, the jobs appear to run for hours without moving any data. |
Issue | Description |
---|---|
DBMI-20355 | If you previously deployed a database ingestion and replication job in the Audit or Soft Deletes apply mode, undeployed the job, and then redeployed it after changing to the Standard apply mode, the metadata columns created for the Audit or Soft Deletes apply mode are retained in the target tables. The target tables no longer receive changes for these columns and the job might fail. |
DBMI-20230 | If a database ingestion and replication job includes a source table without a primary key and you set the custom property readerUseUniqueIndexAsPrimaryKey to true on the Source page of the task wizard, the job might erroneously consider all indexes for primary key calculation, instead of unique indexes only. |