The following list describes guidelines and considerations for using Salesforce Data 360 targets:
• Due to a third-party limitation, the Salesforce Data 360 Connector cannot perform read and write operations on the same Data Lake Object (DLO). To work around this limitation, manually create a recovery table in the target and specify the recovery table name in the writerRecoveryTableNameAlias custom property before you run the task.
Perform the following tasks to create and configure the recovery table:
1. Create a schema file named INFORMATICA_CDC_RECOVERY.yml with the following content:
▪ Select this newly created formula field as the primary key.
7. Select the data space configured for your task and deploy the data stream.
8. After deployment, open the data stream and copy the Object API Name of the recovery table. You will need this value in the next step.
9. When you configure a database ingestion and replication task, enter the copied Object API Name in the writerRecoveryTableNameAlias custom property on the Destination page before you run the task.
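The schema file created in step 1 is not reproduced above. Salesforce Data 360 data streams accept schema files in OpenAPI 3.0 YAML format, so a file of this kind would follow the shape sketched below; the field names shown are hypothetical placeholders, not the actual recovery-table definition:

```yaml
# Illustrative sketch only. The actual INFORMATICA_CDC_RECOVERY.yml content
# is defined by Informatica; the property names below are hypothetical.
openapi: 3.0.3
components:
  schemas:
    INFORMATICA_CDC_RECOVERY:
      type: object
      properties:
        checkpoint_id:          # hypothetical recovery checkpoint identifier
          type: string
        checkpoint_timestamp:   # hypothetical checkpoint time field
          type: string
          format: date-time
```

Use the exact file content provided in the product documentation when you create the data stream; the sketch only illustrates the expected OpenAPI YAML layout.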
• Salesforce Data 360 reserves certain column names, such as cdp_sys_SourceVersion, DataSource, DataSourceObject, and KQ_Id, for internal use. If your source object contains any of these column names, rename the columns before you deploy the task to avoid task failure.
• In incremental load and combined load jobs, delete operations are ignored.
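The reserved-name check above is easy to automate before deployment. The following is a minimal sketch, not part of the connector, that flags source column names colliding with the reserved names listed in this section so you can rename them first; it assumes exact, case-sensitive name matching:

```python
# Column names that Salesforce Data 360 reserves for internal use,
# as listed in the guidelines above.
RESERVED_COLUMNS = {"cdp_sys_SourceVersion", "DataSource", "DataSourceObject", "KQ_Id"}

def find_reserved_collisions(source_columns):
    """Return the source column names that must be renamed before deployment.

    Assumes exact, case-sensitive matching against the reserved names.
    """
    return sorted(c for c in source_columns if c in RESERVED_COLUMNS)

# Example: two columns collide and should be renamed, e.g. with a prefix.
collisions = find_reserved_collisions(["OrderId", "DataSource", "KQ_Id", "Amount"])
print(collisions)  # ['DataSource', 'KQ_Id']
```

Run a check like this against each source object's column list before deploying the task.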