Rules and guidelines for mapping and mapping tasks
When you configure mappings and mapping tasks, adhere to the following guidelines:
•If you use a custom query in the Source transformation and configure the create target at runtime option in the Target transformation, you must specify the ExtendedColumnMetadata=true property in the Additional Connection Properties field of the PostgreSQL connection. This property retains the constraints from the source in the target object.
•Ensure that the selected columns, their data types, and their order in the query match the columns, data types, and order in the source object.
•When you use the query source type in a mapping to read from multiple tables, and you configure a join for one or more tables that have the same column names, the mapping fails.
For example, consider the following SQL query that uses a full outer join between the EMPLOYEE and DEPARTMENT tables in the SALES.PUBLIC schema, where both tables have a column named CITY:
SELECT EMP_ID, NAME, CITY, DEPT_ID, DEPT_NAME, CITY FROM SALES.PUBLIC.EMPLOYEE FULL OUTER JOIN SALES.PUBLIC.DEPARTMENT ON EMP_ID = DEPT_ID
To distinguish the conflicting column names, add aliases that the database can refer to while joining the tables:
SELECT e.EMP_ID, e.NAME, e.CITY AS ecity, d.DEPT_ID, d.DEPT_NAME, d.CITY AS dcity FROM SALES.PUBLIC.EMPLOYEE e FULL OUTER JOIN SALES.PUBLIC.DEPARTMENT d ON e.EMP_ID = d.DEPT_ID
•When you configure an SQL override for a PostgreSQL source, partitioning is not applicable.
•To configure the batch size, consider using the following formula: BatchSize = (<Threshold Memory in Bytes>)/(3 * <Approximate Row Size in Bytes>).
In the formula, the threshold memory in bytes represents the maximum amount of memory that you want the operation to consume. The approximate row size is the size of one row, which you can estimate from the target table field definitions.
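The formula above can be sketched as a small helper. The threshold and row-size values below are illustrative assumptions, not defaults from the product:

```python
# Hypothetical sketch of the batch-size formula:
# BatchSize = threshold memory / (3 * approximate row size).
# The 512 MB threshold and 2 KB row size are example values only.

def batch_size(threshold_bytes: int, approx_row_bytes: int) -> int:
    """Return a batch size that keeps memory use under the threshold."""
    return threshold_bytes // (3 * approx_row_bytes)

# A row of roughly 2 KB with a 512 MB memory threshold:
print(batch_size(512 * 1024 * 1024, 2 * 1024))  # prints 87381
```

For a different target table, recompute the approximate row size from its field definitions and adjust the threshold to the memory that you can afford for the operation.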
•When a mapping includes a column of the JSONB data type in the Update Columns property of the Target transformation and processes a large volume of data, the mapping runs significantly slower and takes a considerable amount of time to complete.