Consider the following rules and guidelines for Microsoft Fabric Lakehouse mappings:
General guidelines
- You can't create a mapping in advanced mode that reads from Microsoft Fabric Data Warehouse and writes to Microsoft Fabric Lakehouse. Such a mapping fails.
- When you specify a custom query for a source object, ensure that the query does not contain a semicolon at the end of the SQL statement. Otherwise, the mapping fails.
- You cannot use input parameters in custom queries that read data from Microsoft Fabric Lakehouse source objects.
- When you use a stored procedure in a custom query to read data from Microsoft Fabric Lakehouse source objects, the mapping fails at runtime.
- When you configure key range partitioning in the Source transformation of a mapping that reads data containing null values, the mapping ignores the null values and does not write them to the target.
- Ensure that you create the parameter before you override it using a parameter file.
- You can't parameterize a target created at runtime using the parameter file.
- A mapping that uses a MINUS operator in a custom query to read data from Microsoft Fabric Lakehouse source objects fails at runtime. Use an EXCEPT operator instead of a MINUS operator.
- When you specify a table name in a custom query to read data from Microsoft Fabric Lakehouse source objects, specify the table name exactly as it appears in the source, because table names are case sensitive.
- If the mapping contains both an advanced filter for an object, for example, <Table2>, in the query options section and a table name override in the advanced properties, use the override table name directly in the advanced filter condition. For example: <Table2>.id >= 1
- To avoid a Java heap size error when you write to Microsoft Fabric Lakehouse, allocate more memory to the DTM in the Secure Agent properties based on the volume of data that you want to write. Increase the -Xms and -Xmx values for the DTM in the JVM options in the system configuration details of the Secure Agent. The recommended -Xms value is 512 MB and the recommended -Xmx value is 1024 MB.
- When you filter records in a read operation, you can use only the following filter operators:
▪ Equals
▪ Greater
▪ Greater_or_equals
▪ Less_or_equals
▪ Less
▪ Not_Equals
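For example, a custom query that subtracts one result set from another can use EXCEPT as described above. The table and column names below are illustrative only:

```sql
-- Use EXCEPT, not MINUS; MINUS fails at runtime.
-- Table names must match the source exactly (case sensitive).
-- Note: no semicolon at the end of the statement.
SELECT OrderId, Region FROM SalesOrders
EXCEPT
SELECT OrderId, Region FROM ArchivedOrders
```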
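Based on the heap size guideline above, the DTM entries in the Secure Agent JVM options might look like the following sketch. The option slot names are illustrative; the recommended minimum values are shown:

```
JVMOption1: -Xms512m
JVMOption2: -Xmx1024m
```

Increase these values further if the mapping still fails with a Java heap size error for larger data volumes.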
Lookup Transformation
- You cannot set the default column value for the return field in a cached Lookup transformation in a Microsoft Fabric Lakehouse mapping in advanced mode.
- You cannot use Binary and Varbinary datatype columns as lookup conditions.
- When you configure a lookup data filter in a cached lookup, you cannot use a custom query as a lookup object or configure an SQL override. However, a table name override is applied when you configure a lookup data filter.
- You cannot use a custom query as a lookup object or configure an SQL override in an uncached lookup.
- If you parameterize the lookup condition, you can only use the AND operator between multiple conditions or expressions.
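A parameterized lookup condition that combines multiple conditions can therefore take only the following shape. The field names here are illustrative, not from the product documentation:

```
CustomerId = in_CustomerId AND Region = in_Region
```

Using OR between the conditions is not supported when the lookup condition is parameterized.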
SQL override
- When you specify an SQL override statement to override the default SQL query that reads data from a Microsoft Fabric Lakehouse source, ensure that the SQL query does not contain a semicolon at the end of the SQL statement. Otherwise, the mapping in advanced mode fails.
- When you specify an SQL override statement in the Source transformation to call a stored procedure in Microsoft Fabric Lakehouse, the mapping in advanced mode fails at runtime.
- When you use an ORDER BY clause along with Join or Union clauses in a custom query or in an SQL override statement to read data from Microsoft Fabric Lakehouse source objects, ensure that you add "offset 0 rows" at the end of the query. Otherwise, the mapping fails.
- If the mapping contains both a filter for the source object and an SQL override statement in the advanced source properties, the query in the SQL override statement takes precedence over the filter condition.
- The number and order of columns in an SQL override statement must match the number and order of the connected fields.
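For example, an SQL override that joins two tables and sorts the result must end with "offset 0 rows", as described above. The table and column names are illustrative only:

```sql
-- ORDER BY with a JOIN requires "offset 0 rows" at the end,
-- otherwise the mapping fails. No trailing semicolon.
SELECT o.OrderId, c.CustomerName
FROM Orders o
JOIN Customers c ON o.CustomerId = c.CustomerId
ORDER BY o.OrderId
OFFSET 0 ROWS
```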
Data types
- When you specify the SESSSTARTTIME variable in a query in a mapping, specify the query in the following format:
select to_timestamp('$$$SESSSTARTTIME','YYYY-MM-DD HH24:MI:SS.MS')::TIMESTAMP as xz;
- When you write data to a Microsoft Fabric Lakehouse target created at runtime, Float or Real data types are written as Double data type in the target. To write Float or Real data types, use the Edit Metadata option to edit the data type in the Target transformation.
- When you read data of the Float or Real data type from a source and write it to a target, the values written to the target might lose precision.
- A mapping fails in the following cases:
▪ Data is of the Date data type and the date is less than 1582-10-15.
▪ Data is of the Int96 data type and the timestamp is less than 1900-01-01T00:00:00Z.
To resolve this issue, specify the following spark session properties in the mapping task or in the custom properties file for the Secure Agent: