Consider the following rules and guidelines for Microsoft Fabric Lakehouse mappings:
General guidelines
- When you read from Microsoft Fabric Data Warehouse and write to Microsoft Fabric Lakehouse, the mapping fails.
- You can't configure partitioning in a mapping.
- When you specify a custom query for a source object, ensure that the query does not contain a semicolon at the end of the SQL statement. Otherwise, the mapping fails.
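  For example, the following custom query is a minimal sketch that assumes a hypothetical table named orders. Note that the query does not end with a semicolon:
    SELECT order_id, order_date, amount
    FROM orders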
- You cannot use input parameters in custom queries that read data from Microsoft Fabric Lakehouse source objects.
- When you use a stored procedure in a custom query to read data from Microsoft Fabric Lakehouse source objects, the mapping fails at runtime.
- Ensure that you create the parameter before you override it using a parameter file.
- You can't use a parameter file to parameterize a target that is created at runtime.
- A mapping that uses a MINUS operator in a custom query to read data from Microsoft Fabric Lakehouse source objects fails at runtime. Use an EXCEPT operator instead of a MINUS operator.
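  For example, the following sketch returns the rows that exist in a hypothetical orders_current table but not in a hypothetical orders_archive table, using EXCEPT instead of MINUS:
    SELECT order_id FROM orders_current
    EXCEPT
    SELECT order_id FROM orders_archive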
- When you use an ORDER BY clause along with Join or Union clauses in a custom query to read data from Microsoft Fabric Lakehouse source objects, ensure that you add "offset 0 rows" at the end of the query. Otherwise, the mapping fails.
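  For example, the following sketch combines two hypothetical tables, orders_current and orders_archive, with a Union clause and adds offset 0 rows after the ORDER BY clause:
    SELECT order_id, amount FROM orders_current
    UNION ALL
    SELECT order_id, amount FROM orders_archive
    ORDER BY order_id
    OFFSET 0 ROWS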
- When you specify a table name in a custom query to read data from Microsoft Fabric Lakehouse source objects, specify the table name exactly as it appears in the source because table names are case sensitive.
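  For example, if the source table is named SalesOrders, reference it as SalesOrders in the custom query, not as salesorders or SALESORDERS. The table name here is assumed for illustration.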
- To avoid a Java heap size error when you write to Microsoft Fabric Lakehouse, you need to allocate more memory to the DTM in the Secure Agent properties based on the amount of data that you want to write. Increase the -Xms and -Xmx values for the DTM in the JVM options in the system configuration details of the Secure Agent. The recommended -Xms value is 512 MB, while the recommended -Xmx value is 1024 MB.
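  For example, you might set the following JVM options for the DTM in the system configuration details of the Secure Agent. The values shown are the recommended starting points; increase them based on the amount of data that you write:
    -Xms512m
    -Xmx1024m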
- When you filter records in a read operation, you can use only the following filter operators:
▪ Equals
▪ Greater
▪ Greater_or_equals
▪ Less_or_equals
▪ Less
▪ Not_Equals
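  For example, a simple filter that uses a hypothetical amount field, the Greater_or_equals operator, and the value 100 corresponds to the condition amount >= 100 and returns only the rows that satisfy it.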
Data types
- When you write data to a Microsoft Fabric Lakehouse target created at runtime, the Float and Real data types are mapped to and written as the Double data type in the target. To write Float or Real data types, use the Edit Metadata option in the Target transformation to edit the data type.
- When you read data of the Float or Real data type from a source and write to a target, the values written to the target are not accurate.
- A mapping fails in the following cases:
▪ Data is of the Date data type and the date is earlier than 1582-10-15.
▪ Data is of the Int96 data type and the timestamp is earlier than 1900-01-01T00:00:00Z.
To resolve this issue, specify the following Spark session properties in the mapping task or in the custom properties file for the Secure Agent: