Transformations with Microsoft Fabric Data Warehouse
The Source, Target, and other transformations that you configure in mappings in SQL ELT mode can behave differently from the same transformations in regular mappings.
A mapping in SQL ELT mode includes only transformations that the cloud ecosystem can process.
Some properties in mappings might not apply to mappings in SQL ELT mode. This topic lists the transformations, source and target transformation properties, functions, and operators that you can use in mappings in SQL ELT mode. It also provides rules and guidelines for specific configurations and points out where mappings in SQL ELT mode differ from regular mappings.
Sources in mappings in SQL ELT mode
When you configure a source connection in a Source transformation, you can choose only a Microsoft Fabric Lakehouse or a Microsoft Fabric Data Warehouse connection.
Microsoft Fabric Data Warehouse source properties
You can configure the following properties for a Microsoft Fabric Data Warehouse source:
•Source type - Single object, query, or parameter.
•Query options - Filter. You can use completely parameterized and advanced filter conditions.
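For example, a hypothetical advanced filter condition that uses the native expression syntax of Microsoft Fabric Data Warehouse on the source fields ORDER_DATE and ORDER_STATUS might look like the following:
ORDER_DATE >= '2023-01-01' AND ORDER_STATUS <> 'CANCELLED'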
Transformations in mappings in SQL ELT mode
A mapping in SQL ELT mode includes transformations that Microsoft Fabric Data Warehouse can process.
You can use the following transformations in a mapping in SQL ELT mode:
•Aggregator
•Expression
•Filter
•Joiner
•Lookup
•Rank
•Router
•Source
•Target
•Union
Functions in mappings in SQL ELT mode
When you create expressions in a mapping in SQL ELT mode, you must use the native functions and expression syntax of Microsoft Fabric Data Warehouse and not Informatica functions and expression syntax.
You can use the following native functions in a mapping in SQL ELT mode:
Aggregate functions
- APPROX_COUNT()
- AVG()
- CHECKSUM_AGG()
- COUNT()
- COUNT_BIG()
- GROUPING()
- MAX()
- MIN()
- STDEV()
- STDEVP()
- SUM()
- VAR()
- VARP()
Bit manipulation functions
- BIT_COUNT()
- GET_BIT()
- LEFT_SHIFT()
- RIGHT_SHIFT()
- SET_BIT()
Cryptographic functions
- HASHBYTES()
Date and Time functions
- GETUTCDATE()
Expression functions
- NULLIF()
JSON functions
- ISJSON()
Mathematical functions
- ABS()
- ACOS()
- ASIN()
- ATAN()
- ATN2()
- CEILING()
- COS()
- COT()
- DEGREES()
- EXP()
- FLOOR()
- LOG()
- LOG10()
- PI()
- POWER()
- RADIANS()
- ROUND()
- SIGN()
- SIN()
- SQRT()
- SQUARE()
- TAN()
Metadata functions
- OBJECTPROPERTY()
- SCHEMA_ID()
- SCHEMA_NAME()
- STATS_DATE()
Security functions
- SUSER_NAME()
- SUSER_SNAME()
- USER_NAME()
String functions
- ASCII()
- BASE64_DECODE()
- BASE64_ENCODE()
- CHAR()
- CHARINDEX()
- CONCAT()
- CONCAT_WS()
- DIFFERENCE()
- FORMAT()
- LEFT()
- LEN()
- LOWER()
- LTRIM()
- PATINDEX()
- QUOTENAME()
- REPLACE()
- REPLICATE()
- REVERSE()
- RIGHT()
- RTRIM()
- SOUNDEX()
- SPACE()
- STR()
- STUFF()
- SUBSTRING()
- TRIM()
- UNICODE()
- UPPER()
System functions
- COMPRESS()
- DECOMPRESS()
- ISNULL()
- ISNUMERIC()
- NEWID()
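For example, to replace null values in a source field with zero in an Expression transformation, you might use the following native expression instead of an Informatica expression such as IIF(ISNULL(ORDER_AMT), 0, ORDER_AMT), where ORDER_AMT is a hypothetical source field:
ISNULL(ORDER_AMT, 0)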
Operators in mappings in SQL ELT mode
When you use mappings in SQL ELT mode, Data Integration converts the expression in the transformation by determining equivalent operators in the database. If there is no equivalent operator, Data Integration processes the transformation logic.
You can push the following operators to Microsoft Fabric Data Warehouse:
Arithmetic operators
- +
- -
- *
- /
- %
Comparison operators
- =
- !=
- <>
- >
- >=
- <
- <=
Logical operators
- AND
- OR
- NOT
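For example, the following hypothetical filter condition uses only operators that Data Integration can push to Microsoft Fabric Data Warehouse, where QUANTITY, UNIT_PRICE, and ORDER_STATUS are hypothetical source fields:
(QUANTITY * UNIT_PRICE) >= 100 AND ORDER_STATUS <> 'CANCELLED'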
Rules and guidelines in mappings in SQL ELT mode
Certain rules and guidelines apply to mappings in SQL ELT mode.
General rules and guidelines
•When you run a mapping in SQL ELT mode that uses a custom query in the Source or Target transformation along with a table name or schema name override, the table or schema specified in the custom query takes precedence.
•When you use a Joiner transformation to join two tables with the same name and a schema name override is configured for both the sources, the mapping fails to honor the schema override.
•When you configure a Lookup transformation to look up data of the Bit data type, the mapping fails with the following error:
[ERROR] ELT failed due to the following error: [Operand data type bit is invalid for max operator.]
Ensure that the lookup object does not contain the Bit data type.
•You cannot specify columns of Binary data type in a lookup condition.
•You cannot use a Filter transformation to filter data with the date or datetime2 data types.
•When you use the ISNUMERIC() function, you cannot process the date and datetime2 data types.
•When you use the ABS() function to process the minimum BIGINT value of -9,223,372,036,854,775,808, the mapping fails with an arithmetic overflow error because the result exceeds the maximum BIGINT value of 9,223,372,036,854,775,807. Ensure that the values that you pass to ABS() are not lower than -9,223,372,036,854,775,807, as shown in the sketch after this list.
•In String functions that accept varbinary arguments, do not pass constant values of the varbinary data type. Instead, pass a varbinary column to the String function. For example, instead of SUBSTRING(0x496E666F726D617469636120496E646961, 1, 4) or SUBSTRING('0x496E666F726D617469636120496E646961', 1, 4), pass the varbinary column in the format SUBSTRING(col_varbin, 1, 4), where col_varbin is a column of the varbinary data type.
•When you use constant values in functions in an Expression transformation, and those expressions are implicitly converted to the Bigint data type, the mapping fails with the following error:
[ERROR] ELT failed due to the following error: [Invalid length parameter passed to the substring function.]
•When you preview data or run a mapping in SQL ELT mode to write float or real data types, the source data and target data don't match in some cases. The values written to the target are approximate and not the same as in the source. To write the exact values, map the real and float data types to the numeric or decimal data types.
•When you parameterize a mapping in SQL ELT mode, do not select the Allow parameter to be overridden at run time option.
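The following statements are a minimal sketch of the ABS() guideline, assuming that you run them directly against Microsoft Fabric Data Warehouse:
-- Succeeds. The absolute value fits within the BIGINT range.
SELECT ABS(CAST(-9223372036854775807 AS BIGINT));
-- Fails with an arithmetic overflow error. The result, 9,223,372,036,854,775,808,
-- exceeds the maximum BIGINT value of 9,223,372,036,854,775,807.
SELECT ABS(CAST(-9223372036854775808 AS BIGINT));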
Microsoft Azure Data Lake Storage Gen2 source guidelines
Certain rules and guidelines apply to mappings in SQL ELT mode when you load data from a Microsoft Azure Data Lake Storage Gen2 source to Microsoft Fabric Data Warehouse.
•You can use only the Shared Key authentication type to access Microsoft Azure Data Lake Storage Gen2.
•When you select the source type as File, consider the following guidelines:
- Ensure that the File Name Override and Directory Override properties do not contain wildcard characters.
- Do not select the Allow Wildcard Characters and Recursive Directory Read options.
•When you select the source type as Directory, consider the following guidelines:
- All the source files in the directory must contain the same metadata.
- All the files must have the same file format. For example, flat file or Parquet.
- When you read flat files, you can configure only the Delimiter, Qualifier, and First Data Row formatting options. Ensure that all the files use the same formatting options.
- To read compressed data from the source, all the files must use the same compression format, for example, Gzip, Snappy, or uncompressed.
- When you specify a file in the File Name Override property, the file specified at run time overrides the file specified in the object.
- When you specify a file in the File Name Override property, recursive directory read is not honored.
•If a file name or directory name contains an asterisk (*), the Secure Agent treats the asterisk as a wildcard character.
•You can use only the asterisk (*) as a wildcard character in the file name and directory overrides, as shown in the example after this list.
•Do not specify a root directory (/) when you override the directory.
•Datetime2 values earlier than 1582-10-15 00:00:00.000000 that are stored in Parquet files as int96 might result in incorrect data because of historical differences between the Julian and Gregorian calendars.
•When you select Gzip as the compression format, you can read Gzip compressed flat files or Parquet files.
•When you use the Parquet file format and select None as the compression format, the mapping reads both uncompressed and Snappy compressed Parquet files.
•The mapping fails if the file name, directory name, or subdirectory name contains Unicode characters or special characters other than the underscore (_) character.
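For example, a hypothetical file name override that uses the asterisk as a wildcard character might look like the following:
sales_*.csv
This override matches all files in the source directory whose names start with sales_ and end with .csv.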