Informatica Mappings
This section describes new Informatica mapping features in version 10.2.2.
Data Types
Effective in version 10.2.2, you can enable high-precision mode in batch mappings that run on the Spark engine. In high-precision mode, the Spark engine can process decimal values with up to 38 digits of precision.
For more information, see the Informatica Big Data Management 10.2.2 User Guide.
Mapping Outputs
Effective in version 10.2.2, you can use mapping outputs in batch mappings that run as Mapping tasks in workflows on the Spark engine. You can persist the mapping outputs in the Model repository or bind the mapping outputs to workflow variables.
For more information, see the "Mapping Outputs" chapter in the Informatica 10.2.2 Developer Mapping Guide and the "Mapping Task" chapter in the Informatica 10.2.2 Developer Workflow Guide.
Mapping Parameters
Effective in version 10.2.2, you can assign expression parameters to port expressions in Aggregator, Expression, and Rank transformations that run in the native and non-native environments.
For more information, see the "Where to Assign Parameters" and "Dynamic Mappings" chapters in the Informatica 10.2.2 Developer Mapping Guide.
Optimizer Levels
Effective in version 10.2.2, you can configure the Auto optimizer level for mappings and mapping tasks. With the Auto optimizer level, the Data Integration Service applies optimizations based on the execution mode and the mapping contents.
The default optimizer level for new mappings is Auto.
When you upgrade to version 10.2.2, optimizer levels configured in mappings remain the same. To use the Auto optimizer level with upgraded mappings, you must manually change the optimizer level.
For more information, see the "Optimizer Levels" chapter in the Informatica 10.2.2 Developer Mapping Guide.
Sqoop
Effective in version 10.2.2, you can use the following new Sqoop features:
- Incremental data extraction support
  You can configure a Sqoop mapping to perform incremental data extraction based on an ID or timestamp. With incremental data extraction, Sqoop extracts only the data that has changed since the last extraction, which improves mapping performance. A sketch of the corresponding Sqoop arguments appears after this list.
- Vertica connectivity support
  You can configure Sqoop to read data from a Vertica source or write data to a Vertica target.
- Spark engine optimization for Sqoop pass-through mappings
  When you run a pass-through mapping with a Sqoop source on the Spark engine, the Data Integration Service optimizes mapping performance in the following scenarios (example DDL for a partitioned and bucketed Hive target appears after this list):
  - You write data to a Hive target that was created with a custom DDL query.
  - You write data to an existing Hive target that is either partitioned with a custom DDL query or partitioned and bucketed with a custom DDL query.
  - You write data to an existing Hive target that is both partitioned and bucketed.
- --infaownername argument support
  You can configure the --infaownername argument to indicate whether Sqoop must honor the owner name for a data object.
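To make the incremental extraction and Vertica connectivity support more concrete, the following sketch shows the kind of standard Apache Sqoop import arguments involved. This is an illustrative sketch only: in Big Data Management you supply equivalent arguments through the Sqoop-enabled JDBC connection and the mapping rather than on the command line, and the host, database, table, column, and credential values shown here are placeholders.

    # Incremental import from a Vertica source using Apache Sqoop's standard
    # incremental options (all connection and object names are placeholders).
    sqoop import \
      --connect jdbc:vertica://vertica-host:5433/sales_db \
      --driver com.vertica.jdbc.Driver \
      --username etl_user \
      --password-file /user/etl/.vertica.pwd \
      --table orders \
      --incremental lastmodified \
      --check-column last_updated \
      --last-value "2019-01-01 00:00:00" \
      --target-dir /staging/orders_delta
    # In an Informatica Sqoop mapping you can also supply the Informatica-specific
    # --infaownername argument to control whether Sqoop honors the owner name for
    # the data object; see the User Guide for its exact usage.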
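In the pass-through optimization scenarios above, "partitioned and bucketed" refers to how the existing Hive target table is defined. A Hive target that is both partitioned and bucketed is typically created with DDL along the following lines (table name, columns, and bucket count are illustrative):

    CREATE TABLE orders_by_region (
        order_id     BIGINT,
        amount       DECIMAL(38,10),
        last_updated TIMESTAMP
    )
    PARTITIONED BY (region STRING)
    CLUSTERED BY (order_id) INTO 8 BUCKETS
    STORED AS ORC;

When a Sqoop pass-through mapping writes to a target defined like this on the Spark engine, the Data Integration Service applies the optimization described above.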
For more information, see the Informatica Big Data Management 10.2.2 User Guide.