Release Tasks

This section describes release tasks in version 10.2.2. Release tasks are tasks that you must perform after you upgrade to version 10.2.2.

Big Data Management

This section describes release tasks for Big Data Management in version 10.2.2.

Decimal Data Types

If you upgrade to version 10.2.2, mappings that are enabled for high-precision mode and run on the Spark engine must specify a scale argument for the TO_DECIMAL and TO_DECIMAL38 functions. If the functions do not have a scale argument, the mappings fail.
For example, if a mapping created before the upgrade uses high-precision mode and contains the expression TO_DECIMAL(3), you must add a scale argument, such as TO_DECIMAL(3,2), before you can run the upgraded mapping on the Spark engine.
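The effect of a scale argument can be sketched outside Informatica; the following is a minimal Python illustration of scale semantics only. The to_decimal helper, its name, and its half-up rounding mode are assumptions for demonstration, not the Spark engine's actual implementation:

```python
from decimal import Decimal, ROUND_HALF_UP

def to_decimal(value, scale):
    # Quantize to `scale` digits after the decimal point,
    # loosely mirroring TO_DECIMAL(value, scale).
    # The rounding mode is an assumption; the engine's may differ.
    return Decimal(str(value)).quantize(
        Decimal(1).scaleb(-scale), rounding=ROUND_HALF_UP
    )

print(to_decimal(3, 2))  # 3.00
```

With an explicit scale, the result always carries a fixed number of fractional digits, which is the behavior the upgraded mappings require.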
For more information, see the Informatica Big Data Management 10.2.2 User Guide.

Mass Ingestion

Effective in version 10.2.2, you can use the Mass Ingestion tool to ingest data using an incremental load.
If you upgrade to version 10.2.2, mass ingestion specifications are upgraded to have incremental load disabled. Before you can run incremental loads on existing specifications, complete the following tasks:
  1. Edit the specification.
  2. On the Definition page, select Enable Incremental Load.
  3. On the Source and Target pages, configure the incremental load options.
  4. Save the specification.
  5. Redeploy the specification to the Data Integration Service.
Note: The redeployed mass ingestion specification runs on the Spark engine.
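The redeployment in step 5 can also be scripted. The sketch below uses the infacmd mass ingestion plugin; the exact option names and every option value shown are assumptions to verify against the infacmd command reference for your installation:

```
infacmd.sh mi deploySpec -dn MyDomain -un Administrator -pd MyPassword -sn MyDataIntegrationService -spec MySpecification
```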
For more information, see the Informatica Big Data Management 10.2.2 Mass Ingestion Guide.

Python Transformation

If you upgrade to version 10.2.2, the Python transformation can process data more efficiently in Big Data Management.
To realize the performance improvements, configure the following Spark advanced properties in the Hadoop connection:
infaspark.pythontx.exec
Required to run a Python transformation on the Spark engine for Data Engineering Integration. The location of the Python executable binary on the worker nodes in the Hadoop cluster.
For example, set to:
infaspark.pythontx.exec=/usr/bin/python3.4
If you use the installation of Python on the Data Integration Service machine, set the value to the Python executable binary in the Informatica installation directory on the Data Integration Service machine.
For example, set to:
infaspark.pythontx.exec=INFA_HOME/services/shared/spark/python/lib/python3.4
infaspark.pythontx.executorEnv.PYTHONHOME
Required to run a Python transformation on the Spark engine for Data Engineering Integration and Data Engineering Streaming. The location of the Python installation directory on the worker nodes in the Hadoop cluster.
For example, set to:
infaspark.pythontx.executorEnv.PYTHONHOME=/usr
If you use the installation of Python on the Data Integration Service machine, use the location of the Python installation directory on the Data Integration Service machine.
For example, set to:
infaspark.pythontx.executorEnv.PYTHONHOME=INFA_HOME/services/shared/spark/python/
After you configure the advanced properties, the Spark engine does not use Jep to run Python code in the Python transformation.
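For reference, a Hadoop connection that uses a cluster-side Python 3.4 installation would carry both advanced properties together; this sketch simply combines the example values shown above:

```
infaspark.pythontx.exec=/usr/bin/python3.4
infaspark.pythontx.executorEnv.PYTHONHOME=/usr
```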
For information about installing Python, see the Informatica Big Data Management 10.2.2 Integration Guide.

Big Data Streaming

This section describes release tasks for Big Data Streaming in version 10.2.2.

Kafka Target

Effective in version 10.2.2, the data type of the key header port in the Kafka target is binary. Previously, the data type of the key header port was string.
After you upgrade, to run an existing streaming mapping, you must re-create the data object and update the streaming mapping to use the newly created data object.
For more information about re-creating the data object, see the Informatica Big Data Management 10.2.2 Integration Guide.

Kafka Connection Properties

After you upgrade, for a Kafka connection, configure the Kafka messaging broker version to 0.10.1.x-2.0.0.

PowerExchange Adapters for Informatica

This section describes release tasks for Informatica adapters in version 10.2.2.

PowerExchange for HBase

Effective in version 10.2.2, you must run a mapping on the Spark engine to look up data in an HBase resource.
If you previously configured a mapping to run in the native environment to look up data in an HBase resource, you must update the execution engine to Spark after you upgrade to version 10.2.2. Otherwise, the mapping fails.
For more information, see the Informatica PowerExchange for HBase 10.2.2 User Guide.

PowerExchange for Microsoft Azure SQL Data Warehouse

After you upgrade from a previous release to version 10.2.2, the existing mappings that contain the following data types fail on the Spark engine at run time:
To run the existing mappings successfully, you must map these data types to the string data type or re-import the object.
For more information, see the Informatica PowerExchange for Microsoft Azure SQL Data Warehouse 10.2.2 User Guide.