Data Engineering Integration

This section describes the changes to Data Engineering Integration in version 10.4.0.

Data Preview

Effective in version 10.4.0, the Data Integration Service uses Spark Jobserver to preview data on the Spark engine. Spark Jobserver allows for faster data preview jobs because it maintains a running Spark context instead of refreshing the context for each job. Mappings configured to run with Amazon EMR, Cloudera CDH, and Hortonworks HDP use Spark Jobserver to preview data.
Previously, the Data Integration Service used spark-submit scripts for all data preview jobs on the Spark engine. Mappings configured to run with Azure HDInsight and MapR continue to use spark-submit scripts to preview data on the Spark engine; for these distributions, data preview is available for technical preview.
For more information, see the "Data Preview" chapter in the Data Engineering Integration 10.4.0 User Guide.

Union Transformation

Effective in version 10.4.0, you can choose a Union transformation as the preview point when you preview data. Previously, the Union transformation was not supported as a preview point.

infacmd dp Commands

Effective in version 10.4.0, you can use the infacmd dp plugin to perform data preview operations. Use infacmd dp commands to manually start and stop the Spark Jobserver.
The infacmd dp plugin includes the following commands:

startSparkJobServer
    Starts the Spark Jobserver on the Data Integration Service machine. By default, the Spark Jobserver starts when you preview hierarchical data.

stopSparkJobServer
    Stops the Spark Jobserver running on the specified Data Integration Service. By default, the Spark Jobserver stops if it is idle for 60 minutes or when the Data Integration Service is stopped or recycled.
For more information, see the "infacmd dp Command Reference" chapter in the Informatica 10.4.0 Command Reference.

Date/Time Format on Databricks

Effective in version 10.4.0, when the Databricks Spark engine reads and writes date/time values, it uses the format YYYY-MM-DD HH24:MM:SS.US.
Previously, you set the date/time format in the mapping run-time preferences in the Developer tool.
You might need to perform additional tasks to continue using date/time data on the Databricks engine. For more information, see the "Databricks Integration" chapter in the Data Engineering 10.4.0 Integration Guide.
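To illustrate, a date/time value in this format looks like 2019-12-31 23:59:59.123456. The following Python sketch renders a timestamp in the same layout; the strftime pattern %Y-%m-%d %H:%M:%S.%f is an equivalent rendering of YYYY-MM-DD HH24:MM:SS.US, not an Informatica format string:

    from datetime import datetime

    # Render a timestamp in the layout the Databricks Spark engine uses:
    # a 24-hour clock with microsecond precision.
    value = datetime(2019, 12, 31, 23, 59, 59, 123456)
    print(value.strftime("%Y-%m-%d %H:%M:%S.%f"))
    # Output: 2019-12-31 23:59:59.123456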

Null Values in Target

Effective in version 10.4.0, the following changes are applicable when you write data to a complex file:
Note: You can manually edit the schema if you do not want to allow null values in the target. You cannot edit the schema to prevent null values in the target when mapping flow is enabled.
These changes are applicable to the following adapters:

Python Transformation

Effective in version 10.4.0, you access resource files in the Python code by referencing an index in the array resourceFilesArray. Use resourceFilesArray in new mappings that you create in version 10.4.0.
Previously, the array was named resourceJepFile. Upgraded mappings that use resourceJepFile will continue to run successfully.
For more information, see the "Python Transformation" chapter in the Informatica Data Engineering Integration 10.4.0 User Guide.