
Processing Hierarchical Data on the Spark Engine Overview

You can use complex data types, such as array, struct, and map, in mappings that run on the Spark engine. With complex data types, the Spark engine directly reads, processes, and writes hierarchical data in complex files.
The Spark engine can process hierarchical data in Avro, JSON, and Parquet complex files. The Spark engine uses complex data types to represent the native data types for hierarchical data in complex files. For example, hierarchical data of type record in an Avro file is represented as a struct data type on the Spark engine.
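The record-to-struct correspondence can be sketched with a small lookup, shown below as a hedged illustration. The dictionary and function names are invented for this example and are not part of the product; the mapping itself follows how Spark represents Avro complex types (record as struct, array as array, map as map).

```python
import json

# Illustrative mapping (assumption, not a product API) from Avro native
# types to the Spark engine complex data types that represent them.
AVRO_TO_SPARK = {
    "record": "struct",
    "array": "array",
    "map": "map",
}

def spark_type_for(avro_type):
    """Return the Spark complex data type used for an Avro native type.
    Primitive types pass through unchanged."""
    return AVRO_TO_SPARK.get(avro_type, avro_type)

# An Avro schema fragment: the top-level record, and the nested record
# in the "address" field, would both be projected as struct columns.
schema = json.loads("""
{
  "type": "record",
  "name": "employee",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "address",
     "type": {"type": "record", "name": "addr",
              "fields": [{"name": "city", "type": "string"}]}}
  ]
}
""")

print(spark_type_for(schema["type"]))  # struct
```

Running the sketch confirms that a record-typed schema resolves to a struct column, while primitive types such as string are left as-is.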
You can develop mappings for the following hierarchical data processing scenarios:
To read from and write to complex files, you create complex file data objects. Configure the read and write operations for the complex file data object to project columns as complex data types. Read and Write transformations based on these complex file data objects can read and write hierarchical data.
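To give a rough sense of what projecting columns as complex data types enables, the following self-contained Python sketch treats a nested JSON record the way a complex-typed read operation would: one column stays an array, another stays a struct, so downstream logic can address nested elements directly. The field names and sample values are hypothetical.

```python
import json

# Hypothetical hierarchical JSON record, as it might appear in a complex file.
raw = ('{"name": "Ana", '
       '"phones": ["555-0100", "555-0199"], '
       '"address": {"city": "Lisbon", "zip": "1000-001"}}')

row = json.loads(raw)

# With complex data types projected, "phones" is an array column and
# "address" is a struct column, so nested elements are directly addressable
# instead of being flattened or treated as an opaque string.
first_phone = row["phones"][0]   # array element access
city = row["address"]["city"]    # struct field access

print(first_phone, city)
```

Without complex data types, the same record would have to be flattened into relational columns or parsed as a string before the nested values could be used.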
Configure the following objects and transformation properties in a mapping to process hierarchical data:
You can also use hierarchical conversion wizards to simplify some of the mapping development tasks.