
Aggregator Transformation in the Hadoop Environment

How the Aggregator transformation is processed in the Hadoop environment depends on the engine that runs the transformation.

Aggregator Transformation Support on the Blaze Engine

Some processing rules for the Blaze engine differ from the processing rules for the Data Integration Service.

Mapping Validation

Mapping validation fails in the following situations:

Aggregate Functions

If you use a port in an expression in the Aggregator transformation but you do not use the port within an aggregate function, the Blaze engine might use any row in the port to process the expression.
Because Hadoop execution is distributed, the Blaze engine might not be able to determine the last row in the port, so the row that it uses might not be the last row.
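This nondeterminism can be illustrated with a minimal Python sketch. The data and the grouping logic are hypothetical; the shuffle stands in for the unpredictable row order of distributed execution:

```python
import random
from collections import defaultdict

# Hypothetical rows: (group key, value). In a distributed engine, the
# order in which rows within a group are processed is not guaranteed.
rows = [("A", 10), ("A", 20), ("B", 5), ("B", 7)]

def aggregate(rows):
    """SUM(value) per group, plus a non-aggregated 'value' port.
    The non-aggregated port keeps whichever row happened to be
    processed last, which depends on the processing order."""
    sums = defaultdict(int)
    last_seen = {}
    for key, value in rows:
        sums[key] += value
        last_seen[key] = value  # any row in the group may end up here
    return dict(sums), last_seen

shuffled = rows[:]
random.shuffle(shuffled)  # simulate nondeterministic distributed order
sums, non_agg = aggregate(shuffled)
# sums is deterministic; non_agg varies with the processing order
```

The aggregate results are stable across runs, but the non-aggregated port can surface a different row each time the mapping runs, which is why the engine makes no guarantee about using the last row.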

Data Cache Optimization

The data cache for the Aggregator transformation is optimized to use variable length to store binary and string data types that pass through the transformation. The optimization is enabled for record sizes up to 8 MB. If the record size is greater than 8 MB, variable-length optimization is disabled.
When the data cache stores data using variable length, the Aggregator transformation is optimized to use sorted input, and a pass-through Sorter transformation is inserted before the Aggregator transformation in the run-time mapping.
To view the Sorter transformation, view the optimized mapping or view the execution plan in the Blaze validation environment.
During data cache optimization, the data cache and the index cache for the Aggregator transformation are set to Auto. The sorter cache for the Sorter transformation is set to the same size as the data cache for the Aggregator transformation. Therefore, to configure the sorter cache, you configure the size of the data cache for the Aggregator transformation.
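The sizing rules above can be summarized in a short Python sketch. The function name, the return structure, and the byte-level threshold check are illustrative only, not an actual product API:

```python
EIGHT_MB = 8 * 1024 * 1024  # documented record-size limit for the optimization

def plan_caches(record_size_bytes):
    """Illustrative sketch of the documented Blaze cache behavior.
    Variable-length storage applies only to records up to 8 MB; when it
    applies, a pass-through Sorter is inserted and its cache mirrors the
    Aggregator data cache setting."""
    variable_length = record_size_bytes <= EIGHT_MB
    plan = {
        "variable_length": variable_length,
        "aggregator_data_cache": "Auto",   # set during optimization
        "aggregator_index_cache": "Auto",  # set during optimization
        "sorter_inserted": variable_length,
    }
    if variable_length:
        # The sorter cache always matches the Aggregator data cache size,
        # so you size both by configuring the Aggregator data cache.
        plan["sorter_cache"] = plan["aggregator_data_cache"]
    return plan
```

The key takeaway the sketch encodes: there is no independent sorter cache setting; sizing flows from the Aggregator data cache.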

Aggregator Transformation Support on the Spark Engine

Some processing rules for the Spark engine differ from the processing rules for the Data Integration Service.

Mapping Validation

Mapping validation fails in the following situations:

Aggregate Functions

If you use a port in an expression in the Aggregator transformation but you do not use the port within an aggregate function, the Spark engine might use any row in the port to process the expression.
Because Hadoop execution is distributed, the Spark engine might not be able to determine the last row in the port, so the row that it uses might not be the last row.

Data Cache Optimization

You cannot optimize the data cache for the transformation to store data using variable length.

Aggregator Transformation Support on the Hive Engine

Some processing rules for the Hive engine differ from the processing rules for the Data Integration Service.

Mapping Validation

Mapping validation fails in the following situations:

Aggregate Functions

If you use a port in an expression in the Aggregator transformation but you do not use the port within an aggregate function, the Hive engine might use any row in the port to process the expression.
Because Hadoop execution is distributed, the Hive engine might not be able to determine the last row in the port, so the row that it uses might not be the last row.

Data Cache Optimization

You cannot optimize the data cache for the transformation to store data using variable length.