PowerExchange Adapters
This section describes new PowerExchange adapter features in version 10.1.
PowerExchange Adapters for Informatica
This section describes new Informatica adapter features in version 10.1.
PowerExchange for HDFS
Effective in version 10.1, you can use PowerExchange for HDFS to read Avro and Parquet data files from, and write them to, HDFS and the local file system without using a Data Processor transformation.
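In the adapter, these formats are read and written through complex file data objects in the Developer tool. Purely for orientation, the following sketch shows the same Avro and Parquet file formats handled with open-source Python libraries (pyarrow and fastavro); the file paths and schema are illustrative assumptions and are not part of the adapter.

    # Illustrative only: reading and writing Parquet and Avro with open-source
    # libraries (pyarrow, fastavro). PowerExchange for HDFS handles these formats
    # natively in mappings; this sketch only demonstrates the file formats.
    import pyarrow as pa
    import pyarrow.parquet as pq
    from fastavro import writer, reader, parse_schema

    # Write and read a Parquet file (path is a placeholder).
    table = pa.table({"id": [1, 2, 3], "name": ["a", "b", "c"]})
    pq.write_table(table, "/tmp/customers.parquet")
    print(pq.read_table("/tmp/customers.parquet").num_rows)

    # Write and read an Avro file with a hypothetical schema.
    schema = parse_schema({
        "type": "record",
        "name": "Customer",
        "fields": [{"name": "id", "type": "int"}, {"name": "name", "type": "string"}],
    })
    with open("/tmp/customers.avro", "wb") as out:
        writer(out, schema, [{"id": 1, "name": "a"}])
    with open("/tmp/customers.avro", "rb") as src:
        for record in reader(src):
            print(record)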
For more information, see the Informatica PowerExchange for HDFS 10.1 User Guide.
PowerExchange for Hive
Effective in version 10.1, you can use char and varchar data types in mappings. You can also select different Hive databases when you create a data object and a mapping.
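The data types and the database selection are set on the Hive data object in the Developer tool. As a point of reference only, the sketch below shows char and varchar columns in a table in a non-default Hive database using plain HiveQL through PyHive; the host, database, and table names are illustrative assumptions.

    # Illustrative only: char/varchar columns and a non-default Hive database,
    # expressed in plain HiveQL through PyHive. The PowerExchange for Hive
    # adapter itself is configured in the Developer tool, not through this API.
    from pyhive import hive  # assumes a reachable HiveServer2 instance

    # Hypothetical HiveServer2 host and database.
    conn = hive.Connection(host="hive.example.com", port=10000, database="sales_db")
    cur = conn.cursor()

    # char and varchar data types in a Hive table definition.
    cur.execute("""
        CREATE TABLE IF NOT EXISTS customers (
            country_code CHAR(2),
            customer_name VARCHAR(100)
        )
    """)
    cur.execute("SELECT country_code, customer_name FROM customers LIMIT 10")
    for row in cur.fetchall():
        print(row)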
For more information, see the Informatica PowerExchange for Hive 10.1 User Guide.
PowerExchange for Teradata Parallel Transporter API
Effective in version 10.1, you can enable Teradata Connector for Hadoop (TDCH) to run a Teradata mapping on a Blaze engine. When you run the mapping, the Data Integration Service pushes the mapping to a Hadoop cluster and processes it on a Blaze engine, which significantly increases performance.
For more information, see the Informatica PowerExchange for Teradata Parallel Transporter API 10.1 User Guide.
PowerExchange Adapters for PowerCenter
This section describes new PowerCenter adapter features in version 10.1.
PowerExchange for Greenplum
Effective in version 10.1, you can configure Kerberos authentication for native Greenplum connections.
This feature is also available in 9.6.1 HotFix 4. It is not available in 10.0.
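The Kerberos settings themselves are made on the Greenplum connection in PowerCenter, as described in the user guide. Purely as an illustration of Kerberos-authenticated access to a Greenplum database, the sketch below uses psycopg2 (libpq) with a GSSAPI ticket obtained beforehand, for example with kinit; the host, database, and Kerberos service name are assumptions.

    # Illustrative only: connecting to Greenplum with Kerberos (GSSAPI) through
    # libpq/psycopg2. A valid Kerberos ticket must already exist (for example,
    # obtained with kinit). PowerExchange for Greenplum configures Kerberos on
    # the PowerCenter connection instead; the values below are placeholders.
    import psycopg2

    conn = psycopg2.connect(
        host="gpmaster.example.com",   # hypothetical Greenplum master host
        dbname="warehouse",            # hypothetical database
        krbsrvname="postgres",         # Kerberos service name (libpq parameter)
    )
    with conn.cursor() as cur:
        cur.execute("SELECT current_user")
        print(cur.fetchone())
    conn.close()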
For more information, see the "Greenplum Sessions and Workflows" chapter in the Informatica 10.1 PowerExchange for Greenplum User Guide for PowerCenter.