PowerExchange Adapters
This section describes new PowerExchange adapter features in 10.2.
PowerExchange Adapters for Informatica
This section describes new Informatica adapter features in 10.2.
PowerExchange for Amazon Redshift
Effective in version 10.2, PowerExchange for Amazon Redshift includes the following new features:
- You can read data from or write data to Amazon S3 buckets in the following regions:
    - Asia Pacific (Mumbai)
    - Asia Pacific (Seoul)
    - Canada (Central)
    - China (Beijing)
    - EU (London)
    - US East (Ohio)
- You can run Amazon Redshift mappings on the Spark engine. When you run the mapping, the Data Integration Service pushes the mapping to a Hadoop cluster and processes the mapping on the Spark engine, which significantly increases performance.
- You can use AWS Identity and Access Management (IAM) authentication to securely control access to Amazon S3 resources. A minimal sketch of IAM-based access appears after this list.
- You can connect to Amazon Redshift clusters available in a Virtual Private Cloud (VPC) through VPC endpoints.
- You can use AWS Identity and Access Management (IAM) authentication to run a session on the EMR cluster.
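IAM authentication is handled internally by the adapter, but the mechanism can be illustrated with a minimal AWS SDK for Java sketch. This is not Informatica code: the bucket name is hypothetical, and the point is that no access keys appear anywhere; credentials are resolved from the environment or from the IAM role attached to the EC2 or EMR instance.

```java
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class IamS3AccessSketch {
    public static void main(String[] args) {
        // No access key or secret key in code: the provider chain checks
        // environment variables, Java system properties, the shared
        // credentials file, and finally the EC2/EMR instance profile (IAM role).
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withCredentials(new DefaultAWSCredentialsProviderChain())
                .withRegion(Regions.AP_SOUTH_1) // Asia Pacific (Mumbai), one of the newly supported regions
                .build();

        // Access is granted or denied by the IAM policies attached to the
        // identity, not by anything in this program. "my-bucket" is hypothetical.
        s3.listObjects("my-bucket")
          .getObjectSummaries()
          .forEach(summary -> System.out.println(summary.getKey()));
    }
}
```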
For more information, see the Informatica PowerExchange for Amazon Redshift 10.2 User Guide.
PowerExchange for Amazon S3
Effective in version 10.2, PowerExchange for Amazon S3 includes the following new features:
- You can read data from or write data to Amazon S3 buckets in the following regions:
    - Asia Pacific (Mumbai)
    - Asia Pacific (Seoul)
    - Canada (Central)
    - China (Beijing)
    - EU (London)
    - US East (Ohio)
- You can compress data in the following formats when you read data from or write data to Amazon S3 in the native environment and on the Spark engine:
    | Compression format | Read | Write |
    | --- | --- | --- |
    | Bzip2 | Yes | Yes |
    | Deflate | No | Yes |
    | Gzip | Yes | Yes |
    | Lzo | Yes | Yes |
    | None | Yes | Yes |
    | Snappy | No | Yes |
- You can select the type of source from which you want to read data in the Source Type option under the advanced properties for an Amazon S3 data object read operation. You can select Directory or File source types.
- You can select the type of the data source in the Resource Format option under the Amazon S3 data object properties. You can read data from the following source formats:
    - Binary
    - Flat
    - Avro
    - Parquet
- You can connect to Amazon S3 buckets available in a Virtual Private Cloud (VPC) through VPC endpoints. A minimal sketch of a client configured for a VPC endpoint appears after this list.
- You can run Amazon S3 mappings on the Spark engine. When you run the mapping, the Data Integration Service pushes the mapping to a Hadoop cluster and processes the mapping on the Spark engine.
- You can overwrite existing files. Select the Overwrite File(s) If Exists option in the Amazon S3 data object write operation properties to overwrite the existing files.
- You can use AWS Identity and Access Management (IAM) authentication to securely control access to Amazon S3 resources.
- You can filter the metadata to optimize search performance in the Object Explorer view.
- You can use AWS Identity and Access Management (IAM) authentication to run a session on the EMR cluster.
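A VPC endpoint connection is likewise configured on the connection object rather than in code, but the effect can be sketched with the AWS SDK for Java: the client is pointed at the endpoint's DNS name instead of the public S3 endpoint. The endpoint URL and bucket name below are hypothetical.

```java
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class VpcEndpointS3Sketch {
    public static void main(String[] args) {
        // Hypothetical VPC endpoint DNS name; traffic to S3 stays inside the
        // VPC instead of traversing the public internet.
        AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                        "https://vpce-0abc1234-example.s3.us-east-2.vpce.amazonaws.com",
                        "us-east-2")) // signing region must match the endpoint's region
                .build();

        System.out.println(s3.doesBucketExistV2("my-bucket")); // hypothetical bucket
    }
}
```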
For more information, see the Informatica PowerExchange for Amazon S3 10.2 User Guide.
PowerExchange for HBase
Effective in version 10.2, PowerExchange for HBase contains the following new features:
- You can use PowerExchange for HBase to read from sources and write to targets stored in the WASB file system on Azure HDInsight.
- You can associate a cluster configuration with an HBase connection. A cluster configuration is an object in the domain that contains configuration information about the Hadoop cluster. The cluster configuration enables the Data Integration Service to push mapping logic to the Hadoop environment.
For more information, see the Informatica PowerExchange for HBase 10.2 User Guide.
PowerExchange for HDFS
Effective in version 10.2, you can associate a cluster configuration with an HDFS connection. A cluster configuration is an object in the domain that contains configuration information about the Hadoop cluster. The cluster configuration enables the Data Integration Service to push mapping logic to the Hadoop environment.
For more information, see the Informatica PowerExchange for HDFS 10.2 User Guide.
PowerExchange for Hive
Effective in version 10.2, you can associate a cluster configuration with a Hive connection. A cluster configuration is an object in the domain that contains configuration information about the Hadoop cluster. The cluster configuration enables the Data Integration Service to push mapping logic to the Hadoop environment.
For more information, see the Informatica PowerExchange for Hive 10.2 User Guide.
PowerExchange for MapR-DB
Effective in version 10.2, PowerExchange for MapR-DB contains the following new features:
- You can run MapR-DB mappings on the Spark engine. When you run the mapping, the Data Integration Service pushes the mapping to a Hadoop cluster and processes the mapping on the Spark engine, which significantly increases performance.
- You can configure dynamic partitioning for MapR-DB mappings that you run on the Spark engine.
- You can associate a cluster configuration with an HBase connection for MapR-DB. A cluster configuration is an object in the domain that contains configuration information about the Hadoop cluster. The cluster configuration enables the Data Integration Service to push mapping logic to the Hadoop environment.
For more information, see the Informatica PowerExchange for MapR-DB 10.2 User Guide.
PowerExchange for Microsoft Azure Blob Storage
Effective in version 10.2, you can read data from or write data to a subdirectory in Microsoft Azure Blob Storage. Use the Blob Container Override and Blob Name Override fields to specify the subdirectory.
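Blob storage has no true directories: a "subdirectory" is a prefix in the blob name, which is effectively what the override fields let you supply. A minimal sketch with the Azure Storage SDK for Java, with hypothetical container and blob names:

```java
import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;

public class BlobSubdirectorySketch {
    public static void main(String[] args) throws Exception {
        CloudStorageAccount account = CloudStorageAccount.parse(
                System.getenv("AZURE_STORAGE_CONNECTION_STRING"));
        CloudBlobClient client = account.createCloudBlobClient();

        // The container corresponds to the Blob Container Override value.
        CloudBlobContainer container = client.getContainerReference("sales");

        // "2017/july/orders.csv" looks like a subdirectory path, but it is
        // simply the blob's name; the prefix acts as a virtual directory.
        CloudBlockBlob blob = container.getBlockBlobReference("2017/july/orders.csv");
        blob.downloadToFile("/tmp/orders.csv");
    }
}
```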
For more information, see the Informatica PowerExchange for Microsoft Azure Blob Storage 10.2 User Guide.
PowerExchange for Microsoft Azure SQL Data Warehouse
Effective in version 10.2, you can run Microsoft Azure SQL Data Warehouse mappings in a Hadoop environment on Kerberos enabled clusters.
For more information, see the Informatica PowerExchange for Microsoft Azure SQL Data Warehouse 10.2 User Guide.
PowerExchange for Salesforce
Effective in version 10.2, you can use version 39 of the Salesforce API to create a Salesforce connection and access Salesforce objects.
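The adapter selects the API version internally when it builds the connection; the version is visible only in the SOAP endpoint URL. As an illustration, this is how a standalone Java client using the Salesforce Web Service Connector (WSC) library targets API version 39; the credentials are placeholders, and this is not Informatica code.

```java
import com.sforce.soap.partner.PartnerConnection;
import com.sforce.ws.ConnectorConfig;

public class SalesforceApi39Sketch {
    public static void main(String[] args) throws Exception {
        ConnectorConfig config = new ConnectorConfig();
        config.setUsername("user@example.com");           // placeholder
        config.setPassword("passwordPlusSecurityToken");  // placeholder
        // The API version is encoded in the SOAP endpoint path: /services/Soap/u/39.0
        config.setAuthEndpoint("https://login.salesforce.com/services/Soap/u/39.0");

        // Login happens in the constructor; subsequent calls run against API v39.
        PartnerConnection connection = new PartnerConnection(config);
        System.out.println("Connected as: " + connection.getUserInfo().getUserName());
    }
}
```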
For more information, see the Informatica PowerExchange for Salesforce 10.2 User Guide.
PowerExchange Adapters for PowerCenter
This section describes new PowerCenter adapter features in version 10.2.
PowerExchange for Amazon Redshift
Effective in version 10.2, PowerExchange for Amazon Redshift includes the following new features:
- You can read data from or write data to the China (Beijing) region.
- When you import objects from AmazonRSCloudAdapter in the PowerCenter Designer, the PowerCenter Integration Service lists the table names alphabetically.
- In addition to the existing recovery options in the vacuum table, you can select the Reindex option to analyze the distribution of the values in an interleaved sort key column.
- You can configure the multipart upload option to upload a single object as a set of independent parts. The TransferManager API uploads the parts of a single object to Amazon S3 in parallel. After the upload, Amazon S3 assembles the parts and creates the whole object. TransferManager uses multipart uploads to improve performance and increase throughput when the content size of the data is large and the bandwidth is high.
    You can configure the Part Size and TransferManager Thread Pool Size options in the target session properties. A minimal sketch of the underlying TransferManager mechanism appears after this list.
- PowerExchange for Amazon Redshift uses the commons-beanutils.jar file to address potential security issues when accessing properties. The commons-beanutils.jar file is in the following location:
    <Informatica installation directory>/server/bin/javalib/505100/commons-beanutils-1.9.3.jar
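The Part Size and TransferManager Thread Pool Size session properties map onto settings of the AWS SDK TransferManager. The following sketch shows the underlying mechanism; it is not Informatica code, and the bucket, key, file path, and sizes are hypothetical.

```java
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;
import java.io.File;
import java.util.concurrent.Executors;

public class MultipartUploadSketch {
    public static void main(String[] args) throws Exception {
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        TransferManager tm = TransferManagerBuilder.standard()
                .withS3Client(s3)
                // Roughly what the Part Size session property controls:
                .withMinimumUploadPartSize(64L * 1024 * 1024)    // 64 MB parts
                .withMultipartUploadThreshold(64L * 1024 * 1024) // multipart above 64 MB
                // Roughly what the TransferManager Thread Pool Size property controls:
                .withExecutorFactory(() -> Executors.newFixedThreadPool(10))
                .build();

        // Parts are uploaded in parallel; after the last part, S3 assembles
        // them into a single object.
        Upload upload = tm.upload("my-bucket", "big/object.dat", new File("/data/object.dat"));
        upload.waitForCompletion();

        tm.shutdownNow(false); // keep the underlying S3 client open
    }
}
```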
For more information, see the Informatica PowerExchange for Amazon Redshift 10.2 User Guide for PowerCenter.
PowerExchange for Amazon S3
Effective in version 10.2, PowerExchange for Amazon S3 includes the following new features:
- You can read data from or write data to the China (Beijing) region.
- You can read multiple files from Amazon S3 and write the data to a target.
- You can write multiple files to an Amazon S3 target from a single source. You can configure the Distribution Column options in the target session properties.
- When you create a mapping task to write data to Amazon S3 targets, you can configure partitions to improve performance. You can configure the Merge Partition Files option in the target session properties.
- You can specify a directory path that is available on the PowerCenter Integration Service in the Staging File Location property.
- You can configure the multipart upload option to upload a single object as a set of independent parts. The TransferManager API uploads the parts of a single object to Amazon S3 in parallel. After the upload, Amazon S3 assembles the parts and creates the whole object. TransferManager uses multipart uploads to improve performance and increase throughput when the content size of the data is large and the bandwidth is high.
    You can configure the Part Size and TransferManager Thread Pool Size options in the target session properties, as in the TransferManager sketch in the previous section.
For more information, see the Informatica PowerExchange for Amazon S3 10.2 User Guide for PowerCenter.
PowerExchange for Microsoft Dynamics CRM
Effective in version 10.2, PowerExchange for Microsoft Dynamics CRM includes the following new features:
- You can use the following target session properties:
    - Add row reject reason. Select to include the reason for rejection of rows in the reject file.
    - Alternate Key Name. Indicates whether the column is an alternate key for an entity. Specify the name of the alternate key. You can use an alternate key in update and upsert operations.
- You can configure PowerExchange for Microsoft Dynamics CRM to run on the AIX platform.
For more information, see the Informatica PowerExchange for Microsoft Dynamics CRM 10.2 User Guide for PowerCenter.
PowerExchange for SAP NetWeaver
Effective in version 10.2, PowerExchange for SAP NetWeaver includes the following new features:
- When you run ABAP mappings to read data from SAP tables, you can use the STRING, SSTRING, and RAWSTRING data types. The SSTRING data type is represented as SSTR in PowerCenter.
- When you read or write data through IDocs, you can use the SSTRING data type.
- When you run ABAP mappings to read data from SAP tables, you can configure HTTP streaming.
For more information, see the Informatica PowerExchange for SAP NetWeaver 10.2 User Guide for PowerCenter.