PowerExchange Adapters for Informatica
This section describes new Informatica adapter features in version 10.2.1.
PowerExchange for Amazon Redshift
Effective in version 10.2.1, PowerExchange for Amazon Redshift includes the following features:
- You can configure a cached lookup operation to cache the lookup table on the Spark engine and an uncached lookup operation in the native environment.
- For server-side encryption, you can configure the customer master key ID generated by AWS Key Management Service in the connection, in both the native environment and on the Spark engine.
For more information, see the Informatica PowerExchange for Amazon Redshift 10.2.1 User Guide.
PowerExchange for Amazon S3
Effective in version 10.2.1, PowerExchange for Amazon S3 includes new features.
For more information, see the Informatica PowerExchange for Amazon S3 10.2.1 User Guide.
PowerExchange for Cassandra
Effective in version 10.2.1, the Informatica Cassandra ODBC driver supports asynchronous write.
To enable asynchronous write on a Linux operating system, you must add the EnableAsynchronousWrites key name in the odbc.ini file and set the value to 1.
To enable asynchronous write on a Windows operating system, you must add the EnableAsynchronousWrites property in the Windows registry for the Cassandra ODBC data source name and set the value to 1.
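As an illustration, the Linux configuration described above might look like the following odbc.ini entry. Only the EnableAsynchronousWrites key is prescribed by this section; the data source name, driver path, host, and port shown here are placeholders that vary by installation.

```ini
[Cassandra_DSN]
; Placeholder driver path and connection settings - adjust for your installation
Driver=/path/to/cassandra/odbc/driver.so
Host=localhost
Port=9042
; Enable asynchronous write, as described above
EnableAsynchronousWrites=1
```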
For more information, see the Informatica PowerExchange for Cassandra 10.2.1 User Guide.
PowerExchange for HBase
Effective in version 10.2.1, you can use an HBase data object read operation to look up data in an HBase resource. Run the mapping in the native environment or on the Spark engine to look up data in an HBase resource. You can enable lookup caching and also parameterize the lookup condition.
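As a conceptual sketch of why lookup caching helps, the following Python snippet shows a lookup source read once into an in-memory cache that incoming records then probe, instead of querying the resource per record. This is an illustration only, not the Informatica or HBase API; all names and data are hypothetical.

```python
def build_lookup_cache(rows, key_field):
    """Read the lookup source once and index its rows by the lookup key."""
    cache = {}
    for row in rows:
        cache.setdefault(row[key_field], []).append(row)
    return cache

# Hypothetical lookup source, loaded once into memory.
lookup_source = [
    {"row_key": "cust-1", "city": "Austin"},
    {"row_key": "cust-2", "city": "Boston"},
]
cache = build_lookup_cache(lookup_source, "row_key")

# Each incoming record probes the cache instead of the lookup resource.
incoming = [{"id": "cust-2"}, {"id": "cust-3"}]
matches = [cache.get(rec["id"], []) for rec in incoming]
```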
The lookup feature for PowerExchange for HBase is available for technical preview. Technical preview functionality is supported but is not production-ready. Informatica recommends that you use this functionality in non-production environments only.
For more information, see the Informatica PowerExchange for HBase 10.2.1 User Guide.
PowerExchange for HDFS
Effective in version 10.2.1, you can use the following new PowerExchange for HDFS features:
- Intelligent structure model support for complex file data objects
You can incorporate an intelligent structure model in a complex file data object. When you add the data object to a mapping that runs on the Spark engine, you can process any input type that the model can parse.
The intelligent structure model feature for PowerExchange for HDFS is available for technical preview. Technical preview functionality is supported but is not production-ready. Informatica recommends that you use this functionality in non-production environments only.
For more information, see the Informatica PowerExchange for HDFS 10.2.1 User Guide.
- Dynamic mapping support for complex file sources
You can use complex file sources as dynamic sources in a mapping.
Dynamic mapping support for complex file sources is available for technical preview. Technical preview functionality is supported but is not production-ready. Informatica recommends that you use this functionality in non-production environments only.
For more information about dynamic mappings, see the Informatica Developer Mapping Guide.
PowerExchange for Hive
Effective in version 10.2.1, PowerExchange for Hive supports mappings that run PreSQL and PostSQL queries against Hive sources and targets on the Spark engine.
For more information, see the Informatica PowerExchange for Hive 10.2.1 User Guide.
PowerExchange for Microsoft Azure Blob Storage
Effective in version 10.2.1, PowerExchange for Microsoft Azure Blob Storage includes the following functionality:
- You can run mappings on the Spark engine.
- You can read and write .csv, Avro, and Parquet files when you run a mapping on the Spark engine and in the native environment.
- You can read and write JSON and intelligent structure files when you run a mapping on the Spark engine.
- You can read a directory when you run a mapping on the Spark engine.
- You can generate or skip header rows when you run a mapping in the native environment. On the Spark engine, the header row is created by default.
- You can append to an existing blob. The append operation applies only to append blobs and only in the native environment.
- You can override the blob or container name. In the Blob Container Override field, specify the container name or sub-folders in the root container with the absolute path.
- You can read and write .csv files compressed in the gzip format.
All new functionality for PowerExchange for Microsoft Azure Blob Storage is available for technical preview. Technical preview functionality is supported but is not production-ready. Informatica recommends that you use this functionality in non-production environments only.
For more information, see the Informatica PowerExchange for Microsoft Azure Blob Storage 10.2.1 User Guide.
PowerExchange for Microsoft Azure SQL Data Warehouse
Effective in version 10.2.1, PowerExchange for Microsoft Azure SQL Data Warehouse includes the following features:
- You can run mappings on the Spark engine.
- You can configure key range partitioning when you read data from Microsoft Azure SQL Data Warehouse objects.
- You can override the SQL query and define constraints when you read data from a Microsoft Azure SQL Data Warehouse object.
- You can configure pre-SQL and post-SQL queries for source and target objects in a mapping.
- You can configure the native expression filter for the source data object operation.
- You can perform update, upsert, and delete operations against Microsoft Azure SQL Data Warehouse tables.
- You can configure a cached lookup operation to cache the lookup table on the Spark engine and an uncached lookup operation in the native environment.
For more information, see the Informatica PowerExchange for Microsoft Azure SQL Data Warehouse 10.2.1 User Guide.
PowerExchange for Salesforce
Effective in version 10.2.1, you can use version 41 of the Salesforce API to create a Salesforce connection and access Salesforce objects. You can use big objects with source and target transformations.
For more information, see the Informatica PowerExchange for Salesforce 10.2.1 User Guide.
PowerExchange for SAP NetWeaver
Effective in version 10.2.1, you can run mappings on the Spark engine to read data from SAP tables.
For more information, see the Informatica PowerExchange for SAP NetWeaver 10.2.1 User Guide.
PowerExchange for Snowflake
Effective in version 10.2.1, PowerExchange for Snowflake includes the following features:
- You can configure a lookup operation on a Snowflake table. You can also enable lookup caching for a lookup operation to increase lookup performance. The Data Integration Service caches the lookup source and runs the query on the rows in the cache.
- You can parameterize the Snowflake connection properties and the data object read and write operation properties.
- You can configure key range partitioning for Snowflake data objects in a read or write operation. The Data Integration Service distributes the data based on the port or set of ports that you define as the partition key.
- You can specify a table name in the advanced target properties to override the table name in the Snowflake connection properties.
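As a conceptual illustration of the key range partitioning described above, the following Python sketch assigns each row to a partition according to which range its partition-key value falls in. This is not the Informatica API; the boundaries, field names, and data are hypothetical.

```python
import bisect

# Hypothetical range boundaries on an order_id partition key:
# partition 0: < 1000, partition 1: 1000-1999, partition 2: >= 2000.
boundaries = [1000, 2000]

def partition_for(key_value):
    """Return the partition index for a partition-key value."""
    return bisect.bisect_right(boundaries, key_value)

# Distribute rows into partitions based on the partition key.
rows = [{"order_id": 42}, {"order_id": 1500}, {"order_id": 7000}]
partitions = {}
for row in rows:
    partitions.setdefault(partition_for(row["order_id"]), []).append(row)
```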
For more information, see the Informatica PowerExchange for Snowflake 10.2.1 User Guide.