PowerExchange Adapters
Read this section to learn what's new for PowerExchange adapters in version 10.5.
PowerExchange Adapters for Informatica
Read this section to learn what's new for Informatica PowerExchange adapters in version 10.5.
PowerExchange for Amazon Redshift
Effective in version 10.5, PowerExchange for Amazon Redshift includes the following features:
- You can read and write data of the TIMESTAMPTZ data type when you run a mapping in the native environment.
- You can configure key range partitioning when you read data from an Amazon Redshift source and run the mapping in the native environment.
- You can configure dynamic partitioning when you write data to an Amazon Redshift target and run the mapping in the native environment.
For more information, see the Informatica 10.5 PowerExchange for Amazon Redshift User Guide.
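Key range partitioning divides a source read across partitions by ranges of a key column, so each partition reads a disjoint slice of the table. As a rough, hypothetical sketch of the idea (not the adapter's actual implementation), the per-partition filters might be built like this:

```python
def key_range_predicates(column, boundaries):
    """Build one WHERE predicate per partition from sorted key boundaries.

    boundaries: a sorted list of key values; N boundaries define N+1
    partitions, with an open-ended range at each end. This helper and its
    names are invented purely for illustration.
    """
    preds = []
    for i in range(len(boundaries) + 1):
        if i == 0:
            preds.append(f"{column} < {boundaries[0]}")
        elif i == len(boundaries):
            preds.append(f"{column} >= {boundaries[-1]}")
        else:
            preds.append(f"{column} >= {boundaries[i-1]} AND {column} < {boundaries[i]}")
    return preds

# Three boundaries on a hypothetical order_id key yield four partitions,
# each of which could be read concurrently.
predicates = key_range_predicates("order_id", [1000, 2000, 3000])
```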
PowerExchange for Amazon S3
Effective in version 10.5, PowerExchange for Amazon S3 includes the following features:
- When you run a mapping on the Spark engine, you can read data from and write data to Avro, ORC, and Parquet files that are partitioned based on directories.
- You can configure an Amazon S3-compatible storage to access and manage the data that is stored over an S3-compliant interface. You can use the Scality RING S3-compatible storage. Use the Amazon S3 connection to connect to Scality RING and perform read and write operations as you do with Amazon S3.
- You can configure Federated Single Sign-On (SSO) authentication to securely access the Amazon S3 resources.
- You can configure a cached lookup operation to cache the lookup data when you run a mapping on the Spark engine.
- You can read and write flat files with and without headers.
- You can configure the row delimiter and define the qualifier scope when you read and write flat files.
- You can perform audits for read operations in Amazon S3 mappings that run in the native environment or on the Spark engine.
For more information, see the Informatica 10.5 PowerExchange for Amazon S3 User Guide.
PowerExchange for Google BigQuery
Effective in version 10.5, PowerExchange for Google BigQuery includes the following features:
- You can write empty strings from a source as null values to a Google BigQuery target.
- You can use the Merge query to perform update, upsert, or delete operations in a single statement when you write to a Google BigQuery target.
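The Merge query corresponds to BigQuery's standard MERGE DML, which combines update, upsert, and delete logic in one statement. A hypothetical example of such a statement, with table and column names invented for illustration:

```python
# Sketch of a BigQuery MERGE statement that deletes matched rows flagged
# for deletion, updates the remaining matches, and inserts new rows --
# all in a single statement. The project, dataset, tables, and columns
# here are made up for illustration.
merge_stmt = """
MERGE `project.dataset.customers` AS tgt
USING `project.dataset.customer_updates` AS src
ON tgt.customer_id = src.customer_id
WHEN MATCHED AND src.is_deleted THEN
  DELETE
WHEN MATCHED THEN
  UPDATE SET tgt.name = src.name, tgt.email = src.email
WHEN NOT MATCHED THEN
  INSERT (customer_id, name, email)
  VALUES (src.customer_id, src.name, src.email)
"""
```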
For more information, see the Informatica 10.5 PowerExchange for Google BigQuery User Guide.
PowerExchange for Google Cloud Storage
Effective in version 10.5, PowerExchange for Google Cloud Storage includes the following features:
- You can read the complete path and names of the Google Cloud Storage source files.
- You can read and write flat files with and without headers.
- You can configure the row delimiter and define the qualifier scope when you read and write flat files.
For more information, see the Informatica 10.5 PowerExchange for Google Cloud Storage User Guide.
PowerExchange for HDFS
Effective in version 10.5, PowerExchange for HDFS includes the following features:
- When you run a mapping on the Spark engine, you can read data from and write data to Avro, ORC, and Parquet files that are partitioned based on directories.
- You can perform audits for read operations for complex files such as Avro, Parquet, and JSON in HDFS mappings that run in the native environment or on the Spark engine.
For more information, see the Informatica 10.5 PowerExchange for HDFS User Guide.
PowerExchange for Hive
Effective in version 10.5, you can perform audits for read and write operations in Hive mappings that run in the native environment or on the Spark engine.
For more information, see the Informatica 10.5 PowerExchange for Hive User Guide.
PowerExchange for JDBC V2
Effective in version 10.5, PowerExchange for JDBC V2 includes the following features:
- You can use JDBC V2 objects as dynamic sources and targets in mappings.
- You can configure a mapping to create a JDBC V2 target at run time.
- You can use a JDBC V2 connection with the SAP HANA Database subtype in mappings to read from or write to HANA tables. You can also read from HANA data modeling views, such as attribute, analytic, and calculation views. You can validate and run JDBC V2 mappings on the Spark or Databricks Spark engine.
- You can perform audits for read operations in JDBC V2 mappings that run in the native environment or on the Spark engine.
For more information, see the Informatica 10.5 PowerExchange for JDBC V2 User Guide.
PowerExchange for Microsoft Azure Blob Storage
Effective in version 10.5, PowerExchange for Microsoft Azure Blob Storage includes the following features:
- You can read and write flat files with and without headers.
- You can configure the row delimiter and define the qualifier scope when you read and write flat files.
For more information, see the Informatica 10.5 PowerExchange for Microsoft Azure Blob Storage User Guide.
PowerExchange for Microsoft Azure Data Lake Storage Gen1
Effective in version 10.5, PowerExchange for Microsoft Azure Data Lake Storage Gen1 includes the following features:
- You can read and write flat files with and without headers.
- You can configure the row delimiter and define the qualifier scope when you read and write flat files.
For more information, see the Informatica 10.5 PowerExchange for Microsoft Azure Data Lake Storage Gen1 User Guide.
PowerExchange for Microsoft Azure Data Lake Storage Gen2
Effective in version 10.5, PowerExchange for Microsoft Azure Data Lake Storage Gen2 includes the following features:
- When you run a mapping on the Spark engine, you can read data from and write data to Avro, ORC, and Parquet files that are partitioned based on directories.
- You can read and write flat files with and without headers.
- You can configure the row delimiter and define the qualifier scope when you read and write flat files.
For more information, see the Informatica 10.5 PowerExchange for Microsoft Azure Data Lake Storage Gen2 User Guide.
PowerExchange for Microsoft Azure SQL Data Warehouse
Effective in version 10.5, PowerExchange for Microsoft Azure SQL Data Warehouse includes the following features:
- You can connect to Microsoft Azure Data Lake Storage Gen2 to stage files at run time.
- You can stage files in Parquet format when you read data from Microsoft Azure SQL Data Warehouse.
- You can override the schema name and table name when you read data from and write data to Microsoft Azure SQL Data Warehouse.
- You can compress data in Gzip format when you write data to Microsoft Azure SQL Data Warehouse in the native environment.
- You can read and write data of the Datetimeoffset data type when you run a mapping on the Spark engine.
- You can perform audits for read operations in Microsoft Azure SQL Data Warehouse mappings that run in the native environment or on the Spark engine.
- You can read data from and write data to case-sensitive Microsoft Azure SQL Data Warehouse databases.
For more information, see the Informatica 10.5 PowerExchange for Microsoft Azure SQL Data Warehouse User Guide.
PowerExchange for Salesforce Marketing Cloud
Effective in version 10.5, you can configure dynamic mappings to accommodate frequent changes to sources, targets, and transformation logic at run time, based on parameters and rules that you define.
For more information, see the Informatica 10.5 PowerExchange for Salesforce Marketing Cloud User Guide.
PowerExchange for Snowflake
Effective in version 10.5, PowerExchange for Snowflake includes the following features:
- You can read from and write to a Snowflake data warehouse that is enabled for staging data in Google Cloud Platform.
- You can perform audits for read operations in Snowflake mappings that run in the native environment or on the Spark engine.
For more information, see the Informatica 10.5 PowerExchange for Snowflake User Guide.
PowerExchange Adapters for PowerCenter
Read this section to learn what's new for PowerCenter adapters in version 10.5.
PowerExchange for Google BigQuery
Effective in version 10.5, PowerExchange for Google BigQuery includes the following features:
- When you perform pushdown optimization using the Google BigQuery ODBC connection, you can use the 2.2.5.1012 version of the Informatica ODBC Driver for Google BigQuery to connect to Google BigQuery and push CHR(), DATEDIFF(), DECODE(), LPAD(), and RPAD() functions to the Google BigQuery database.
  Contact Informatica Global Customer Support to obtain the Informatica ODBC Driver for Google BigQuery.
- You can use the Merge query to perform update, upsert, or delete operations in a single statement when you write to a Google BigQuery target.
- When you configure a session to write data to Google BigQuery, you can write empty strings from a source as null values to a Google BigQuery target.
For more information, see the Informatica 10.5 PowerExchange for Google BigQuery User Guide for PowerCenter.
PowerExchange for Google Cloud Storage
Effective in version 10.5, PowerExchange for Google Cloud Storage includes the following features:
- You can read the complete path and names of the Google Cloud Storage source files.
- You can read and write flat files with and without headers.
- You can configure the row delimiter and define the qualifier scope when you read and write flat files.
For more information, see the PowerExchange for Google Cloud Storage 10.5 User Guide for PowerCenter.
PowerExchange for Greenplum
Effective in version 10.5, when the ODBC subtype in an ODBC connection is Greenplum, you can push down functions, such as AVG(), COUNT(), DATE_COMPARE(), DATE_DIFF(), GET_DATE_PART(), IN(), ISNULL(), MAX(), MIN(), MOD(), STDDEV(), SUM(), and VARIANCE(), to the Greenplum database for processing using full pushdown optimization.
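Full pushdown optimization means such a function runs inside the database, so only the result crosses the wire instead of every source row. A minimal conceptual illustration of the difference, using SQLite as a stand-in for Greenplum:

```python
import sqlite3

# In-memory stand-in database with a small hypothetical sales table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("east", 100.0), ("east", 300.0), ("west", 50.0)])

# Without pushdown: fetch every row and compute the aggregate in the engine.
rows = conn.execute("SELECT amount FROM sales").fetchall()
avg_in_engine = sum(r[0] for r in rows) / len(rows)

# With full pushdown: the AVG() executes inside the database and only a
# single result row is returned.
avg_pushed = conn.execute("SELECT AVG(amount) FROM sales").fetchone()[0]

assert avg_in_engine == avg_pushed == 150.0
```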
For more information, see the Informatica PowerCenter 10.5 Advanced Workflow Guide.
PowerExchange for Kafka
Effective in version 10.5, you can configure whether you want to read messages from a Kafka broker as a stream or in batches.
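The distinction between the two modes, in conceptual terms: stream mode keeps consuming messages as they arrive, while batch mode reads a bounded set and stops. A toy sketch of that distinction (not the adapter's API; plain Python iterators stand in for the Kafka broker):

```python
def read_batch(messages, batch_size):
    """Batch mode: consume up to batch_size messages, then stop."""
    batch = []
    for msg in messages:
        batch.append(msg)
        if len(batch) == batch_size:
            break
    return batch

def read_stream(messages, handle):
    """Stream mode: keep consuming and handling messages as they arrive."""
    for msg in messages:
        handle(msg)

# Hypothetical message feed standing in for a Kafka topic.
out = []
read_stream(iter(["a", "b", "c"]), out.append)
first_two = read_batch(iter(["a", "b", "c"]), 2)
```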
For more information, see the Informatica PowerExchange for Kafka 10.5 User Guide for PowerCenter.
PowerExchange for Snowflake
Effective in version 10.5, you can read from and write to a Snowflake data warehouse that is enabled for staging data in Google Cloud Platform.
For more information, see the Informatica 10.5 PowerExchange for Snowflake User Guide for PowerCenter.