New Features (9.6.1 HotFix 2)
This section describes new features in version 9.6.1 HotFix 2.
Big Data
This section describes new big data features in version 9.6.1 HotFix 2.
Informatica Analyst
Big Data Edition has the following new features and enhancements for the Analyst tool:
- Analyst tool integration with Hadoop
Effective in version 9.6.1 HotFix 2, you can enable the Analyst tool to communicate with a Hadoop cluster on a specific Hadoop distribution. You must configure the JVM Command Line Options for the Analyst Service, as shown in the sketch after this list.
For more information, see the Informatica 9.6.1 HotFix 2 Application Services Guide.
- Analyst tool connections
Effective in version 9.6.1 HotFix 2, you can use the Analyst tool to connect to Hive or HDFS sources and targets.
For more information, see the Informatica 9.6.1 HotFix 2 Analyst User Guide.
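The following sketch shows how the JVM Command Line Options property of the Analyst Service might point to a Hadoop distribution. The option name and directory path are illustrative assumptions, not documented values; use the values that the Informatica 9.6.1 HotFix 2 Big Data Edition Installation and Configuration Guide specifies for your distribution.
-DINFA_HADOOP_DIST_DIR=<Informatica installation directory>/services/shared/hadoop/cloudera_cdh5u2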
Data Warehousing
Big Data Edition has the following new features and enhancements for data warehousing:
- Binary Data Type
Effective in version 9.6.1 HotFix 2, a mapping in the Hive environment can process expression functions that use binary data.
For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.
- Timestamp and Date Data Type
Effective in version 9.6.1 HotFix 2, PowerExchange for Hive supports the Timestamp and Date data types.
For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.
- File Format
Effective in version 9.6.1 HotFix 2, you can use the Data Processor transformation to read Parquet input or output.
Apache Parquet is a columnar storage format that can be processed in a Hadoop environment. Parquet is designed to handle complex nested data structures and uses a record shredding and assembly algorithm.
For more information, see the Informatica 9.6.1 HotFix 2 Data Transformation User Guide.
Data Lineage
Effective in version 9.6.1 HotFix 2, you can perform data lineage analysis on big data sources and targets. You can create a Cloudera Navigator resource to extract metadata for big data sources and targets and perform data lineage analysis on the metadata.
For more information, see the Informatica 9.6.1 HotFix 2 Metadata Manager Administrator Guide.
Hadoop Ecosystem
Big Data Edition has the following new features and enhancements for the Hadoop ecosystem:
- Hadoop Distributions
Effective in version 9.6.1 HotFix 2, Big Data Edition added support for the following Hadoop distributions:
- - Cloudera CDH 5.2
- - Hortonworks HDP 2.2
- - IBM BigInsights 3.0.0.0
- - Pivotal HD 2.1
Big Data Edition dropped support for the following Hadoop distributions:
- - Cloudera CDH 5.0
- - Cloudera CDH 5.1
- - Hortonworks HDP 2.1
- - Pivotal HD 1.1
For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition Installation and Configuration Guide.
Effective in version 9.6.1 HotFix 2, Big Data Edition supports Cloudera CDH clusters on Amazon EC2.
- Kerberos Authentication
Effective in version 9.6.1 HotFix 2, you can configure user impersonation for the native environment. Configure user impersonation to enable different users to run mappings or connect to big data sources and targets that use Kerberos authentication.
For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.
Performance Optimization
Big Data Edition has the following new features for performance optimization:
- Compress data on temporary staging tables
Effective in version 9.6.1 HotFix 2, you can enable data compression on temporary staging tables to optimize performance when you run a mapping in the Hive environment. Compressing the staging data can improve mapping performance.
To enable data compression on temporary staging tables, configure the Hive connection to use the codec class name that the Hadoop cluster uses, and configure the Hadoop cluster to enable compression on temporary staging tables. A configuration sketch appears after this list.
For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.
- Parallel sort
Effective in version 9.6.1 HotFix 2, when you use a Sorter transformation in a mapping, the Data Integration Service enables parallel sorting by default when it pushes the mapping logic to the Hadoop cluster.
For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.
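The following sketch shows what the staging-table compression configuration might look like, assuming the cluster uses the Snappy codec. The Hive connection property names and the cluster-side settings are illustrative assumptions; use the property names that the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide specifies and the codec that your Hadoop administrator configures.
Hive connection (illustrative property names):
Temporary Table Compression Codec: Custom
Codec Class Name: org.apache.hadoop.io.compress.SnappyCodec
Hadoop cluster (typical Hive settings to compress intermediate data):
hive.exec.compress.intermediate=true
hive.intermediate.compression.codec=org.apache.hadoop.io.compress.SnappyCodec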
Profile Run on Hadoop Sources in Informatica Analyst
Effective in version 9.6.1 HotFix 2, you can create and run a column profile, rule profile, and data domain discovery on Hive and HDFS sources in the Analyst tool.
For more information, see the Informatica 9.6.1 HotFix 2 Big Data Edition User Guide.
Business Glossary
This section describes new Business Glossary features in version 9.6.1 HotFix 2.
- Refresh Asset
Effective in version 9.6.1 HotFix 2, you can refresh an asset in the Glossary workspace. Refresh the asset to view updates to the properties that content managers made after you opened the asset.
For more information, see the Informatica 9.6.1 HotFix 2 Business Glossary Guide.
- Alert for Duplicate Asset Name
Effective in version 9.6.1 HotFix 2, the Analyst tool displays an alert when you try to create an asset with a name that already exists in the glossary. You can ignore the alert and create the asset with a duplicate name.
For more information, see the Informatica 9.6.1 HotFix 2 Business Glossary Guide.
- LDAP Authentication in Business Glossary Desktop
Effective in version 9.6.1 HotFix 2, you can specify an LDAP domain when you configure the server settings that enable the Business Glossary Desktop client to reference the business glossary on a machine that hosts the Analyst Service.
For more information, see the Informatica 9.6.1 HotFix 2 Business Glossary Desktop Installation and Configuration Guide.
Command Line Programs
This section describes new and changed commands and options for the Informatica command line programs in version 9.6.1 HotFix 2.
isp Command
Effective in version 9.6.1 HotFix 2, the following table describes an updated isp command:
Command | Description
--- | ---
UpdateGrid | Contains the following new option: -ul. Optional. Updates the current node list with the values in the -nl option instead of replacing the list of nodes previously assigned to the grid. If true, infacmd adds the nodes that you specify with the -nl option to the nodes previously assigned to the grid. If false, infacmd replaces the node list with the nodes that you specify with the -nl option. Default is false. Also contains the following updated option: -nl. Required. Names of the nodes that you want to assign to the grid. If you specify the -ul option, the -nl option updates the list of nodes previously assigned to the grid. If you do not specify the -ul option, the -nl option replaces the list of nodes previously assigned to the grid.
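For example, the following command adds nodes to an existing grid without removing the nodes that are already assigned to it. The domain, grid, and node names are placeholders, and the command is shown with the standard infacmd authentication options; adjust the values for your environment.
infacmd isp UpdateGrid -dn MyDomain -un Administrator -pd MyPassword -gn MyGrid -nl Node3 Node4 -ul true
Because the -ul option is set to true, Node3 and Node4 are added to the nodes previously assigned to MyGrid instead of replacing them.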
Data Quality Accelerators
This section describes new accelerator features in version 9.6.1 HotFix 2.
- Updated reference data sets
Effective in version 9.6.1 HotFix 2, Informatica updates the reference data sets that the accelerator rules use to analyze and enhance data.
For more information, see the Informatica Data Quality 9.6.1 HotFix 2 Accelerator Guide.
Informatica Developer
This section describes new Informatica Developer features in version 9.6.1 HotFix 2.
- Microsoft SQL Server Datetime2 Data Type
Effective in version 9.6.1 HotFix 2, Informatica Developer supports the Microsoft SQL Server Datetime2 data type. The Datetime2 data type can store a range of values from Jan 1, 0001 A.D. 00:00:00 to Dec 31, 9999 A.D. 23:59:59.9999999.
Informatica Domain
This section describes new Informatica domain features in version 9.6.1 HotFix 2.
- Informatica on Amazon EC2
Effective in version 9.6.1 HotFix 2, you can set up and launch Informatica services with multiple nodes on Amazon EC2. You can launch an Informatica domain that contains up to four nodes.
- Informatica DiscoveryIQ
Effective in version 9.6.1 HotFix 2, Informatica DiscoveryIQ, a product usage tool, sends routine reports on data usage and system statistics to Informatica. Data collection and upload is enabled by default. You can choose not to send any usage statistics to Informatica.
Informatica Transformations
This section describes new Informatica transformation features in version 9.6.1 HotFix 2.
Address Validator Transformation
This section describes the new features on the Address Validator transformation in version 9.6.1 HotFix 2.
- Support for Taiwan addresses in the Mandarin Traditional Chinese script
Effective in version 9.6.1 HotFix 2, you can use the Address Validator transformation to validate Taiwan addresses in the Mandarin Traditional Chinese script. You can use ports from the Discrete or Multiline group to define the input address.
To enter a Mandarin Traditional Chinese address on a single line, use the Formatted Address Line 1 port.
- Enhancements to United States address validation
Effective in version 9.6.1 HotFix 2, the Address Validator transformation returns the county name when the address contains a valid ZIP code and locality. The transformation can add the county name regardless of an Ix match status for the address. The transformation adds the name to a Province output port. If the state identifier is absent from the address, the transformation adds the state identifier to a Province port.
When you validate an address that contains hyphenated house numbers, the transformation moves the second part of the house number to a Sub-building port.
- Configurable output format for element descriptors
Effective in version 9.6.1 HotFix 2, you can configure the Address Validator transformation to specify the output format for the following elements:
- - Street, building, and sub-building descriptors in Australia and New Zealand addresses
- - Street descriptors in German addresses
By default, the transformation returns the descriptor that the reference database specifies for the address. To specify the output format for the descriptors, configure the Global Preferred Descriptor property on the transformation.
- Support for Address Key codes in United Kingdom addresses
Effective in version 9.6.1 HotFix 2, you can return the address key for a United Kingdom address. The address key is an eight-digit numeric code that identifies the address in the Postcode Address File from the Royal Mail. To add the address key to an address, select the Address Key port. To return the address key, the transformation reads supplementary reference data for the United Kingdom.
- Extended data support for Japan
Effective in version 9.6.1 HotFix 2, the Address Validator transformation can validate Ban, or block, information in a Japanese address. The Address Validator transformation writes the data to the Street Name 2 port or an equivalent port for dependent street data.
A Japanese address lists the address elements in order of size, from the largest or most general unit to the smallest or most specific unit. The Ban element follows the Chome element and precedes the Go element in the address.
- Enhancements to Japan address validation
Effective in version 9.6.1 HotFix 2, you can configure the Address Validator transformation to add the Gaiku code to a Japanese address. To add the code to the address, select the Gaiku Code port.
You can combine the current Choumei Aza code and the Gaiku code in a single string and return the address that the codes identify. To return the complete address, select the Choumei Aza and Gaiku Code JP port and configure the transformation to run in address code lookup mode.
The Japanese reference data contains the Gaiku code, the current Choumei Aza code, and any earlier version of the Choumei Aza code for the address. When you set the Matching Extended Archive property to ON, the transformation writes all of the codes to the output address.
- Support for seven-digit postal codes in Israel
Effective in version 9.6.1 HotFix 2, the Address Validator transformation supports the seven-digit postal codes that Israel Post defines for addresses in Israel. The seven-digit postal codes replace the five-digit postal codes that Israel Post previously defined. For example, the seven-digit postal code for Nazareth in Israel is 1623726. Previously, the postal code for Nazareth was 16237.
- Enhancement to address validation in Germany, Austria, and Switzerland
Effective in version 9.6.1 HotFix 2, the Address Validator transformation recognizes keywords, such as Zimmer and App, in the Street Number ports for addresses from Germany, Austria, and Switzerland. The Address Validator transformation writes the keywords to sub-building ports in the output address.
- Support for the IRIS code in French addresses
Effective in version 9.6.1 HotFix 2, you can configure the Address Validator transformation to add the IRIS code to an address in France. To add the code to the address, select the INSEE-9 Code output port.
An IRIS code uniquely identifies a statistical unit in a commune in France. INSEE, or the National Institute for Statistics and Economic Research in France, defines the codes. France has approximately 16,000 IRIS units.
- Support for rooftop geocoding in the United Kingdom
Effective in version 9.6.1 HotFix 2, you can configure the Address Validator transformation to return rooftop-level geocodes for United Kingdom addresses. Rooftop geocodes identify the center of the primary building on a site or a parcel of land.
To generate the rooftop geocodes, set the Geocode Data Type property on the transformation to Arrival Point. You must also install the Arrival Point reference data for the United Kingdom.
- Improved address reference data for Spain
Effective in version 9.6.1 HotFix 2, Informatica updates the address reference data for Spain. The Address Validator transformation can use the address reference data to validate sub-building-level information in Spanish addresses.
- Improved address validation and address reference data for Turkey
Effective in version 9.6.1 HotFix 2, Informatica updates the address reference data for Turkey.
The Address Validator transformation can also perform the following operations when it validates Turkish addresses:
- - The transformation can identify a building name and a street name on the Delivery Address Line 1 port.
- - The transformation adds a slash symbol (/) between a building element and a sub-building element when the sub-building element is a number.
- Improved address validation for Brazil
Effective in version 9.6.1 HotFix 2, Informatica adds the following improvements to address validation for addresses in Brazil:
- - The Address Validator transformation can add a third level of sub-building information to the Delivery Address Line and Formatted Address Line ports. The Brazil address system contains three levels of sub-building information.
- - The Address Validator transformation validates kilometer information on the Street Additional Info port.
Note: The Address Validator transformation uses a comma, and not a decimal point, in kilometer information for Brazil.
For more information, see the Informatica 9.6.1 HotFix 2 Address Validator Port Reference and the Informatica 9.6.1 HotFix 2 Developer Transformation Guide.
Data Processor Transformation
This section describes the new features in the Data Processor transformation in version 9.6.1 HotFix 2.
- RunMapplet
The RunMapplet action calls and runs a mapplet as part of a Data Processor transformation. The output of RunMapplet is read into the data holder specified in the RunMapplet action. Use the RunMapplet action to perform tasks such as data masking, data quality, data lookup, and other activities usually related to relational transformations.
- Validation Rules Editor
You can use the Validation Rules editor to create user-defined rules that validate XML data. If the data violates the rules, the action generates an XML validation report.
- Parquet Input or Output
Use the New Transformation wizard to create a Data Processor transformation with Parquet input or output.
- Create an XMap Variable for the XMap Source or Target
You can create an XMap variable to serve as the XMap source or target.
For more information, see the Informatica 9.6.1 HotFix 2 Data Transformation User Guide.
Metadata Manager
This section describes new Metadata Manager features in version 9.6.1 HotFix 2.
Cloudera Navigator Resources
Effective in version 9.6.1 HotFix 2, you can create and configure a Cloudera Navigator resource to extract metadata from the metadata component of Cloudera Navigator. You can create one Cloudera Navigator resource for each Hadoop cluster that is managed by Cloudera Manager.
For more information about creating and configuring Cloudera Navigator resources, see the Informatica 9.6.1 HotFix 2 Metadata Manager Administrator Guide.
For more information about supported metadata source versions, see the PCAE Metadata Manager XConnect Support Product Availability Matrix on Informatica Network:
https://network.informatica.com/community/informatica-network/product-availability-matrices/overview
Microsoft SQL Server Integration Services (SSIS) Resources
Effective in version 9.6.1 HotFix 2, you can create and configure a Microsoft SQL Server Integration Services resource to extract metadata from Microsoft SQL Server Integration Services packages. Metadata Manager can extract metadata from packages in the Microsoft SQL Server repository or from a package (.dtsx) file.
For more information about creating and configuring Microsoft SQL Server Integration Services resources, see the Informatica 9.6.1 HotFix 2 Metadata Manager Administrator Guide.
For more information about supported metadata source versions, see the PCAE Metadata Manager XConnect Support Product Availability Matrix on Informatica Network:
https://network.informatica.com/community/informatica-network/product-availability-matrices/overview
Embarcadero ERStudio Resources
Effective in version 9.6.1 HotFix 2, you can prevent Metadata Manager from importing attachments from Embarcadero ERStudio. Attachments are also called user-defined properties, or UDPs. To prevent Metadata Manager from importing UDPs, enable the Skip UDP Extraction property when you configure the resource.
For more information about configuring Embarcadero ERStudio resources, see the Informatica 9.6.1 HotFix 2 Metadata Manager Administrator Guide.
PowerCenter Resources
Effective in version 9.6.1 HotFix 2, you can create and load a PowerCenter resource when the PowerCenter repository database type is IBM DB2 for LUW and the database user name differs from the schema name. To specify a schema name that differs from the database user name, enter the schema name in the Schema Name property when you configure the PowerCenter resource.
For more information about configuring PowerCenter resources, see the Informatica 9.6.1 HotFix 2 Metadata Manager Administrator Guide.
PowerCenter Flat Files in the Impact Summary
Effective in version 9.6.1 HotFix 2, the impact summary lists the flat files that are used in PowerCenter resources.
For more information about viewing the impact summary, see the Informatica 9.6.1 HotFix 2 Metadata Manager User Guide.
PowerCenter
This section describes new PowerCenter features in version 9.6.1 HotFix 2.
PowerCenter Upgrade
Effective in version 9.6.1 HotFix 2, PowerCenter preserves the AD50.cfg file when you upgrade from a hotfix or a base release of the same version. The upgrade operation preserves an AD50.cfg file in the server/bin directory and creates an empty configuration file named AD50.cfg.bak in the same directory.
When you upgrade from an earlier PowerCenter version, the upgrade operation writes an empty AD50.cfg file to the server/bin directory. The upgrade operation creates a backup copy of any AD50.cfg file that it finds in the directory.
For more information, see the Informatica 9.6.1 HotFix 2 Upgrade Guides.
PowerExchange
This section describes new PowerExchange features in version 9.6.1 HotFix 2.
PowerExchange infacmd pwx Commands
A new parameter is available for some PowerExchange Logger Service infacmd pwx commands.
The infacmd pwx CreateLoggerService and infacmd pwx UpdateLoggerService commands can now include the following optional startup parameter in the -StartParameters option:
- encryptepwd=encryption_password
A password in encrypted format that enables the encryption of PowerExchange Logger log files. When this password is specified, the PowerExchange Logger can generate a unique encryption key for each Logger log file. The password is stored in the CDCT file in encrypted format. The password is not stored in CDCT backup files and is not displayed in CDCT reports that you generate with the PowerExchange PWXUCDCT utility. To use this encryption password, you must also specify coldstart=Y in the -StartParameters option.
For more information, see the Informatica 9.6.1 HotFix 2 Command Reference.
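For example, the following command might be used to cold start a PowerExchange Logger Service with log-file encryption enabled. The domain, service, and password values are placeholders, the general options shown are the standard infacmd options, and the startup parameters are shown space-separated within quotation marks; confirm the exact -StartParameters syntax in the Informatica 9.6.1 HotFix 2 Command Reference.
infacmd pwx UpdateLoggerService -dn MyDomain -un Administrator -pd MyPassword -sn MyLoggerService -StartParameters "coldstart=Y encryptepwd=MyEncryptedPassword"
The encryptepwd value must be a password in encrypted format, as described above.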
Encryption of PowerExchange Logger Log Files
You can now encrypt PowerExchange Logger Service log files to prevent unauthorized access to sensitive data that is stored in the log files.
To enable log-file encryption for a PowerExchange Logger Service, specify an encryption password in the startup parameters for a cold start of the PowerExchange Logger Service. You enter the encryption password in one of the following ways:
- In the infacmd pwx CreateLoggerService or infacmd pwx UpdateLoggerService command, add the encryptepwd parameter in the -StartParameters option.
- In the Informatica Administrator, edit the PowerExchange Logger Service configuration properties. In the Start Parameters property, add the encryptepwd parameter.
Note: The PowerExchange Logger uses AES encryption algorithms. You can set the type of AES algorithm in the ENCRYPTOPT statement of the PowerExchange Logger configuration file.
PowerExchange Adapters
This section describes new PowerExchange adapter features in version 9.6.1 HotFix 2.
PowerExchange Adapters for Informatica
This section describes new Informatica adapter features in version 9.6.1 HotFix 2.
PowerExchange for Cassandra
Effective in version 9.6.1 HotFix 2, you can tune consistency levels when you read data from or write data to a Cassandra database. The consistency level determines how data is synchronized across all replicas. Based on your requirements for data accuracy and response time, you can set the appropriate consistency level.
For more information, see the Informatica PowerExchange for Cassandra 9.6.1 HotFix 2 User Guide.
PowerExchange for LinkedIn
Effective in version 9.6.1 HotFix 2, PowerExchange for LinkedIn secures all API calls to LinkedIn by using HTTPS URLs.
For more information, see the Informatica PowerExchange for LinkedIn 9.6.1 HotFix 2 User Guide.
PowerExchange for DataSift
Effective in version 9.6.1 HotFix 2, PowerExchange for DataSift has the following new features and enhancements:
- You can retrieve data from the DataSift buffer.
- You can pause and resume the Historics query.
- You can set the maximum number of attempts to re-establish a connection to DataSift if a connection fails.
For more information, see the Informatica PowerExchange for DataSift 9.6.1 HotFix 2 User Guide.
PowerExchange for Hive
Effective in version 9.6.1 HotFix 2, PowerExchange for Hive has the following new features and enhancements:
- You can use user-defined functions in Informatica to transform the Binary data type in a Hive environment.
- PowerExchange for Hive processes sources and targets that contain the Timestamp data type. The Timestamp data type format is YYYY-MM-DD HH:MM:SS.fffffffff. The Timestamp data type has a precision of 29 and a scale of 9.
- PowerExchange for Hive processes sources and targets that contain the Date data type. The Date data type has a range of 0000-01-01 to 9999-12-31. The format is YYYY-MM-DD. The Date data type has a precision of 10 and a scale of 0.
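For example, in these formats a Timestamp value appears as 2015-03-26 14:30:05.123456789 and a Date value appears as 2015-03-26.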
For more information, see the Informatica PowerExchange for Hive 9.6.1 HotFix 2 User Guide.
PowerExchange for MongoDB
Effective in version 9.6.1 HotFix 2, the MongoDB ODBC driver creates a virtual table for each column that contains arrays or nested arrays. You can use the MongoDB ODBC driver to read up to five levels of nested columns and write up to three levels of nested columns.
For more information, see the Informatica PowerExchange for MongoDB 9.6.1 HotFix 2 User Guide.
PowerExchange for Salesforce
Effective in version 9.6.1 HotFix 2, PowerExchange for Salesforce has the following new features and enhancements:
- You can configure PowerExchange for Salesforce to capture changed data from a Salesforce object that is replicateable and contains the CreatedDate and SystemModstamp fields.
- You can use PowerExchange for Salesforce to connect to Salesforce API v30 and v31.
- The Data Integration Service can push Filter transformation logic to Salesforce sources.
For more information, see the Informatica PowerExchange for Salesforce 9.6.1 HotFix 2 User Guide.
PowerExchange Adapters for PowerCenter
This section describes new PowerCenter adapter features in version 9.6.1 HotFix 2.
PowerExchange for Cassandra
Effective in version 9.6.1 HotFix 2, you can tune consistency levels when you read data from or write data to a Cassandra database. The consistency level determines how data is synchronized across all replicas. Based on your requirements for data accuracy and response time, you can set the appropriate consistency level.
For more information, see the Informatica PowerExchange for Cassandra 9.6.1 HotFix 2 User Guide for PowerCenter.
PowerExchange for MongoDB
Effective in version 9.6.1 HotFix 2, the MongoDB ODBC driver creates a virtual table for each column that contains arrays or nested arrays. You can use the MongoDB ODBC driver to read up to five levels of nested columns and write up to three levels of nested columns.
For more information, see the Informatica PowerExchange for MongoDB 9.6.1 HotFix 2 User Guide for PowerCenter.
PowerExchange for Salesforce Analytics
Effective in version 9.6.1 HotFix 2, you can use PowerExchange for Salesforce Analytics to write data to Salesforce Analytics. You can then run queries on the Salesforce Analytics database to analyze the data.
For more information, see the Informatica PowerExchange for Salesforce Analytics 9.6.1 HotFix 2 User Guide for PowerCenter.
PowerExchange for Vertica
Effective in version 9.6.1 HotFix 2, you can perform the following tasks with PowerExchange for Vertica:
- You can create Vertica targets in the Target Designer.
- You can use relational mode to read large volumes of data from a Vertica source. To read data in relational mode, you must create a Vertica relational connection and configure the session to use a relational reader.
- You can use relational mode to update or delete data in a Vertica target. To write data in relational mode, you must create a Vertica relational connection and configure the session to use a relational writer.
- When you use bulk mode to write large volumes of data to a Vertica target, you can configure the session to create a staging file. On UNIX operating systems, when you enable file staging, you can also compress the data in GZIP format. By compressing the data, you can reduce the size of data that is transferred over the network and improve session performance.
- You can run sessions on a grid to improve session performance.
- The PowerCenter Integration Service can push transformation logic to Vertica sources and targets that use native drivers. For more information, see the Informatica PowerCenter 9.6.1 HotFix 2 Advanced Workflow Guide.
For more information, see the Informatica PowerExchange for Vertica 9.6.1 HotFix 2 User Guide for PowerCenter.
Workflows
This section describes new workflow features in version 9.6.1 HotFix 2.
Pushdown Optimization for Amazon Redshift
Effective in version 9.6.1 HotFix 2, the PowerCenter Integration Service can push transformation logic to Amazon Redshift sources and targets when the connection type is ODBC.
For more information, see the Informatica PowerCenter 9.6.1 HotFix 2 Advanced Workflow Guide.
Support for Teradata Array Insert
Effective in version 9.6.1 HotFix 2, when you use an ODBC connection to connect to a Teradata target, you can insert arrays of data into the Teradata target instead of inserting data row by row. Inserting arrays of data results in higher session performance.
To insert arrays of data into a Teradata target by using an ODBC connection, configure the OptimizeTeradataWrite custom property at the session level or at the PowerCenter Integration Service level and set its value to 1.
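For example, to enable array inserts for a single session, the custom property might be added on the session Config Object tab, or at the service level through the PowerCenter Integration Service custom properties in the Administrator tool, using the standard name=value format:
OptimizeTeradataWrite=1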
For more information, see the Informatica PowerCenter 9.6.1 HotFix 2 Workflow Basics Guide.