Microsoft Azure Data Lake Storage
Microsoft Azure Data Lake is a scalable data storage and analytics service hosted on Azure.
When you create an Azure Data Lake Storage resource, you can access the files and folders in the following Azure storage products:
- Azure Data Lake Store or Data Lake Storage Gen1
  To access this repository, Enterprise Data Catalog uses service-to-service or OAuth 2.0 authentication. To use OAuth 2.0 authentication, you must create an Azure Active Directory (AD) application and use the client ID and client key from the application for authentication. Enterprise Data Catalog uses an SDK to access the repository contents.
- Azure Data Lake Storage Gen2
  Azure Blob storage supports Azure Data Lake Storage Gen2, a hierarchical file system. When you create an Azure Data Lake Store resource and choose the Azure Data Lake Storage Gen2 option, enter the user account ID and one of the keys provided in the Access keys section. In the Azure portal, you can view the two keys generated for each Azure Data Lake Storage Gen2 storage account under Settings > Access keys. To access the files and folders in this hierarchical file system, Enterprise Data Catalog uses REST APIs, and Azure uses Shared Key authorization to authenticate the requests. In Enterprise Data Catalog, access and runtime are two times faster for Azure Data Lake Storage Gen2 than for Data Lake Storage Gen1.
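The Shared Key authorization mentioned above signs each REST request with an HMAC-SHA256 digest of a canonical string, keyed with the base64-decoded storage account key. The following sketch is a minimal illustration of that signing scheme; the account name, key, and string-to-sign are hypothetical, and a real request canonicalizes the HTTP verb, headers, and resource path according to the Azure Storage specification.

```python
import base64
import hashlib
import hmac

def shared_key_header(account: str, account_key_b64: str, string_to_sign: str) -> str:
    # Azure Shared Key: HMAC-SHA256 of the canonical string-to-sign,
    # keyed with the base64-decoded storage account key.
    key = base64.b64decode(account_key_b64)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), hashlib.sha256).digest()
    signature = base64.b64encode(digest).decode("utf-8")
    return f"SharedKey {account}:{signature}"

# Hypothetical account and key for illustration only.
demo_key = base64.b64encode(b"not-a-real-key").decode("utf-8")
# A real string-to-sign canonicalizes the verb, headers, and resource path;
# this one is deliberately simplified.
header = shared_key_header("myaccount", demo_key, "GET\n/myaccount/mydir")
print(header)
```

The resulting value is sent in the `Authorization` header of the REST request.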
Objects Extracted
Permissions to Configure the Resource
If you create a new user, ensure that you configure read permission on the data source for the new user account.
Supported File Types
The Microsoft Azure Data Lake Storage resource enables you to extract metadata from structured, unstructured, and extended unstructured files.
The structured files supported are:
- Avro files
- Delimited files
- Text files
- JSON files
- Parquet files
- XML files
The unstructured files supported are:
- Apple files
- Compressed files
- Email
The extended unstructured files are:
- VB files
- ASP files
- TIF files
- LOG files
- CSS files
- ASPX files
- DLL files
- GIF files
- SQL files
Assign read and write permissions to the files to extract metadata.
Prerequisites
Before you create the resource, ensure that you have met the following prerequisites:
- 1. Merge the certificates in <INFA_HOME>/java/jre/lib/security/cacerts into the <INFA_HOME>/services/shared/security/infa_truststore.jks file.
- 2. Move the infa_truststore.jks file to a common location accessible to all the nodes in the cluster.
- 3. In the HDFS configuration properties of the Ambari interface, update the infa_truststore.jks file path in the ssl.client.truststore.location property and update the infa_truststore.jks password in the ssl.client.truststore.password property.
- 4. Restart the Informatica Cluster Service.
Note: Ensure that you configure the required permissions for the ADLS storage in Azure Active Directory.
Note: If the proxy server used to connect to the data source is SSL enabled, you must download the proxy server certificates on the Informatica domain machine.
Basic Information
The General tab includes the following basic information about the resource:
Information | Description |
---|---|
Name | The name of the resource. |
Description | The description of the resource. |
Resource type | The type of the resource. |
Execute On | You can choose to execute on the default catalog server or offline. |
Resource Connection Properties
The General tab includes the following properties:
Property | Description |
---|---|
Account Name | Enter the storage account name that you created in the Azure portal. |
ADLS Source Type | Choose the Data Lake Store Gen 1 or the Data Lake Store Gen 2 option. |
Client Id | Enter the client ID to connect to the Microsoft Azure Data Lake Store. Use the value listed for the application ID in the Azure portal. This option appears when you choose the Data Lake Store Gen 1 option as the ADLS Source Type. |
Client Key | Enter the client key to connect to the Microsoft Azure Data Lake Store. Use the Azure Active Directory application key value in the Azure portal as the client key. This option appears when you choose the Data Lake Store Gen 1 option as the ADLS Source Type. |
Directory Name | Directory name of the Azure Data Lake Store. |
Auth EndPoint URL | The OAuth 2.0 token endpoint URL in the Azure portal. This option appears when you choose the Data Lake Store Gen 1 option as the ADLS Source Type. |
Storage Account Key | Enter key1 or key2 as the storage account key. Navigate to the Settings > Access keys section in Azure portal to view the storage account keys. This option appears when you choose the Data Lake Store Gen 2 option as the ADLS Source Type. |
Connect through a proxy server | Proxy server to connect to the data source. Default is Disabled. This option appears when you choose the Data Lake Store Gen 2 option as the ADLS Source Type. |
Proxy Host | Host name or IP address of the proxy server. This option appears when you choose the Data Lake Store Gen 2 option as the ADLS Source Type. |
Proxy Port | Port number of the proxy server. This option appears when you choose the Data Lake Store Gen 2 option as the ADLS Source Type. |
Proxy User Name | Required for authenticated proxy. Authenticated user name to connect to the proxy server. This option appears when you choose the Data Lake Store Gen 2 option as the ADLS Source Type. |
Proxy Password | Required for authenticated proxy. Password for the authenticated user name. This option appears when you choose the Data Lake Store Gen 2 option as the ADLS Source Type. |
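The Client Id, Client Key, and Auth EndPoint URL properties for Gen 1 map onto a standard OAuth 2.0 client-credentials token request, which Enterprise Data Catalog performs internally. A minimal sketch of the request body sent to the token endpoint, using hypothetical placeholder values; the Data Lake resource identifier shown is an assumption for illustration:

```python
from urllib.parse import urlencode

# Placeholder values for illustration; use the application (client) ID and
# client key from your Azure Active Directory application.
client_id = "00000000-0000-0000-0000-000000000000"
client_key = "example-client-key"

# Body of the POST sent to the Auth EndPoint URL to obtain an access token.
body = urlencode({
    "grant_type": "client_credentials",
    "client_id": client_id,
    "client_secret": client_key,
    "resource": "https://datalake.azure.net/",
})
print(body)
```

The token returned by the endpoint is then presented on subsequent Data Lake Store requests.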
The Metadata Load Settings tab includes the following properties:
Property | Description |
---|---|
Enable Source Metadata | Extracts metadata from the data source. |
File Types | Select the file types from which you want to extract metadata:
- All. Extract metadata from all file types.
- Select. Extract metadata from specific file types. Perform the following steps to specify the file types:
  1. Click Select. The Select Specific File Types dialog box appears.
  2. Select the required file types from the following options:
     - Extended unstructured formats. Extracts metadata from file types such as audio files, video files, image files, and ebooks.
     - Structured file types. Extracts metadata from file types such as Avro, Parquet, JSON, XML, text, and delimited files.
     - Unstructured file types. Extracts metadata from file types such as Microsoft Excel, Microsoft PowerPoint, Microsoft Word, web pages, compressed files, emails, and PDF.
  3. Click Select.
Note: You can select the Specific File Types option in the dialog box to select files under all the categories. |
Enable Exclusion Filter | Filter to exclude folders from the data source during the metadata extraction phase. This option appears when you choose Azure Data Lake Storage Gen2 V2 as the resource type. |
Filter Condition | Filter condition to exclude folders from the data source. Select the filter condition from the following list:
- Starting With. Excludes all folders that start with the keyword.
- Ending With. Excludes all folders that end with the keyword.
- Contains. Excludes all folders that contain the keyword.
- Named. Excludes all folders whose names match the keyword exactly.
This option appears when you choose Azure Data Lake Storage Gen2 V2 as the resource type. |
Filter Value | Filter value or pattern for the filter condition. Specify the value or pattern within double quotes. Use a comma to separate multiple values. This option appears when you choose Azure Data Lake Storage Gen2 V2 as the resource type. |
Is Filter Case Sensitive | Specify if the filter value is case sensitive. Default is True. This option appears when you choose Azure Data Lake Storage Gen2 V2 as the resource type. |
Other File Types | Extract basic file metadata, such as file size, path, and time stamp, from file types not listed in the File Types property. |
Treat Files Without Extension As | Select one of the following options to identify files without an extension: |
Enter File Delimiter | Specify the file delimiter if the file from which you extract metadata uses a delimiter other than the following delimiters:
- Comma (,)
- Horizontal tab (\t)
- Semicolon (;)
- Colon (:)
- Pipe symbol (|)
Enclose each delimiter in single quotes. For example, '$'. Use a comma to separate multiple delimiters. For example, '$','%','&'. |
First Level Directory | Specify a directory or a list of directories under the source directory. If you leave this option blank, Enterprise Data Catalog imports all the files from the specified source directory. To specify a directory or a list of directories, perform the following steps:
1. Click Select.... The Select First Level Directory dialog box appears.
2. Use one of the following options to select the required directories:
   - Select from list. Select the required directories from a list of directories.
   - Select using regex. Provide an SQL regular expression to select directories that match the expression.
Note: If you want to select multiple directories, you must separate the directories with a semicolon (;). |
Recursive Scan | Recursively scans the subdirectories under the selected first-level directories. Recursive scan is required for partitioned file discovery. |
Enable Partitioned File Discovery | Identifies and publishes horizontally partitioned files under the same directory and files organized in hierarchical Hive-style directory structures as a single partitioned file. |
Non Strict Mode | Detects partitions in parquet files when compatible schemas are identified in the files. |
Case Sensitive | Specifies whether the resource is configured for case sensitivity. Select one of the following values:
- True. Select this check box to specify that the resource is configured as case sensitive.
- False. Clear this check box to specify that the resource is configured as case insensitive.
The default value is True. |
Memory | The memory required to run the scanner job. Select one of the following values based on the data set size imported: Note: For more information about the memory values, see the Tuning Enterprise Data Catalog Performance article. |
Custom Options | JVM parameters that you can set to configure the scanner container. Use the following arguments to configure the parameters:
- -Dscannerloglevel=<DEBUG/INFO/ERROR>. Changes the scanner log level to DEBUG, ERROR, or INFO. Default is INFO.
- -Dscanner.container.core=<number of cores>. Increases the number of cores for the scanner container. The value must be a number.
- -Dscanner.yarn.app.environment=<key=value>. Key-value pairs to set in the YARN environment. Use a comma to separate multiple key-value pairs.
- -Dscanner.pmem.enabled.container.memory.jvm.memory.ratio=<1.0/2.0>. Increases the scanner container memory when pmem is enabled. Default is 1.
- -DmaxPartFilesToValidatePerTable=<number>. Validates the specified number of part files in the partitioned table. Default is 10.
- -DmaxPartFilesToValidatePerPartition=<number>. Validates the specified number of part files for each partition in the partitioned table. Default is 5.
- -DexcludePatterns=<comma-separated regex patterns>. Excludes files while parsing partition tables based on the regex patterns. By default, file names that start with a period or an underscore are excluded.
|
Track Data Source Changes | View metadata source change notifications in Enterprise Data Catalog. |
Custom Partition Configuration File | Detects custom partitions in the data source. Select the configuration file in JSON format. This option appears when you choose Azure Data Lake Storage Gen2 V2 as the resource type. |
Pruned Partition Configuration File | Specify the configuration file in JSON format for partition pruning. This option appears when you choose Azure Data Lake Storage Gen2 V2 as the resource type. |
Disable Partition Pruning | Option to disable partition pruning. This option appears when you choose Azure Data Lake Storage Gen2 V2 as the resource type. |
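The four exclusion filter conditions described above (Starting With, Ending With, Contains, Named) behave like simple string predicates applied to folder names. The following is a hedged sketch of that matching logic, not the scanner's actual implementation; the function and parameter names are illustrative:

```python
def exclude_folder(name: str, condition: str, values: list[str],
                   case_sensitive: bool = True) -> bool:
    """Illustrative re-implementation of the exclusion filter conditions;
    the real scanner logic is internal to Enterprise Data Catalog."""
    subject = name if case_sensitive else name.lower()
    for value in values:
        v = value if case_sensitive else value.lower()
        if condition == "Starting With" and subject.startswith(v):
            return True
        if condition == "Ending With" and subject.endswith(v):
            return True
        if condition == "Contains" and v in subject:
            return True
        if condition == "Named" and subject == v:
            return True
    return False

print(exclude_folder("tmp_staging", "Starting With", ["tmp"]))              # True
print(exclude_folder("archive", "Named", ["Archive"], case_sensitive=False))  # True
```

This also shows why the Is Filter Case Sensitive setting matters: with the default of True, the second call above would not match.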
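Partitioned file discovery treats Hive-style key=value directory components as partition keys, and the default -DexcludePatterns behavior skips file names that begin with a period or an underscore. The following is a simplified Python sketch of both ideas; the real scanner logic is internal to Enterprise Data Catalog:

```python
import re

# Default exclude behavior described for -DexcludePatterns: skip file
# names that start with a period or an underscore.
DEFAULT_EXCLUDES = [re.compile(r"^\..*"), re.compile(r"^_.*")]

def partition_keys(path: str) -> dict:
    """Parse Hive-style key=value directory components from a file path."""
    keys = {}
    for part in path.split("/")[:-1]:  # skip the file name itself
        if "=" in part:
            k, v = part.split("=", 1)
            keys[k] = v
    return keys

def is_excluded(filename: str) -> bool:
    return any(p.match(filename) for p in DEFAULT_EXCLUDES)

path = "sales/year=2023/month=04/part-0001.parquet"
print(partition_keys(path))      # {'year': '2023', 'month': '04'}
print(is_excluded("_SUCCESS"))   # True
print(is_excluded(".hidden"))    # True
```

Files sharing the same partition keys under one directory tree are published as a single partitioned file in the catalog.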
You can enable data discovery for an Azure Data Lake Store. For more information, see the Enable Data Discovery topic.
You can enable composite data domain discovery for an Azure Data Lake Store. For more information, see the Composite Data Domain Discovery topic.
Profile Avro files
You can extract metadata, discover Avro partitions, and run profiles on Avro files with multiple-level hierarchy using an Azure Data Lake Storage Gen2 resource on the Spark engine. When you run profiles on Avro files, the data types of assets appear in the profiling results of the Enterprise Data Catalog tool.
The following asset data types appear in the profiling results:
- Arrays with primitive data types. You can view the primitive data type of an array in the System Attributes section of the Overview tab of the asset.
- Arrays with complex data types. You can expand the list to view the data types of arrays with complex data types in the Fields tab of the asset.
- Unions with multiple primitive data types. You can expand the list to view the data types of unions with multiple primitive data types that are not null in the Fields tab of the asset. All the data types in the union appear in the list.
- Unions with null and a primitive or complex data type appear as the primitive or complex data type, respectively, in the catalog.
- Maps. You can expand the list to view the data types of maps with keys and values in the Fields tab of the asset.
- Only primitive data types appear in the catalog. Logical data types do not appear in the catalog.
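The union-with-null behavior described above can be illustrated with a small helper: in an Avro schema, a union is a JSON array, and a union of null with a single other type resolves to that type. This is a simplified sketch, not the catalog's internal resolution logic:

```python
def resolved_type(avro_type):
    """Resolve a union with null to its non-null branch, as described above.
    Simplified illustration only."""
    if isinstance(avro_type, list):  # Avro unions are JSON arrays
        non_null = [t for t in avro_type if t != "null"]
        if len(non_null) == 1:
            return non_null[0]
        return non_null  # union of multiple non-null types
    return avro_type

print(resolved_type(["null", "string"]))         # 'string'
print(resolved_type(["null", "int", "string"]))  # ['int', 'string']
```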
When you select Non Strict Mode on the Metadata Load Settings tab of the resource to detect partitions in Avro files, partition discovery still happens in strict mode.
If a partition folder contains more than 10 subfolders, or if subfolders contain more than 10 files, some folders are not detected as potential partitions. To avoid this issue, use the -DmaxChildPathsToValidate JVM option to override the default value and increase the number of folders that are validated.
You cannot profile Avro files that contain any of the following data types:
- Union of multiple primitive data types
- Enum
- Map with complex values
Note: An Avro file that includes any of these data types fails during profiling.