When you configure the Amazon Athena catalog source, you define the settings for the metadata extraction capability and other optional capabilities.
The metadata extraction capability extracts source metadata from external source systems. You can also configure other capabilities that the catalog source includes.
You can save the catalog source configuration at any point after you enter the connection information. After you save the catalog source, you can choose to run the catalog source job. To run the job once, click Run. To run metadata extraction and other capabilities on a recurring schedule, configure schedules on the Schedule tab.
Configure metadata extraction
When you configure the Amazon Athena catalog source, you choose a runtime environment, define filters, and enter configuration parameters for metadata extraction.
1. In the Connection and Runtime area, choose a serverless runtime environment or the Secure Agent group where you want to run catalog source jobs.
Note: Serverless runtime environment options are available if the catalog source works with a serverless runtime environment.
2. Use the Metadata Change Option to choose whether to retain, delete, or deprecate catalog objects that are deleted from the source system.
- Retain. Keeps objects in the catalog after they are deleted from the source system. If you update or add a filter, the catalog retains objects extracted by the previous job and extracts additional objects that match the current filter. Enrichments added to deleted objects and relationships are retained.
- Delete. Deletes metadata from the catalog for objects that are deleted from the source system or excluded by changes you make to the filter. Enrichments added to deleted objects and relationships are permanently lost. Objects renamed in the source system are removed and re-created in the catalog.
- Deprecate. Moves the lifecycle of objects that are deleted from the source system or excluded by filter changes to Obsolete. Enrichments added to deprecated objects and relationships are not affected. Objects renamed in the source system are removed and re-created in the catalog. When you run the catalog source job again for other capabilities, such as data classification, relationship discovery, or glossary association, the job ignores obsolete objects. Obsolete objects remain in the catalog until you purge them with a Purge Obsolete Objects job on the Explore page.
Note: You can also change the configured metadata change option when you run a catalog source.
3. In the Filters area, define one or more filter conditions to apply for metadata extraction:
a. Select Yes to view filter options.
b. From the Include/Exclude list, choose whether to include or exclude metadata based on the filter parameters.
c. From the Object type list, select Views or External tables, depending on the object that you want to extract metadata from. Select All to extract metadata from all objects.
d. Enter a value to specify the object location.
Filters can contain the following wildcards:
▪ Question mark (?). Represents a single character.
▪ Asterisk (*). Represents multiple characters or empty text.
For object hierarchies, use a dot as a separator. If a filter value contains a space before or after the string, enclose the value in double quotes.
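As an illustration only (not the product's implementation), the wildcard semantics above behave like shell-style glob patterns, which you can experiment with using Python's fnmatch module:

```python
from fnmatch import fnmatchcase

# '?' matches exactly one character.
assert fnmatchcase("athena_table12", "athena_table??")
assert not fnmatchcase("athena_table1", "athena_table??")

# '*' matches any run of characters, including empty text.
assert fnmatchcase("athena_schema", "athena_schema*")
assert fnmatchcase("athena_schema_v2", "athena_schema*")

# A dot separates levels in an object hierarchy,
# e.g. a hypothetical schema.view path.
assert fnmatchcase("athena_schema1.athena_view1", "athena_schema*.athena_view?")
```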
The following image shows the filter condition options:
4. To define an additional filter with an OR condition, click the Add icon.
The following image shows a filter that includes metadata from all views in schemas with names that start with 'athena_schema' and excludes metadata from all tables with names that start with 'athena_table' followed by two additional characters in the 'athena_schema' schema.
5. Optional. In the Configuration Parameters area, enter additional settings.
The following property is available under additional settings:
Note: The Additional Settings section appears when you click Show Advanced.
Expert Parameters
Enter additional configuration options to be passed at runtime. Required if you need to troubleshoot the catalog source job.
Caution: Use expert parameters only when recommended by Informatica Global Customer Support.
6. Configure additional capabilities for the catalog source by clicking the corresponding tabs.
Configure lineage discovery
Enable the lineage discovery capability and use CLAIRE to build complete lineage by recommending endpoint catalog source objects to assign to reference catalog source connections.
1. Click the Lineage Discovery tab.
2. Select Enable Lineage Discovery.
3. In the Filters area, define one or more filter conditions to apply for lineage discovery.
To define filters, you can select catalog source types or asset groups, or you can enter a catalog source name or search from a list of catalog sources.
a. Select Yes to view filter options.
b. From the Include/Exclude list, choose whether to include or exclude catalog sources for lineage discovery based on the filter parameters.
c. From the filter type list, select catalog source type, catalog source name, or asset group.
d. In the filter value field, select the required catalog source types, or click the Search button and select catalog sources or asset groups.
Filters can contain the asterisk wildcard to represent multiple characters or empty text.
Examples:
▪ To include or exclude all Oracle catalog sources, select Catalog Source Type as the filter type and select Oracle in the filter value field.
▪ To include or exclude the 'Oracle_Retail' catalog source, select Catalog Source Name as the filter type and search for the catalog source or enter Oracle_Retail in the filter value field.
▪ To include or exclude all catalog sources with names that start with 'Oracle', select Catalog Source Name as the filter type and search for the catalog source or enter Oracle* in the filter value field.
▪ To include or exclude all catalog sources with names that end with 'Retail', select Catalog Source Name as the filter type and search for the catalog source or enter *Retail in the filter value field.
▪ To include or exclude all catalog sources with names that contain 'Ret', select Catalog Source Name as the filter type and search for the catalog source or enter *Ret* in the filter value field.
▪ To include or exclude all catalog sources that are part of the 'Financial Group' asset group, select Asset Group as the filter type and search for Financial Group in the filter value field.
Note: You can't add more than one include or exclude filter for the same filter type.
e. Optionally, to define an additional filter with an AND condition, click the Add icon.
For more information about lineage discovery, see Lineage discovery.
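For intuition, include/exclude name filters with the asterisk wildcard can be sketched as follows; the function and its evaluation order are an assumption for illustration, not the product's documented algorithm:

```python
from fnmatch import fnmatchcase

def passes_filters(name, includes=None, excludes=None):
    """Illustrative include/exclude evaluation for catalog source names.
    A name passes if it matches at least one include pattern (when any
    are defined) and matches no exclude pattern. '*' matches any run of
    characters, including empty text."""
    if includes and not any(fnmatchcase(name, p) for p in includes):
        return False
    return not any(fnmatchcase(name, p) for p in (excludes or []))

# Mirrors the examples above: include names that start with 'Oracle',
# but exclude names that end with 'Retail'.
assert passes_filters("Oracle_Finance", includes=["Oracle*"], excludes=["*Retail"])
assert not passes_filters("Oracle_Retail", includes=["Oracle*"], excludes=["*Retail"])
assert not passes_filters("Snowflake_Sales", includes=["Oracle*"])
```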
Configure data profiling and quality
Enable the data profiling capability to evaluate the quality of metadata extracted from the Amazon Athena source system.
1. Click the Data Profiling and Quality tab.
2. Expand Data Profiling and select Enable Data Profiling.
Note: Ensure that you have permissions on all the staging connections that you use in your data profiling configuration. You can't run the job without permissions on these connections. Select connections that you can access, or ask the administrator to grant the necessary permissions.
3. In the Connection and Runtime area, choose the Secure Agent group where you want to run catalog source jobs.
4. Optionally, specify data profiling filters to run the profile on a subset of the extracted metadata.
a. Select Yes to view filter options.
b. From the Include/Exclude list, choose whether to include or exclude metadata based on the filter parameters.
c. From the Object type list, select All, Views, or External tables, depending on the object that you want to profile.
d. Enter a value to specify the object location.
You can use an asterisk as a wildcard to represent multiple characters.
Examples:
▪ You extracted all objects from the 'athena_schema1' schema and now you want to run a profile on the 'athena_view1' view in the schema. Select Include Metadata from the Include/Exclude list, select Views from the Object type list, and then enter athena_view1 in the input field.
▪ You extracted all objects from the 'athena_schema1' schema and now you want to run a profile on the 'obj1' object. Select Include Metadata from the Include/Exclude list, select All from the Object type list, and then enter obj1 in the input field.
To include or exclude multiple objects, click the Add icon to add filters with the OR condition.
5. In the Parameters area, configure the parameters.
The following list describes the parameters that you can enter:
Modes of Run
Determines the type of data that you want the data profiling task to collect. Choose one of the following options:
- Keep Signatures Only. Collects only aggregate information such as data types, averages, standard deviations, and patterns.
- Keep Signatures and Values. Collects both signatures and data values.
Profiling Scope
Determines whether to run data profiling only on the changes made to the source system or on the entire source system. Choose one of the following options:
- Incremental. Includes only source metadata that has changed since the last profile run.
- Full. Includes all the metadata that is extracted based on the filters applied for extraction.
Sampling Type
Determines the sample rows on which you want to run the data profiling task. Choose one of the following options:
- All Rows. Runs data profiling on all rows in the metadata.
- Limit N Rows. Runs data profiling on a limited number of rows.
- Custom Query. Uses an SQL clause that you provide to select sample rows for the data profiling task.
No of rows to limit
Required if you select Limit N Rows as the Sampling Type. Specify the number of rows on which you want to run data profiling.
Sampling Query
Required if you select Custom Query as the Sampling Type. Specify an SQL clause to select sample rows for the data profiling task.
Maximum Precision of String Fields
The maximum precision value for profiles on the string data type. You can set a maximum precision of up to 255 characters. Default is 50.
Text Qualifier
The character that defines string boundaries. If you select a quote character, profiling ignores delimiters within the quotes. Select a qualifier from the list. Default is Double Quote.
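To make 'Keep Signatures Only' and 'Limit N Rows' concrete, here is a minimal Python sketch that computes signature-style aggregates over a limited row sample. The function name and output fields are invented for illustration; the actual profiling engine is not exposed this way:

```python
from statistics import mean, pstdev

def profile_column(values, limit_n=None):
    """Compute signature-style aggregates (as in 'Keep Signatures Only')
    over an optionally limited sample of rows ('Limit N Rows')."""
    sample = values[:limit_n] if limit_n else values
    return {
        "count": len(sample),
        "average": mean(sample),
        "std_dev": pstdev(sample),
        "min": min(sample),
        "max": max(sample),
    }

# Profile only the first 3 of 5 rows: count is 3, average is 20.
sig = profile_column([10, 20, 30, 40, 50], limit_n=3)
assert sig["count"] == 3
assert sig["average"] == 20
```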
6. Expand Data Quality and select Enable Data Quality.
Note: You can click Use Data Profiling Parameters to use the same parameter values as in the Data Profiling section.
Note: Ensure that you have permissions on all the staging and flat file connections that you use in your data quality configuration. You can't run the job without permissions on these connections. Select connections that you can access, or ask the administrator to grant the necessary permissions.
7. In the Connection and Runtime area, choose the Secure Agent group where you want to run catalog source jobs.
8. In the Parameters area, configure the parameters.
The following list describes the parameters that you can enter:
Data Quality Rule Automation
Enable this option to automatically create or update rule occurrences for data elements in the catalog source. Choose one of the following options:
- Apply on Data Elements linked with Business Dataset. Creates rule occurrences for all data elements that are linked with business data sets in the catalog source.
- Apply on all Data Elements. Creates rule occurrences for all data elements in the catalog source.
Data Quality Remediation
Enable this option to specify a flat file connection that stores the list of failed rows so that users can remediate poor data quality scores. Choose one of the following options:
- No. Doesn't enable data quality remediation.
- Yes. Shows a list of flat file connections so that you can write failed rows to customer-managed locations.
Data Quality Failure Ticket
Specify whether to create data quality failure tickets for poor data quality scores based on the threshold defined for the rule occurrence in Data Governance and Catalog. Choose one of the following options:
- No. Doesn't automatically create data quality failure tickets when data quality scores are poor.
- Yes. Automatically creates data quality failure tickets based on the data quality threshold values that you define in Data Governance and Catalog, and notifies you when a data quality score falls below the threshold.
Note: You must configure a workflow event for the data quality failure and enable the event in Metadata Command Center.
Cache Result
Enable this option to generate a cache file in the runtime environment and preview the cached results faster in subsequent data preview runs. Choose one of the following options:
- Agent Cache. Stores the results in the runtime environment cache, by default for seven days after the first run.
- No Cache. Doesn't store the preview results in the cache. You can view only live results.
Run Rule Occurrence Frequency
Specify whether to run data quality rules based on the frequency defined for the rule occurrence in Data Governance and Catalog.
Sampling Type
Determines the sample rows on which you want to run the data quality task. Choose one of the following options:
- All Rows. Runs data quality on all rows in the metadata.
- Limit N Rows. Runs data quality on a limited number of rows.
- Custom Query. Uses an SQL clause that you provide to select sample rows for the data quality task.
No of rows to limit
Required if you select Limit N Rows as the Sampling Type. Specify the number of rows on which you want to run data quality.
Sampling Query
Required if you select Custom Query as the Sampling Type. Specify an SQL clause to select sample rows for the data quality task.
Maximum Precision of String Fields
The maximum precision value for the string data type. You can set a maximum precision of up to 255 characters. Default is 50.
Text Qualifier
The character that defines string boundaries. If you select a quote character, data quality ignores delimiters within the quotes. Select a qualifier from the list. Default is Double Quote.
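The failure-ticket option amounts to a threshold check on each rule occurrence's score. A minimal sketch, with invented names, assuming scores and thresholds are percentages:

```python
def should_create_failure_ticket(score, threshold, tickets_enabled=True):
    """Illustrative check: with Data Quality Failure Ticket set to Yes,
    a ticket is created when a rule occurrence's data quality score
    falls below the threshold defined in Data Governance and Catalog."""
    return tickets_enabled and score < threshold

assert should_create_failure_ticket(score=72.5, threshold=80.0)
assert not should_create_failure_ticket(score=91.0, threshold=80.0)
# With the option set to No, no ticket is created regardless of the score.
assert not should_create_failure_ticket(score=72.5, threshold=80.0, tickets_enabled=False)
```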
Configure data classification
Enable the data classification capability to identify and organize data into relevant categories based on the functional meaning of the data.
1. Click the Data Classification tab.
2. Select Enable Data Classification.
3. Choose one or both of the following options:
- Generated Data Classifications. CLAIRE automatically generates data classifications for the data elements.
- Data Classification Rules. Choose from predefined or custom data classifications:
a. Click Add Data Classification. The Select Data Classifications dialog box appears.
b. Select the data classifications that you want to use.
c. Click OK.
Configure glossary associations
Enable the glossary association capability to associate glossary terms with technical assets, or to get recommendations for glossary terms that you can manually associate with technical assets in Data Governance and Catalog.
Metadata Command Center considers all published business terms in the glossary while making recommendations to associate your technical assets.
1. Click the Glossary Association tab.
2. Select Enable Glossary Association.
3. Select Enable auto-acceptance to automatically accept glossary association recommendations.
4. Specify the Confidence Score Threshold for Auto-Acceptance to set the threshold at which the glossary association capability automatically accepts recommended glossary terms.
Note: Specify a percentage from 80 to 100. If the confidence score is higher than the specified threshold, the glossary association capability automatically assigns the matching glossary term to the data element.
5. If you enable auto-acceptance, you can select Enable Below-threshold Recommendations to receive glossary association recommendations that fall below the auto-acceptance threshold.
6. Specify the Confidence Score Threshold for Recommendations to set the threshold at which the glossary association capability makes recommendations.
If you enable auto-acceptance, specify a percentage from 80 up to the selected auto-acceptance threshold. In Data Governance and Catalog, you can accept or reject the recommended glossary terms that fall within this range.
If you disable auto-acceptance, specify a percentage from 80 to 100, inclusive.
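Taken together, the two confidence thresholds split scores into three outcomes. A sketch under the description above (the earlier note says auto-acceptance applies when the score is higher than the threshold, so the boundary uses a strict comparison; the function name and default values are illustrative):

```python
def glossary_outcome(score, auto_threshold=90, rec_threshold=80):
    """Illustrative outcome for one glossary term recommendation:
    - above the auto-acceptance threshold: term assigned automatically
    - between the recommendation threshold and the auto threshold:
      surfaced as a recommendation to accept or reject
    - below the recommendation threshold: nothing is recommended"""
    if score > auto_threshold:
        return "auto-accepted"
    if score >= rec_threshold:
        return "recommended"
    return "none"

assert glossary_outcome(95) == "auto-accepted"
assert glossary_outcome(85) == "recommended"
assert glossary_outcome(70) == "none"
```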
7. Choose whether to automatically assign business names and descriptions to technical assets. You can retain existing assignments and assign business names and descriptions only to assets that don't have them, or allow existing assignments to be overwritten.
By default, existing assignments are retained.
8. Optional. Choose to ignore specific parts of data elements when making recommendations. Select Yes and enter prefix and suffix keyword values as needed.
Click Select to enter a keyword. You can enter multiple unique prefix and suffix keywords. Keyword values are case-insensitive.
9. Optional. Choose specific top-level business glossary assets to associate with technical assets. Select Top-level Glossary Assets and specify the assets on the Select Assets page. Selecting a top-level asset also selects its child assets.
10. Optional. Choose to use abbreviation and synonym definitions from lookup tables for more accurate glossary association. Select Yes to enable the option, and then click Select to upload a lookup table.
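As a rough illustration of why lookup tables help, the sketch below expands abbreviations in a technical column name before comparing it with glossary term names. The lookup entries and function are invented examples, not the product's matching logic:

```python
# Hypothetical lookup table mapping abbreviations to full words.
ABBREVIATIONS = {"cust": "customer", "acct": "account", "no": "number"}

def expand_name(technical_name):
    """Expand known abbreviations in an underscore-separated column name
    so it can be compared against glossary term names."""
    parts = technical_name.lower().split("_")
    return " ".join(ABBREVIATIONS.get(p, p) for p in parts)

# 'CUST_ACCT_NO' now lines up with a glossary term like 'Customer Account Number'.
assert expand_name("CUST_ACCT_NO") == "customer account number"
```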