The July 2025 release of Data Governance and Catalog includes the following new features and enhancements.
Introducing the AI System asset
The AI System asset represents a machine learning-based system that uses multiple AI models to perform a range of tasks, such as generating predictions, content, recommendations, decisions, and actions. The asset records information about the technologies, AI models, and data used to perform these tasks.
You can create an AI System asset from the Data Governance and Catalog interface or use the bulk import template to create multiple AI System assets in a single operation. You can also use the Metadata Command Center workflow capabilities when you create or edit AI system assets.
When you create a Data Set asset, you can now choose to specify an AI System asset as the parent of the data set instead of a System asset. You can also create several direct relationships between AI systems and other assets.
Enhancements to AI Model
The following enhancements apply to the AI Model asset:
Note: If your administrator has configured a custom layout for AI Model assets, you can't view these changes. To view the changes to the AI Model assets, your administrator must update the layouts that they created in Metadata Command Center.
Model lineage
This release introduces a new lineage type that is unique to AI Model assets, Model Lineage. On the Model Lineage tab of an AI Model asset, you can view a visual representation of the data sets and other AI models that are associated with the AI model.
New relationships between AI Model assets and other asset types
You can establish the following types of direct relationships between AI Model assets in addition to the types of relationship currently available:
Source Asset Type | Target Asset Type | Relationship Type
AI Model          | AI Model          | is the Base for
AI Model          | AI Model          | is Quantized into
AI Model          | AI Model          | is Derived from
AI Model          | AI Model          | is a Quantization of
You can no longer establish the following types of direct relationship between an AI Model asset and other assets:
Source Asset Type | Target Asset Type | Relationship Type
AI Model          | AI Model          | Contains
AI Model          | AI Model          | is Used in
AI Model          | Data Element      | is Generating (Target)
AI Model          | Data Element      | is Using (Source)
AI Model          | Data Set          | is Generating (Target)
AI Model          | Data Set          | is Using (Source)
Note: AI Model assets that have relationships to other assets before the July 2025 upgrade are unaffected.
Updated Data tab experience
The Data tab of an AI Model asset now displays the data sets that are used to train and validate the AI Model.
Previously, the Data tab displayed the data elements that are used by the AI model.
Additional metrics for AI model assets
In Metadata Command Center, you can now define Evaluation Metrics for AI Model assets that enable you to record additional metrics that pertain to the model. You can create a maximum of 20 evaluation metrics. Furthermore, in Metadata Command Center you can customize the Bias Score and Drift Score metrics for the AI Model asset type.
Note: If you want to preserve the Bias Score and Drift Score values for the AI Model assets currently in Data Governance and Catalog, export the AI Model assets to your local folder before you upgrade to the July 2025 release.
Create AI models from technical assets
You can now create an AI Model business asset from AI Model technical assets or AI Model Version technical assets that are extracted from the Databricks source system.
Enhancements to the Tasks Inbox
Tasks Inbox
The Workflow Inbox page has been renamed to Tasks Inbox.
Simplified inbox experience
On the Tasks Inbox page, the My Tasks and Unassigned Tasks tabs have been replaced with a single Tasks tab. The Tasks tab displays all the tasks that you have claimed and the tasks that are available for you to claim.
Add comments to tasks
You can now add comments to the tasks created for workflows that are defined in Metadata Command Center.
On the Tasks Inbox page, Data Governance and Catalog now displays the Comments tab for a task that you select in the Tasks grid. The Comments tab displays the comments that you added to the task.
Unclaim tasks
You can now unclaim a task that you previously claimed. Unclaim a task to release yourself from its responsibilities if you can't complete it by the due date.
Note: Before you upgrade to the July 2025 release, complete all tasks pertaining to workflows created through Application Integration. After the upgrade, any open tasks associated with the Application Integration workflows will expire, and the associated tickets will be cancelled.
New public REST APIs
You can use the following public REST APIs to interact with the assets in Data Governance and Catalog:
•Create Assets API. Use the API to create business assets such as business terms, domains, policies, systems, and data sets.
•Update Assets API. Use the API to update business and technical assets. You can update business assets like business terms, domains, policies, systems, and data sets. You can update technical assets like catalog sources, data sources, data sets, data elements, data processes, reports, and classifications.
•Delete Assets API. Use the API to delete business assets like business terms, domains, policies, systems, and data sets.
•Relationships API. Use the API to create, update, and delete relationships.
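The request shape for the new Create Assets API can be sketched in Python. Note that the endpoint path, header names, and payload fields below are illustrative assumptions, not the documented contract; consult the Data Governance and Catalog API reference for the actual specification.

```python
import json

def build_create_asset_request(base_url, session_id, asset_type, name, description):
    """Assemble the URL, headers, and JSON body for a hypothetical
    Create Assets call. No request is sent here."""
    url = f"{base_url}/data360/v1/assets"       # assumed path, for illustration only
    headers = {
        "INFA-SESSION-ID": session_id,          # assumed auth header
        "Content-Type": "application/json",
    }
    body = {
        "type": asset_type,                     # e.g. a business term
        "name": name,
        "description": description,
    }
    return url, headers, body

# Example: prepare a request that would create a business term.
url, headers, body = build_create_asset_request(
    "https://example.informaticacloud.com", "my-session-id",
    "BusinessTerm", "Customer Lifetime Value",
    "Projected net revenue attributed to a customer relationship.")
print(json.dumps(body, indent=2))
```

You would then send the prepared request with any HTTP client and your organization's authentication details.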
Data Access Management enhancements
The July release includes the following enhancements to Data Access Management functionality:
•You can now use the Data Governance and Catalog workflow capabilities to design single- or multi-step approval workflows in Metadata Command Center for data access assets. The workflows start when you create or edit data access assets on the Data Access Management page in Data Governance and Catalog.
With the introduction of workflows, a data access asset is published immediately when you create it unless a workflow is enabled. As a result, the Publish REST API endpoint is removed. If a workflow is enabled, after you create or edit a data access asset, you must add stakeholders and submit the asset for approval through the user interface.
For more information about designing workflows, see Workflows in the Metadata Command Center help.
•You can push down data filter policies into Snowflake's row access policies.
•You can push down data access control policies into the Amazon Redshift cloud data platform.
Data quality enhancements
This release includes the following enhancements to data quality:
•You can associate a workflow with a Data Quality Failure ticket by selecting Data Quality as the event category and Data Quality Failure as the event type in Metadata Command Center.
•You can create a Data Quality Failure ticket from the Rule Occurrence tab of an asset.
•In a Data Quality widget, if you select a saved search or enter new search criteria, you can view the data quality results using the bar chart or score chart widget types in addition to the donut chart widget type. You can export data quality charts in the .png format or export assets in the .csv or .xls format.
New metrics for data observability
You can monitor the most recently updated and refreshed data, along with volume metrics, for your data sets in the new Freshness and Volume category on the Data Observability tab.
Bulk export and import enhancements
•The stakeholder column in the exported file now displays deleted stakeholders. For example, a deleted stakeholder named John Smith is labeled John Smith (Deleted).
•If you select Include Asset Details while exporting assets, the export file displays a new column called Stakeholder Details. This column contains stakeholder details such as the role, full name, and email address. You cannot modify the details in this column or use it to re-import assets.
•You can export up to 50,000 assets along with their relationships in a single Microsoft Excel file using the interface or the Export API.
•Reference IDs in export files are now hyperlinked. If you click an ID, you are redirected to the Overview tab of the asset in Data Governance and Catalog.
QuickLook browser extension enhancements
After you install the Informatica QuickLook browser extension, you can configure the POD URL in one of the following ways:
•Select the POD region and the respective cloud provider to automatically configure the POD URL. Optionally, you can manually enter the POD URL.
•If you're logged into Data Governance and Catalog, the browser extension extracts the POD URL from your login details.
•Your organization administrator publishes the POD URL for QuickLook.
New control activity relationships for catalog sources
Microsoft Azure Data Factory now has the following control activity relationships:
•Control Activity - Pipeline Instance
•Control Activity - Package Instance
•Control Activity - Notebook Instance
•Control Activity - Procedure Instance
•Activity Instance - Dataflow
•Activity Instance - Procedure Instance
Microsoft Azure Synapse Analytics now has the following control activity relationships:
•Control Activity - Pipeline Instance
•Control Activity - Package Instance
•Control Activity - Notebook Instance
•Control Activity - Procedure Instance
•Control Activity - Synapse Notebook Instance
•Activity Instance - Dataflow
•Activity Instance - Procedure Instance
Enhancements to Find on Browse page
The tabs on the Browse page display two new filters: All Assets and Top Level Assets. The All Assets filter lets you run Find on all assets within and across all hierarchies, and the Top Level Assets filter runs Find on top-level assets only. These new filters are also available at the row level.
Data element classification category
You can now create and define a category for a data element classification in Metadata Command Center. From the Asset Customization page in Metadata Command Center, you can create or edit values for the category attribute for data element classifications. When you configure data classification for a catalog source, you can assign multiple category values to it.
Data Governance and Catalog displays the category of a data element classification for an asset. You can use category as a filter on the search page. While creating a search-based widget, you can also choose categories as a filter to display on the widget.
Extracting deleted stakeholders
Data Governance and Catalog now allows users with the required permissions to remove deleted stakeholders from an asset. Because deleted user information can now be extracted, removing a deleted stakeholder affects the audit history and its export.
If a user has left the organization or is deleted, the user appears as John Admin (Deleted).
Search enhancements
This release includes the following enhancements to search:
•You can now use substrings to search for assets. Substring search is supported for search queries, asset names, alias names, and business names. You can also search with substrings of incomplete terms.
•You can use search queries to find assets before, after, or between specific dates.
•You can use stakeholder search queries to search for assets with deleted stakeholders. For example, use the Policy asset with John Smith search query to find policy assets assigned to the deleted stakeholder John Smith.
Glossary enhancements
The July release includes the following enhancements to help you identify glossaries:
•When you link a glossary to a data element, you can now view the hierarchy of the selected glossary in the newly added Hierarchy column, which helps you distinguish glossaries that have the same name.
•When you hover over a glossary linked to a data element, the tooltip now includes the reference ID and the complete hierarchical path. This helps you distinguish multiple glossary terms with similar names.
New interface language
The interface for Data Governance and Catalog now supports the Catalan language.
Documentation update for search query examples
The Search Query Examples chapter in the Asset Discovery help has been reworked for better readability. The search query examples are categorized and split into independent topics so that you can find the search queries you need faster.
New catalog sources
This release includes the following new catalog sources:
- You can extract metadata from the following objects of Databricks Unity Catalog:
▪ AI model
▪ AI model versions
- You can use OAuth machine-to-machine authentication to connect to a Databricks source system.
- When you extract metadata from Databricks notebooks, you can use the Python Default Variables Values property to specify values for Python default variables.
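The OAuth machine-to-machine connection to Databricks uses the standard client credentials flow. The sketch below assembles such a token request in Python; the workspace URL and credentials are placeholders, and you should verify the token endpoint and scope against the Databricks OAuth M2M documentation before relying on them.

```python
import base64

def build_token_request(workspace_url, client_id, client_secret):
    """Return the endpoint, headers, and form data for a client-credentials
    token request to a Databricks workspace. Nothing is sent here."""
    token_url = f"{workspace_url}/oidc/v1/token"   # Databricks workspace token endpoint
    credentials = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {credentials}",   # service principal credentials
        "Content-Type": "application/x-www-form-urlencoded",
    }
    data = {"grant_type": "client_credentials", "scope": "all-apis"}
    return token_url, headers, data

# Placeholder workspace URL and service principal credentials.
token_url, headers, data = build_token_request(
    "https://adb-1234567890123456.7.azuredatabricks.net",
    "my-service-principal-id", "my-oauth-secret")
print(token_url)
```

The access token returned by this request is what the catalog source connection presents to the Databricks APIs.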
You can now run incremental metadata extraction jobs on Microsoft Fabric Data Warehouse catalog sources.
A full metadata extraction extracts all objects from the source to the catalog. An incremental metadata extraction considers only the changed and new objects since the last successful catalog source job run. Incremental metadata extraction doesn’t remove deleted objects from the catalog and doesn’t extract metadata of code-based objects.
Use abbreviations and synonyms for glossary association
You can choose to use the data in a lookup table as synonyms and abbreviations to associate glossary terms with technical assets. To use the data in a lookup table, enable the Glossary Association Synonyms option in the lookup table.
CLAIRE recommendations for connection assignment
Connection assignment can be a time-consuming task. To simplify it, you can now use CLAIRE to help build complete lineage for a catalog source: CLAIRE recommends the endpoint catalog source objects to assign to reference catalog source connections. To view CLAIRE recommendations, enable lineage discovery when you configure a catalog source. When you run the catalog source job, Metadata Command Center assigns the reference catalog source connections to the CLAIRE-recommended endpoint catalog source objects. You can then review the list of CLAIRE recommendations and accept or reject them.
For more information about lineage discovery, see Lineage discovery.
Define filters when you link catalog sources
When you link catalog sources to generate lineage automatically with CLAIRE, you can choose to define filters for both source and target catalog sources.
Clone workflows
If you want to create a workflow that is similar to an existing one, you can clone the existing workflow and modify the workflow name and other details as required.
For more information about designing workflows, see Workflows.
Select objects for metadata extraction filters
When you define filters for metadata extraction, you can select an object from a list of objects available in the source system.
You can select an object from a list when you configure the following catalog sources:
New predefined data classifications
You can import and use the following predefined data classifications to perform data classification on a source system:
•Indian Phone Number
•Indian City
•Indian District
•Indian PIN
•Indian State
•Indian Goods and Services Tax Identification Number (GSTIN)
•Indian EPIC Number
•India Passport Number
For more information, see the Predefined data element classifications in Cloud Data Governance and Catalog how-to library article.
Epoch time format for custom partition detection
You can define the configuration file to detect the epoch time format in the following source systems:
•Amazon S3
•Google Cloud Storage
•Hadoop Distributed File System
•Microsoft Azure Blob Storage
•Microsoft Azure Data Lake Storage Gen2
•Microsoft Fabric OneLake
•Oracle Cloud Object Storage
•SFTP File System
Epoch time is the number of seconds or milliseconds elapsed since midnight on January 1, 1970 UTC. For example, the epoch timestamp for 10/11/2021 12:04:41 GMT (MM/dd/yyyy HH:mm:ss) is 1633953881 seconds, or 1633953881000 milliseconds.
To detect the epoch time format, define the JSON file as {"CustomPartitionPatterns": ["@"]}.
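The example timestamp above can be verified with a few lines of Python:

```python
from datetime import datetime, timezone

# The example timestamp: 10/11/2021 12:04:41 GMT.
ts = datetime(2021, 10, 11, 12, 4, 41, tzinfo=timezone.utc)

# Seconds since midnight, January 1, 1970 UTC, and the millisecond equivalent.
epoch_seconds = int(ts.timestamp())
epoch_millis = epoch_seconds * 1000
print(epoch_seconds, epoch_millis)   # 1633953881 1633953881000
```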
Enhancements to data element classification
You can use reference data from Reference 360 to look up values for data element classifications and to associate reference data when you define data element classifications in Metadata Command Center.