The July 2025 release of Data Governance and Catalog includes the following new features and enhancements.
Introducing the AI System asset
The AI System asset represents a machine learning-based system that uses multiple AI models to perform a range of tasks, such as generating predictions, content, recommendations, decisions, and actions. The asset records information about the technologies, AI models, and data that are used to perform these tasks.
You can create an AI System asset from the Data Governance and Catalog interface or use the bulk import template to create multiple AI System assets in a single operation. You can also use the Metadata Command Center workflow capabilities when you create or edit AI System assets.
You can discover the AI System assets that you create on the Browse page in Data Governance and Catalog. On the Customize page in Metadata Command Center, you can customize the predefined attributes that are available to AI System assets.
When you create a Data Set asset, you can now choose to specify an AI System asset as the parent of the data set instead of a System asset. You can create several direct relationships between AI System assets and other assets.
The following image shows a sample AI System asset:
For more information about the AI System asset, see AI System.
Enhancements to AI Model assets
The following enhancements apply to the AI Model asset:
Note: If your administrator has configured a custom layout for AI Model assets, you can't view these changes until your administrator updates the layouts that they created in Metadata Command Center.
Model lineage
The July release introduces a new lineage type that is unique to AI Model assets, Model Lineage. On the Model Lineage tab of an AI Model asset, you can view a visual representation of the data sets and other AI models that are associated with the AI model.
The following image shows the Model Lineage tab of an AI Model asset:
New relationships between AI Model assets and other asset types
You can establish the following types of direct relationships between AI Model assets in addition to the types of relationship currently available:
Source Asset Type | Target Asset Type | Relationship Type
------------------|-------------------|----------------------
AI Model          | AI Model          | is the Base for
AI Model          | AI Model          | is Quantized into
AI Model          | AI Model          | is Derived from
AI Model          | AI Model          | is a Quantization of
You can no longer establish the following types of direct relationship between an AI Model asset and other assets:
Source Asset Type | Target Asset Type | Relationship Type
------------------|-------------------|------------------------
AI Model          | AI Model          | Contains
AI Model          | AI Model          | is Used in
AI Model          | Data Element      | is Generating (Target)
AI Model          | Data Element      | is Using (Source)
AI Model          | Data Set          | is Generating (Target)
AI Model          | Data Set          | is Using (Source)
Note: AI Model asset relationships that you defined before the July 2025 upgrade are unaffected.
Updated Data tab experience
The Data tab of an AI Model asset now displays the data sets that are used to train and validate the AI model.
Previously, the Data tab displayed the data elements that the AI model uses.
Additional metrics for AI Model assets
In Metadata Command Center, you can now define Evaluation Metrics for AI Model assets that enable you to record additional metrics that pertain to the model. You can create a maximum of 20 evaluation metrics. Furthermore, in Metadata Command Center you can customize the Bias Score and Drift Score metrics for the AI Model asset type.
The following image shows the Overview tab of an AI Model asset:
Create AI models from technical assets
You can now create an AI Model business asset from an AI Model Core Version technical asset that you extracted from a Databricks source system.
For more information about the AI Model asset, see AI Model.
Enhancements to the Tasks Inbox
Tasks Inbox
The Workflow Inbox page is now renamed to Tasks Inbox.
Simplified inbox experience
On the Tasks Inbox page, the My Tasks and Unassigned Tasks tabs are replaced with a single Tasks tab. The Tasks tab displays all the tasks that you have claimed and the tasks that are available for you to claim.
Add comments to tasks
When you perform an action on a task, you can now add comments to the task.
On the Tasks Inbox page, Data Governance and Catalog now displays the Comments tab for a task that you select in the Tasks grid. The Comments tab displays the comments that you added to the task and also displays comments that are added to the ticket that is associated with the task.
The following image shows the Comments tab for a task:
View the workflow ID in ticket history
On the History tab of a ticket, Data Governance and Catalog now displays the WorkflowId attribute in the Current Attributes column. The WorkflowId attribute displays the unique identifier of the workflow that is associated with the ticket.
Note: The Date and Changed By fields are updated because Data Governance and Catalog added the WorkflowId attribute to your existing tickets.
Unclaim tasks
You can now unclaim a task that you previously claimed. Unclaim a task to release yourself from its responsibilities if you are unable to complete the task before the due date.
The following image shows the Unclaim button for a task:
Note: Before you upgrade to the July 2025 release, complete all tasks pertaining to workflows created through Application Integration. After the upgrade, any open tasks associated with the Application Integration workflows will expire, and the associated tickets will be cancelled.
You can use the following public REST APIs to interact with the assets in Data Governance and Catalog:
•Manage Assets API. Use the API to create, update, and delete business assets, including business terms, metrics, domains, policies, systems, and data sets. You can also enrich technical assets with business context.
•Manage Relationships API. Use the API to create and delete relationships between assets.
For more information about the Manage Assets API, see Manage Assets in the Data Governance and Catalog help.
For more information about the Manage Relationships API, see Manage Relationships in the Data Governance and Catalog help.
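For illustration, a call to the Manage Assets API could be assembled as in the following sketch. The base URL, endpoint path, session header, and payload fields shown here are assumptions for illustration only; refer to Manage Assets in the Data Governance and Catalog help for the documented request format.

```python
# Sketch of building a Manage Assets API request to create a business term.
# The URL path, header name, and payload shape are hypothetical examples.
import json
import urllib.request

BASE_URL = "https://example.informaticacloud.com"  # hypothetical POD URL


def build_create_asset_request(session_id, asset):
    """Build (but do not send) a POST request that creates a business asset."""
    body = json.dumps({"assets": [asset]}).encode("utf-8")
    return urllib.request.Request(
        url=f"{BASE_URL}/data360/assets/v1",   # assumed endpoint path
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "INFA-SESSION-ID": session_id,     # assumed auth header
        },
    )


req = build_create_asset_request(
    "my-session-token",
    {"type": "BusinessTerm", "name": "Customer", "description": "A buyer."},
)
print(req.get_method(), req.full_url)
```

In a real integration, you would obtain a session token from the IDMC login API and send the request with `urllib.request.urlopen(req)` or an HTTP client of your choice.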
Data Access Management enhancements
The July release includes the following enhancements to Data Access Management functionality:
•You can now use the Data Governance and Catalog workflow capabilities to design single- or multi-step approval workflows in Metadata Command Center for data access assets. The workflows start when you create or edit data access assets on the Data Access Management page in Data Governance and Catalog.
With the introduction of workflows, a data access asset that you create is published immediately unless you have enabled a workflow. Consequently, the Publish REST API endpoint is removed. If you enable a workflow, after you create or edit a data access asset, you must add stakeholders and submit the asset for approval through the user interface.
For more information about designing workflows, see Workflows in the Metadata Command Center help.
•You can push down data filter policies into Snowflake's row access policies.
This release includes the following enhancements to data quality:
•You can associate a workflow to a Data Quality Failure ticket by selecting the Event Category as Data Quality and the Event Type as Data Quality Failure in Metadata Command Center.
•You can create a Data Quality Failure ticket from the Rule Occurrence tab of an asset.
•In a Data Quality widget, if you select a saved search or enter new search criteria, you can view the data quality results by using the bar chart or score chart widget types, in addition to the donut chart widget type.
•You can export data quality charts in the .png format or export assets in the .csv or .xls format.
•In a Data Quality widget, you see the new Score and Date columns.
•In a Data Quality widget, you can preview the latest data quality scores and create data quality failure tickets.
You can select Data Freshness and Data Volume as new filter metrics for a catalog source in Metadata Command Center. After you select the Data Freshness and Data Volume metrics and run the catalog source in Metadata Command Center, you can monitor the most recently updated and refreshed data, along with volume metrics for your data sets, in the new Freshness and Volume category on the Data Observability tab in Data Governance and Catalog.
The following image shows the new data observability metrics in a catalog source:
Lineage visualizations can become large and complex because they may include mapping tasks, mapping task instances, and other transformation objects. This can make it difficult to focus on the source and target objects in the lineage.
To quickly assess the source and target objects in a lineage without the distraction of transformation details, you can hide the data processes, which are the objects that perform transformations or other operations on data. To hide data processes, launch the technical lineage for an asset and select the Hide Data Process option from the lineage settings.
You can also save this setting as part of your lineage layout preferences.
For more information about hiding data processes in data lineage, see View data lineage.
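Conceptually, hiding a data process collapses the process node out of the lineage graph and connects its inputs directly to its outputs, so only data objects remain visible. The following sketch illustrates this idea under the assumption that lineage is modeled as a set of directed edges; it is not the product's actual algorithm.

```python
def hide_data_processes(edges, process_nodes):
    """Collapse process nodes out of a lineage graph.

    edges: iterable of (source, target) pairs.
    process_nodes: nodes representing mappings or transformations to hide.
    Returns a set of edges in which each hidden node's predecessors are
    connected directly to its successors.
    """
    process_nodes = set(process_nodes)
    edges = set(edges)
    for p in process_nodes:
        preds = {s for s, t in edges if t == p}
        succs = {t for s, t in edges if s == p}
        # Bypass the process node, then drop its edges.
        edges |= {(s, t) for s in preds for t in succs}
        edges = {(s, t) for s, t in edges if p not in (s, t)}
    return edges


# A mapping task between a source table and a target table disappears,
# leaving a direct source-to-target edge.
lineage = [("src_table", "mapping_task"), ("mapping_task", "tgt_table")]
print(hide_data_processes(lineage, ["mapping_task"]))
# → {('src_table', 'tgt_table')}
```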
Bulk export and import enhancements
•The Stakeholder column in the exported file now displays deleted stakeholders. For example, a deleted stakeholder is labeled as John Smith (Deleted).
•If you select Include Asset Details when you export assets, the export file displays a new column called Stakeholder Details. This column contains stakeholder details such as the role, full name, and email address. You cannot modify the details in this column or use the column to re-import assets.
•You can export up to 50,000 assets along with their relationships in a single Microsoft Excel file by using the interface or the Export API.
•Reference IDs in export files are now hyperlinked. If you click an ID, you are redirected to the Overview tab of the asset in Data Governance and Catalog.
For more information about the bulk import process, see Bulk import process.
QuickLook browser extension enhancements
After you install the Informatica QuickLook browser extension, you can configure the POD URL in one of the following ways:
•Select the POD region and the respective cloud provider to automatically configure the POD URL. Optionally, you can manually enter the POD URL.
•If you're logged into Data Governance and Catalog, the browser extension extracts the POD URL from your login details.
•If you're part of a Google Workspace, your administrator can publish the POD URL for QuickLook.
The following image shows the Informatica QuickLook dialog box on the browser extension page:
The tabs on the Browse page display two new filters: All Assets and Top Level Assets. Apply the All Assets filter to find assets within and across all hierarchies, or apply the Top Level Assets filter to find parent assets only. Using Find at the table level or at the row level displays only the first 25 results.
The following image shows the filters menu for Find available on some tabs:
For more information about finding assets in the Browse tab, see Browse for assets.
Data element classification category
You can now create and define a classification category for a data element classification in Metadata Command Center. From the Asset Customization tab on the Customize page, you can create or edit values for a classification category attribute of a data element classification. Then, from the Explore page you can add multiple classification categories to a data classification.
The following image shows the Classification Category panel on a new data element classification page:
For more information about creating or adding classification categories to a data element classification, see Data classification.
Data Governance and Catalog displays the category of a data element classification for an asset. You can use category as a filter on the search page. While creating a search-based widget, you can also choose categories as a filter to display on the widget.
The following image displays the Classification Categories panel on a data element classification page:
Data Governance and Catalog now allows users with the required permissions to remove deleted stakeholders from an asset. Because deleted user information can now be extracted, removing a deleted stakeholder affects the audit history and its export.
If the user has left the organization or is deleted, the user appears as John Admin (Deleted).
This release includes the following enhancements to search:
•You can now use contractions or partial terms of asset names to search for assets. Contractions are supported for search queries, asset names, alias names, and business names. For example, for the asset name Customer_AccountBalance_rate, you can search for the asset using Cust*, AccountBalance, accountbal, or other contracted forms.
•Using search queries, you can find assets before, after, and between a specific date and time.
•You can use stakeholder search queries to search for assets with deleted stakeholders. For example, you use the Policy asset with John Smith search query to search for policy assets assigned to the deleted stakeholder, John Smith.
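The contraction behavior described above can be pictured as wildcard or prefix matching against the tokens of an asset name. The following sketch is only an illustration of which query forms match; it is not the product's actual search algorithm.

```python
# Illustrative model of contraction matching: a query with "*" acts as a
# wildcard pattern, and a partial term matches as a case-insensitive prefix
# of any token in the asset name. This is an assumption for illustration.
import re


def matches(asset_name, query):
    tokens = re.split(r"[_\s]+", asset_name.lower())
    q = query.lower()
    if "*" in q:                      # explicit wildcard query, e.g. Cust*
        pattern = q.replace("*", ".*")
        return any(re.fullmatch(pattern, t) for t in tokens)
    # Partial term: prefix match against any token.
    return any(t.startswith(q) for t in tokens)


name = "Customer_AccountBalance_rate"
for query in ("Cust*", "AccountBalance", "accountbal"):
    print(query, matches(name, query))
```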
The July release includes the following enhancements to help you identify glossaries:
•When you link a glossary to a data element, you can now view the hierarchy of the selected glossary in the newly added Hierarchy column to easily distinguish the glossaries with the same name.
•When you hover over a glossary that is linked to a data element, the tooltip now includes the reference ID and the complete hierarchical path. This helps you distinguish multiple glossary terms with similar names.
New interface language
The interface for Data Governance and Catalog now supports the Catalan language.
Documentation update for search query examples
The Search Query Examples chapter in the Asset Discovery help is now reworked for better readability. The search query examples are categorized and split into independent topics. This structure helps you find the desired search queries faster.
- You can extract metadata from the following objects of Databricks Unity Catalog:
▪ AI model
▪ AI model versions
- You can extract table metadata from information_schema for Databricks Unity Catalog.
- You can use OAuth machine-to-machine authentication to connect to a Databricks source system.
- When you extract metadata from Databricks notebooks, you can use the Python Default Variables Values property to specify values for Python default variables.
This release includes the following profiling enhancements:
Microsoft SQL Server
You can run a data profiling job on metadata extracted from any database or schema regardless of the database or schema name that you specified in the connection properties.
Oracle
You can run a data profiling job on metadata extracted from any schema regardless of the schema name that you specified in the connection properties.
Microsoft SQL Server and Oracle
You can profile columns with names up to 128 characters in length.
SAP ERP
You can run a data profiling job on a limited number of rows using the Limit N Rows sampling type.
Teradata Database
You can run profiles on metadata extracted from multiple databases.
You can now run incremental metadata extraction jobs on the following catalog sources:
•Microsoft Fabric Data Lakehouse
•Microsoft Fabric Data Warehouse
A full metadata extraction extracts all objects from the source to the catalog. An incremental metadata extraction considers only the changed and new objects since the last successful catalog source job run. Incremental metadata extraction doesn’t remove deleted objects from the catalog and doesn’t extract metadata of code-based objects.
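The incremental rule above can be sketched in a few lines. The object shape and timestamp field are assumptions for illustration; the point is that only objects changed or created after the last successful run are considered.

```python
# Minimal sketch of incremental metadata extraction: keep only objects
# modified since the last successful run. Deleted source objects are not
# detected, which is why they remain in the catalog.
from datetime import datetime, timezone


def incremental_extract(source_objects, last_successful_run):
    """Return source objects that are new or changed since the last run."""
    return [
        obj for obj in source_objects
        if obj["modified_at"] > last_successful_run
    ]


last_run = datetime(2025, 7, 1, tzinfo=timezone.utc)
objects = [
    {"name": "orders", "modified_at": datetime(2025, 7, 15, tzinfo=timezone.utc)},
    {"name": "customers", "modified_at": datetime(2025, 6, 1, tzinfo=timezone.utc)},
]
print([o["name"] for o in incremental_extract(objects, last_run)])
# → ['orders']
```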
Use abbreviations and synonyms for glossary association
You can choose to use the data in a lookup table as synonyms and abbreviations to associate glossary terms with technical assets. To use the data in a lookup table, enable the Glossary Association Synonyms option in the lookup table.
Connection assignment can be a time-consuming task. To simplify it, you can now use CLAIRE to help build the complete lineage of a catalog source. CLAIRE recommends the endpoint catalog source objects to assign to reference catalog source connections. To view CLAIRE recommendations, enable lineage discovery when you configure a catalog source. When you run the catalog source job, Metadata Command Center assigns the reference catalog source connections to the CLAIRE-recommended endpoint catalog source objects. You can then view the list of CLAIRE recommendations and accept or reject them.
For more information about lineage discovery, see Lineage discovery.
Define filters when you link catalog sources
When you link catalog sources to generate lineage automatically with CLAIRE, you can choose to define filters for both source and target catalog sources.
If you want to create a workflow that is similar to an existing one, you can clone the existing workflow and then modify the workflow name and other details as required.
For more information about designing workflows, see Workflows.
Select objects for metadata extraction filters
When you define filters for metadata extraction, you can select an object from a list of objects available in the source system.
You can select an object from a list when you configure the following catalog sources:
You can import and use the following predefined data classifications to perform data classification on a source system:
•Indian Phone Number
•Indian City
•Indian District
•Indian PIN
•Indian State
•Indian Goods and Services Tax Identification Number (GSTIN)
•Indian EPIC Number
•Indian Passport Number
For more information, see the Predefined data element classifications in Cloud Data Governance and Catalog how-to library article.
Epoch time format for custom partition detection
You can detect partitions that use the epoch time format in the following source systems:
•Amazon S3
•Google Cloud Storage
•Hadoop Distributed File System
•Microsoft Azure Blob Storage
•Microsoft Azure Data Lake Storage Gen2
•Microsoft Fabric OneLake
•Oracle Cloud Object Storage
•SFTP File System
Epoch time is the number of seconds elapsed since midnight January 1, 1970 UTC. For example, the epoch timestamp for 10/11/2021 12:04:41 GMT (MM/dd/yyyy HH:mm:ss) is 1633953881, and the timestamp in milliseconds is 1633953881000.
To detect partitions, define the custom partition in JSON format in the configuration file as: {"CustomPartitionPatterns": ["@"]}
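The conversion in the example above can be verified in a few lines; the date and timestamp values are taken directly from the text.

```python
# Verify the epoch timestamp for 10/11/2021 12:04:41 GMT from the example.
from datetime import datetime, timezone

dt = datetime(2021, 10, 11, 12, 4, 41, tzinfo=timezone.utc)

epoch_seconds = int(dt.timestamp())
epoch_millis = epoch_seconds * 1000

print(epoch_seconds)  # 1633953881
print(epoch_millis)   # 1633953881000
```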
Use reference data from Reference 360 in data classifications
You can use reference data from Reference 360 to look up values when you define data element classifications in Metadata Command Center.
For more information about using reference data to define data element classification, see Data classification.
Job retention policy
System jobs and user jobs are deleted after a retention period. The retention period is 30 days for system jobs and IDMC metadata jobs, and 90 days for user jobs.
For information about monitoring jobs, see Jobs in the Administration help.
SAP transports
New SAP transports are available for SAP ERP catalog sources.