New features and enhancements

The July 2025 release of Metadata Command Center includes the following new features and enhancements.

New catalog sources

This release includes the following new catalog sources:
For more information about catalog sources, see Catalog Source Configuration.

Enhanced catalog sources

This release includes the following enhancements to catalog sources:
Amazon S3
You can extract metadata from the following object types:
For more information, see Amazon S3.
Databricks
This release includes the following enhancements:
For more information, see Databricks.
File System
You can extract metadata from TAR and ZIP compressed files.
For more information, see File System.
Google BigQuery
You can extract metadata from external tables.
For more information, see Google BigQuery.
Informatica Intelligent Cloud Services
This release includes the following enhancements:
For more information, see Informatica Intelligent Cloud Services.
Kafka
You can use the SASL_SSL security protocol to connect to source systems (see the sketch after this list of enhancements).
For more information about catalog sources, see Catalog Source Configuration.
Microsoft Azure Data Lake Storage Gen2
You can extract metadata from the following object types:
For more information, see Microsoft Azure Data Lake Storage Gen2.
Microsoft Azure Blob Storage
You can extract metadata from Iceberg tables.
For more information, see Microsoft Azure Blob Storage.
SAP Datasphere
You can now connect to an SAP Datasphere source system with the proxy server settings that you configure for a Secure Agent.
For more information, see SAP Datasphere.
Snowflake
You can add metadata extraction filters based on dynamic tables.
For more information, see Snowflake.
Tableau
You can perform connection assignment to view lineage between an SAP HANA Database source system and a Tableau source system.
For more information, see Tableau.
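For the Kafka enhancement above, the following sketch shows what an SASL_SSL connection to a Kafka cluster typically looks like from a generic client. It uses the open-source kafka-python library; the broker address, SASL mechanism, credentials, and CA certificate path are placeholder values, and Metadata Command Center collects the equivalent settings through the catalog source connection properties rather than through code.

    from kafka import KafkaConsumer

    # Placeholder values for illustration only.
    consumer = KafkaConsumer(
        bootstrap_servers="broker.example.com:9093",  # TLS listener on the cluster
        security_protocol="SASL_SSL",                 # encrypt with TLS, authenticate with SASL
        sasl_mechanism="SCRAM-SHA-512",               # or PLAIN, depending on the cluster setup
        sasl_plain_username="catalog_user",
        sasl_plain_password="catalog_password",
        ssl_cafile="/path/to/ca.pem",                 # CA certificate that signed the broker certificates
    )
    print(sorted(consumer.topics()))                  # topics visible to this principal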

Configure additional data capabilities

You can configure the following additional data capabilities on catalog sources:
For more information about catalog sources, see Catalog Source Configuration.

Profiling enhancements

This release includes the following profiling enhancements:
Microsoft SQL Server
You can run a data profiling job on metadata extracted from any database or schema regardless of the database or schema name that you specified in the connection properties.
Oracle
You can run a data profiling job on metadata extracted from any schema regardless of the schema name that you specified in the connection properties.
Microsoft SQL Server and Oracle
You can profile columns with names up to 128 characters in length.
SAP ERP
You can run a data profiling job on a limited number of rows using the Limit N Rows sampling type.
Teradata Database
You can run profiles on metadata extracted from multiple databases.
For more information, see Catalog Source Configuration.

Clone workflows

If you want to create a workflow that is similar to an existing one, you can clone the existing workflow and then modify the workflow name and other details as needed.
For more information about designing workflows, see Workflows.

Enable lineage discovery for catalog sources

Connection assignment can be a time-consuming task. To simplify it, you can now use CLAIRE to help build the complete lineage of a catalog source by recommending the endpoint catalog source objects to assign to reference catalog source connections. To view CLAIRE recommendations, enable lineage discovery when you configure a catalog source. When you run the catalog source job, Metadata Command Center assigns the reference catalog source connections to the CLAIRE-recommended endpoint catalog source objects. You can then review the list of CLAIRE recommendations and accept or reject them.
For more information about lineage discovery, see Lineage discovery.

Define filters when you link catalog sources

When you link catalog sources to generate lineage automatically with CLAIRE, you can choose to define filters for both source and target catalog sources.
For information about linking catalog sources, see Link catalog sources to generate lineage.

Incremental metadata extraction

You can now run incremental metadata extraction jobs on the following catalog sources:
A full metadata extraction extracts all objects from the source to the catalog. An incremental metadata extraction considers only the objects that changed or were added since the last successful catalog source job run. Incremental metadata extraction doesn't remove deleted objects from the catalog and doesn't extract metadata from code-based objects.
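To make the difference concrete, here is a minimal conceptual sketch, not Metadata Command Center code, of how an incremental pass could select objects by comparing their last-modified time with the time of the last successful run. The object names and timestamps are made up for illustration.

    from datetime import datetime, timezone

    # Hypothetical timestamp of the last successful catalog source job run.
    last_successful_run = datetime(2025, 7, 1, tzinfo=timezone.utc)

    # Hypothetical source objects with their last-modified times.
    source_objects = [
        {"name": "orders",    "modified": datetime(2025, 7, 10, tzinfo=timezone.utc)},  # changed
        {"name": "customers", "modified": datetime(2025, 6, 20, tzinfo=timezone.utc)},  # unchanged
        {"name": "invoices",  "modified": datetime(2025, 7, 12, tzinfo=timezone.utc)},  # new
    ]

    # An incremental pass keeps only objects changed or added after the last successful run.
    incremental = [o for o in source_objects if o["modified"] > last_successful_run]
    print([o["name"] for o in incremental])  # ['orders', 'invoices']

    # Objects deleted from the source never appear in this comparison, which is why
    # incremental extraction doesn't remove deleted objects from the catalog.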
For more information, see Catalog Source Configuration.

Use abbreviations and synonyms for glossary association

You can choose to use the data in a lookup table as synonyms and abbreviations to associate glossary terms with technical assets. To do so, enable the Glossary Association Synonyms option in the lookup table.
For more information, see Catalog Source Configuration.

Select objects for metadata extraction filters

When you define filters for metadata extraction, you can select an object from a list of objects available in the source system.
You can select an object from a list when you configure the following catalog sources:
For more information, see Catalog Source Configuration.

Job retention policy

System jobs and user jobs are deleted after a retention period. The retention period is 30 days for system jobs and IDMC metadata jobs, and 90 days for user jobs.
For information about monitoring jobs, see Jobs in the Administration help.

Predefined data element classifications

You can import and use the following predefined data classifications to perform data classification on a source system:
For more information, see the Predefined data element classifications in Cloud Data Governance and Catalog how-to library article.

Runtime environment

When you choose a runtime environment, you can choose only from Secure Agents installed on the operating system that is applicable to the catalog source.
For more information, see Catalog Source Configuration.

Epoch time format for custom partition detection

You can detect partitions that use the epoch time format in the following source systems:
Epoch time is the number of seconds that have elapsed since midnight on January 1, 1970 UTC. For example, the epoch timestamp for 10/11/2021 12:04:41 GMT (MM/dd/yyyy HH:mm:ss) is 1633953881, and the equivalent timestamp in milliseconds is 1633953881000.
To detect partitions, define the custom partition in JSON format in the configuration file as: {"CustomPartitionPatterns": ["@"]}
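As a quick check of that arithmetic, the following Python sketch converts the example timestamp to its epoch value in seconds and in milliseconds.

    from datetime import datetime, timezone

    # The example timestamp from the text: 10/11/2021 12:04:41 GMT.
    ts = datetime(2021, 10, 11, 12, 4, 41, tzinfo=timezone.utc)

    epoch_seconds = int(ts.timestamp())   # 1633953881
    epoch_millis = epoch_seconds * 1000   # 1633953881000
    print(epoch_seconds, epoch_millis)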

Use reference data from Reference 360 in data classifications

You can use reference data from Reference 360 to look up values when you define data element classifications in Metadata Command Center.
Image: the New data element classification page, showing the Inclusion Rule section with the Advanced toggle enabled and Reference Data highlighted in the Attributes drop-down.
For more information about using reference data to define data element classification, see Data classification.

Data element classification category

You can now create and define a classification category for a data element classification in Metadata Command Center. From the Asset Customization tab on the Customize page, you can create or edit values for a classification category attribute of a data element classification. Then, from the Explore page you can add multiple classification categories to a data classification.
For more information about creating or adding classification categories to a data element classification, see Data classification.

SAP transports

New SAP transports are available for SAP ERP catalog sources.
For more information about SAP transports, see HOW TO: Import the latest transports for SAP catalog sources in Metadata Command Center.