Changed behavior

The April 2025 release of Metadata Command Center includes the following changed behaviors.

Predefined workflows

Predefined workflows are now available in Metadata Command Center.
Previously, you had to download predefined bundles from Administrator and import them into Application Integration as processes.
For more information about configuring workflows, see Configure workflows.

Schedule organization upgrades

You can now schedule organization upgrades of Metadata Command Center, Data Governance and Catalog, and Data Marketplace after Informatica makes the version available on the POD that you connect to.
Previously, you could not schedule organization upgrades.
For more information about upgrading your organization, see Upgrade organization to the latest version.

Privileges for organization upgrades

To initiate the organization upgrade, you must now be the organization administrator or have the Manage Upgrade privilege for your user role. If you don't initiate the upgrade, Informatica upgrades your organization six weeks after it makes the version available on the POD.
Previously, to initiate the upgrade, you had to be the organization administrator or have the Super Admin privilege for your user role.
For more information about feature privileges that are available for Metadata Command Center in Administrator, see Feature privileges in Administrator.

Snowflake metadata extraction method

Metadata Command Center can now also use the information_schema.tables, information_schema.views, and information_schema.columns views to extract metadata from a Snowflake catalog source.
Previously, Metadata Command Center used only SHOW commands.
For more information about configuring metadata extraction in Snowflake, see Configure metadata extraction.
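As a rough illustration of the two discovery approaches, the following sketch runs a SHOW command and an information_schema query directly against Snowflake. The account placeholders, the SALES_DB database name, and the queries themselves are assumptions for illustration only, not the exact statements that Metadata Command Center issues.

    # Sketch: two ways to enumerate Snowflake objects for metadata extraction.
    # Requires the snowflake-connector-python package.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="<account_identifier>",   # placeholder credentials
        user="<user>",
        password="<password>",
        database="SALES_DB",              # hypothetical database
    )
    cur = conn.cursor()

    # Previous behavior: object discovery through SHOW commands.
    cur.execute("SHOW TABLES IN DATABASE SALES_DB")
    show_tables = cur.fetchall()

    # Current behavior: the information_schema views can also be used.
    cur.execute(
        """
        SELECT table_catalog, table_schema, table_name, table_type
        FROM SALES_DB.information_schema.tables
        WHERE table_schema <> 'INFORMATION_SCHEMA'
        """
    )
    info_schema_tables = cur.fetchall()

    cur.close()
    conn.close()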

Select objects for metadata extraction

When you define filters for metadata extraction, you can select an object from a list of objects available in the source system.
Previously, you manually entered the object name in the filter value field.
You can select an object from a list when you configure the following catalog sources:
For more information, see Catalog Source Configuration.

Extract pipeline instances from Microsoft Azure Data Factory source systems

When you extract metadata, unique pipeline instances are extracted by default, and each pipeline instance name is followed by a hash. The pipeline run ID is not appended to the name. You can view the pipeline run ID as a property of the pipeline instance.
Previously, you had to enable the operational metadata option to extract pipeline instances, and the pipeline run ID was appended to the pipeline instance name.
For more information, see Microsoft Azure Data Factory.

Catalog source type name change

The following catalog source types have been renamed:
Previous name                      Current name
Microsoft Azure Synapse            Microsoft Azure Synapse Data Warehouse
Microsoft Azure Synapse Script     Microsoft Azure Synapse Data Warehouse Script

Rerun the connection assignment job

You can rerun the connection assignment job to resolve any failures that occurred during connection assignment. When you rerun the connection assignment job, the job reassigns the existing endpoint objects to the selected connection.
Previously, you had to rerun the catalog source to resolve any connection assignment failures.
For more information about connection assignment, see Connections.

Random N Percentage sampling type for data profiling on Google BigQuery

The Random N Percentage sampling type for data profiling tasks uses the TABLESAMPLE clause to select random subsets of data. It selects data based on the percentage that you specify in the Percentage of data to select field.
Previously, the sampling type query selected all rows from the table.
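To illustrate what the TABLESAMPLE clause does, the following sketch runs a comparable sampled query directly against BigQuery. The project, dataset, and table names are hypothetical, and the query is not the exact statement that the data profiling task generates.

    # Sketch: sampling a BigQuery table the way a Random N Percentage
    # profile run might. Requires the google-cloud-bigquery package.
    from google.cloud import bigquery

    client = bigquery.Client()
    percentage = 10  # corresponds to the "Percentage of data to select" field

    # Previous behavior: the profile query read every row in the table.
    full_scan_query = "SELECT * FROM `my_project.my_dataset.orders`"

    # Current behavior: TABLESAMPLE SYSTEM reads an approximate random
    # subset of the table's data blocks, close to the requested percentage.
    sampled_query = (
        "SELECT * FROM `my_project.my_dataset.orders` "
        f"TABLESAMPLE SYSTEM ({percentage} PERCENT)"
    )

    rows = list(client.query(sampled_query).result())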
When you choose the Random N Percentage sampling type, you can run data profiles on the following objects: