Fixed limitations

The following tables describe the Informatica Intelligent Cloud Services Data Profiling fixed limitations. Not all monthly releases include fixed limitations.
The following issues are fixed in the April 2025 release:

CP-13101: Partitioning for data profiling tasks is not honored when the number of rows equals or exceeds 2,147,483,647.
The following issues are fixed in the November 2024 release:

CP-13102: When you run a profile on Microsoft Fabric Data Warehouse assets with the First N Rows sampling option, the profiling fails with the following error: [ERROR] Failed to run the Microsoft Fabric Data Warehouse mapping because of the following error: [Incorrect syntax near '5'.]
The following issues are fixed in the October 2024 release:

CP-13096: When you run a profiling task after enabling CLAIRE Insight inference, the CLAIRE Insight inference fails intermittently with the following error: Could not roll back JPA transaction; nested exception is org.hibernate.TransactionException: Unable to rollback against JDBC Connection

CP-12976: Profiling fails on Snowflake with the following error when the schema name is in lowercase: org.springframework.web.client.HttpServerErrorException$InternalServerError: 500 Internal Server Error

CP-12962: When you create a mapplet connection profile from a source mapplet with two joined SQL Snowflake connections and run the profile, the job fails with the following error: Profile job failed with error null.

CP-12906: When you export a profile that is configured with an SQL filter condition, the exported file doesn't contain the filter condition.

CP-12905: Profiling fails on Snowflake with the following error while processing rows of a large table: SQL [n/a]; nested exception is org.hibernate.exception.DataException: could not execute statement

CP-12672: When you create a data profiling task on a data source with a JDBC V2 connection, you can't select a table as the source object after you select a schema.
The following issues are fixed in the August 2024 release:

CP-12964: Data Profiling rounds the profiling result percentage to 0% instead of 0.01% when only one row out of more than 10 million rows matches the specified data quality rule.

CP-12827: When you update the advanced options for a pattern and rerun a data profiling task, the result displays an incorrect number of rows. This issue occurs if you drill down or create queries on patterns that are consolidated under the Others category after you run a data profiling task.
The following issues are fixed in the July 2024 release:

CP-12908: When you enter additional JDBC connection parameters for a Snowflake connection in a profiling task, Data Profiling recognizes the following keys in uppercase but not in lowercase characters:
  • DB
  • SCHEMA

CP-12683: When you test a rule specification that CLAIRE generated as an insight on a profile, the test fails.

CP-12385: When you run a profiling job that reads data over a JDBC V2 connection and you filter the data on a date or timestamp column, the job fails.

CP-10872: When you run a profiling job on a mapplet, the job fails if the source and target fields have the same names.

CP-12731: A profile job can fail on an upgraded POD if the profile was running at the time that the POD was upgraded.

CP-12681: Data Quality and Governance adds a duplicate rule occurrence to a scorecard profile in the following scenario:
  • You import a profiling task and run the task.
  • You import the profiling task a second time and run the task again.

CP-12665: When you create a mapplet connection profile from a source mapplet with two joined SQL Snowflake connections and run the profile, the job fails.
The following issues are fixed in the May 2024 release:

CP-12458: When you create and save a Databricks profile and then edit the Advanced Options configuration, the columns disappear and the profile is not saved.

CP-12446: When you run a Databricks profile, add a filter or use the SQL override option, and rerun the profile, the job fails with the following error: [FATAL] The following SQL exception occurred: [Unable to prepare statement for query...]

CP-9295: When you run a Databricks profile on a source that contains mixed-case column names and then apply a query on the columns, the query job fails with the following error: SQL Error [ FnName: Prepare - [Simba][Hardy] (80) Syntax or semantic analysis error thrown in server while executing query.

CP-12465: When you run a Databricks profile on complex data types in advanced mode, the profile results for the columns include junk values.

CP-12614: When you add a rule to a profile and then run the profile, you can't compare rules and columns on the Compare Columns tab.

CP-12613: When you want to run a profile from the Explore page, the Run option doesn't appear in the drop-down menu.

CP-12442: When you run a profile on a source mapplet and select a Databricks source object, the profile job fails with the following error: The selected mapplet <> is not supported as a source object for profiling.

CP-12417: When you run an Amazon Redshift profile on assets with the SUPER data type, the job fails with the following error: Invalid datatype: super.

CP-12403: When you run a query on Databricks profile results, the job fails with the following error: Syntax or semantic analysis error thrown in server while executing query. This issue occurs when you use mixed case or special characters in your query.

CP-12403: When you run several profiling tasks concurrently, the insight inference job fails with the following error: Job for profile <> threw an exception. Returning no results.

CP-12616: When you run a profile on a Salesforce source object, the job fails when the scale of some numeric fields is automatically defined as greater than the precision.