Data Domain Discovery on the Databricks Cluster

Use the Databricks cluster to perform data discovery on the Spark engine. The Databricks cluster is an environment for running Spark jobs. To perform data discovery for Azure sources, run a profile on the Databricks cluster.
Perform the following steps to connect to the Azure sources from the Databricks cluster:

Prerequisite

Add the following advanced Spark configuration parameters for the Databricks cluster and restart the cluster:

Download and Copy the JAR files for the Profiling Warehouse

  1. Get the Oracle DataDirect JDBC driver JAR files for the profiling warehouse. You can copy the files from the following location: <INFA_HOME>/services/shared/jars/thirdparty/com.informatica.datadirect-dworacle-6.0.0_F.jar.
  2. Place the Oracle DataDirect JDBC driver JAR files in the following locations:
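The copy step above can be sketched from a shell on the Informatica domain node. This is a sketch only: the installation root and the destination directory are placeholders, because the required locations depend on your deployment.

```shell
# Sketch only: copy the Oracle DataDirect JDBC driver JAR for the
# profiling warehouse. INFA_HOME and the destination path are
# placeholders; substitute the locations that apply to your deployment.
INFA_HOME=/opt/informatica

cp "$INFA_HOME/services/shared/jars/thirdparty/com.informatica.datadirect-dworacle-6.0.0_F.jar" \
   /path/to/required/location/
```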

Download and Copy the JAR files for the JDBC Delta Objects

  1. Get the JDBC .jar files for the JDBC delta objects. You can download the files from the database vendor website.
  2. Add the JDBC delta JAR files to the genericJDBC.zip file in the following location: <INFA_HOME>/services/CatalogService/ScannerBinaries.
  3. Recycle the Catalog Service.
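Assuming the driver JARs are already downloaded, the last two steps might look like the following from a shell on the domain node. The JAR file name, domain name, and credentials are placeholders, and the `infacmd` disable/enable pair is shown as one way to recycle the service; you can also recycle it from the Administrator tool.

```shell
# Sketch only: add a JDBC delta driver JAR (the file name is a
# placeholder) to genericJDBC.zip, then recycle the Catalog Service.
cd "$INFA_HOME/services/CatalogService/ScannerBinaries"
zip genericJDBC.zip your-jdbc-delta-driver.jar

# Recycle the Catalog Service (domain, user, and service names are examples).
infacmd.sh isp DisableService -dn MyDomain -un Administrator -pd password -sn Catalog_Service
infacmd.sh isp EnableService  -dn MyDomain -un Administrator -pd password -sn Catalog_Service
```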

Configure Custom Properties on the Data Integration Service

  1. Launch Informatica Administrator, and then select the Data Integration Service in the Domain Navigator.
  2. Click the Custom Properties option on the Properties tab.
  3. Set the following custom property to automatically install the Informatica libraries on the Databricks cluster:
     ExecutionContextOptions.databricks.enable.infa.libs.autoinstall: true
  4. Recycle the Data Integration Service.
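As an alternative to the Administrator tool, the same custom property can be set from the command line. This is a sketch only; the domain name, credentials, and service name are placeholders for your environment.

```shell
# Sketch only: set the custom property on the Data Integration Service
# with infacmd, then recycle the service so the change takes effect.
infacmd.sh isp UpdateServiceOptions -dn MyDomain -un Administrator -pd password \
  -sn Data_Integration_Service \
  -o ExecutionContextOptions.databricks.enable.infa.libs.autoinstall=true

infacmd.sh isp DisableService -dn MyDomain -un Administrator -pd password -sn Data_Integration_Service
infacmd.sh isp EnableService  -dn MyDomain -un Administrator -pd password -sn Data_Integration_Service
```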

Supported sources for data domain discovery on the Databricks Cluster