Job cluster

Configure the Spark parameters for the job cluster to use AWS or Azure staging, based on where the cluster is deployed.
You also need to enable the Secure Agent properties for runtime processing on the job cluster.
You can use a job cluster only on the Linux operating system.

Spark configuration

Before you connect to the job cluster, you must configure the Spark parameters on AWS or Azure.

Configuration on AWS

Add the following Spark configuration parameters for the job cluster and restart the cluster:
Ensure that the configured access key and secret key have access to the buckets where you store the data for Databricks tables.
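The exact parameter values depend on your environment. As an illustration only, a typical AWS staging configuration supplies the access key and secret key through the standard Hadoop S3A properties (the property names below are standard Hadoop options and the placeholder values are assumptions, not taken from this document):

```
spark.hadoop.fs.s3a.access.key <your-access-key>
spark.hadoop.fs.s3a.secret.key <your-secret-key>
```

Set these in the cluster's Spark configuration, then restart the cluster so the changes take effect.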

Configuration on Azure

Add the following Spark configuration parameters for the job cluster and restart the cluster:
Ensure that the configured client ID and client secret have access to the file systems where you store the data for Databricks tables.
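The exact parameter values depend on your environment. As an illustration only, a typical Azure staging configuration authenticates to ADLS Gen2 through a service principal using the standard Hadoop ABFS OAuth properties (the property names below are standard Hadoop options and the placeholder values are assumptions, not taken from this document):

```
spark.hadoop.fs.azure.account.auth.type OAuth
spark.hadoop.fs.azure.account.oauth.provider.type org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
spark.hadoop.fs.azure.account.oauth2.client.id <application-client-id>
spark.hadoop.fs.azure.account.oauth2.client.secret <client-secret>
spark.hadoop.fs.azure.account.oauth2.client.endpoint https://login.microsoftonline.com/<tenant-id>/oauth2/token
```

Set these in the cluster's Spark configuration, then restart the cluster so the changes take effect.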

Configure Secure Agent properties

To connect to the job cluster, enable the Secure Agent properties for runtime processing.
Note: This topic does not pertain to Data Ingestion and Replication.
  1. In Administrator, select the Secure Agent listed on the Runtime Environments tab.
  2. Click Edit.
  3. In the System Configuration Details section, select Data Integration Server as the Service and DTM as the Type.
  4. Edit the JVMOption field and set the value to -DUseDatabricksSql=false.