Before you begin

Before you configure and run a mapping, complete the required prerequisites.

Verify permissions

Permissions define the level of access for the operations that you can perform in Databricks.
The following table lists the SQL commands that you need permission to run to perform specific operations in Databricks:

SQL command - Description
CREATE TABLE - Create a managed table.
CREATE VIEW - Create a view.
CREATE TABLE ... LOCATION <EXTERNAL LOCATION> - Create an external table.
DROP TABLE - Delete a table.
DROP VIEW - Delete a view.
ALTER TABLE - Add a column.
SELECT FROM - Read from a table.
INSERT INTO - Use the Insert operation with the Write Disposition option set to Append.
INSERT OVERWRITE INTO - Use the Insert operation with the Write Disposition option set to Truncate.
MERGE INTO - Use the Update, Upsert, and Delete operations.
DELETE FROM - Empty a table.
COPY INTO - Load data into the target (ecosystem SQL ELT with CSV as the source).
SELECT ON FILE - Retrieve data from a file staged in Amazon S3, ADLS Gen2, or a personal staging location using the SELECT FROM command.
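As a sketch of the statements these permissions cover, the following examples use a hypothetical target table tgt_table and source view src_updates; both names are illustrative and do not come from the connector:

```sql
-- Create a managed table (requires CREATE TABLE).
CREATE TABLE tgt_table (col1 INT, col2 STRING);

-- Append rows (Insert operation with Write Disposition set to Append).
INSERT INTO tgt_table (col1, col2) VALUES (1, 'a');

-- Truncate and reload (Insert operation with Write Disposition set to Truncate).
INSERT OVERWRITE tgt_table SELECT col1, col2 FROM src_updates;

-- Update, Upsert, and Delete operations are issued as a MERGE INTO statement.
MERGE INTO tgt_table t
USING src_updates s
ON t.col1 = s.col1
WHEN MATCHED THEN UPDATE SET t.col2 = s.col2
WHEN NOT MATCHED THEN INSERT (col1, col2) VALUES (s.col1, s.col2);
```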
The following examples show how the SELECT FROM command retrieves data from each staging location:
Read from a file in Amazon S3:
INSERT INTO `tgt_table` (col1, col2) SELECT col1, col2 FROM parquet.`s3a://staging.bucket/infa_tmp-20231120_062134_910212391`
Read from a file in ADLS Gen2:
INSERT INTO `tgt_table` (col1, col2) SELECT col1, col2 FROM parquet.`abfss://blob@storage_account.dfs.core.windows.net/infa_tmp-20231120_070825_17050631/`
Read from a file in a personal staging location:
INSERT INTO `tgt_table` (col1, col2) SELECT col1, col2 FROM parquet.`stage://tmp/username/infa_tmp-20231120_025852_313310201`
Personal staging location
To use a personal staging location as the staging environment, ensure that you have access to the personal staging location APIs.