Before you use Google BigQuery V2 Connector, you must complete the following prerequisite tasks:
•Ensure you have the project ID, dataset ID, source table name, and target table name when you create mappings in Data Integration.
•Verify that you have read and write access to the Google BigQuery dataset that contains the source table and target table.
•Verify that you have the permissions required to read data from and write data to Google BigQuery tables. Without them, the mapping fails.
•If your organization routes data through a proxy, virtual private cloud, or protective firewall, configure the firewall to allow the www.googleapis.com and www.accounts.google.com URIs so that Google BigQuery V2 Connector can transfer data.
•If you use bulk mode to write data to Google BigQuery, verify that you have write access to the Google Cloud Storage path where the Secure Agent creates the staging file.
•If you use DTM staging mode to write data to Google BigQuery, ensure that DTM staging mode is enabled on the Secure Agent.
•If you use staging mode to read data from Google BigQuery, verify that you have read access to the Google Cloud Storage path where the Secure Agent creates the staging file that stores data from the Google BigQuery source.
•To read or write Avro, Parquet, or JSON files, verify that your organization does not have more than one Cloudera 6.1 distribution enabled.
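As a quick sanity check before you run a mapping, the sketch below gathers the identifiers and endpoints named in the prerequisites above. It is a hypothetical, stdlib-only helper: the configuration keys and function name are illustrative, and it does not verify actual Google Cloud permissions, which require the BigQuery API and your project credentials.

```python
# Hypothetical preflight check for a Google BigQuery V2 mapping.
# It only confirms that the mapping identifiers are filled in; real
# read/write access checks must be made against Google Cloud itself.

# Endpoints the firewall must allow for the connector to transfer data.
REQUIRED_ENDPOINTS = ["www.googleapis.com", "www.accounts.google.com"]


def missing_identifiers(config: dict) -> list:
    """Return the mapping identifiers that are empty or absent."""
    required = ["project_id", "dataset_id", "source_table", "target_table"]
    return [key for key in required if not config.get(key)]


# Example: the target table name has not been set yet.
config = {
    "project_id": "my-project",      # placeholder project ID
    "dataset_id": "my_dataset",      # placeholder dataset ID
    "source_table": "src_table",
    "target_table": "",
}
print(missing_identifiers(config))   # reports the unset target table
```

A result other than an empty list means the mapping cannot be created yet; fix the listed identifiers before checking dataset and Cloud Storage access.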