
Troubleshooting a mapping task

The time zone for the Date and Timestamp data type fields in the Parquet or Avro file formats defaults to the Secure Agent host machine time zone.
When you run a mapping in advanced mode to read from or write to fields of the Date and Timestamp data types in the Parquet or Avro file formats, the time zone defaults to the Secure Agent host machine time zone.
To change the Date and Timestamp values to the UTC time zone, you can either set the Spark properties globally in the Secure Agent installation directory so that they apply to all tasks in the organization that use this Secure Agent, or set the Spark session properties for a specific task from the task properties:
To set the properties globally, perform the following tasks:
  1. Add the time zone properties to the <Secure Agent installation directory>/apps/At_Scale_Server/41.0.2.1/spark/custom.properties file, as sketched in the example after this list.
  2. Restart the Secure Agent.
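The exact entries depend on your environment. As a minimal sketch, assuming the standard Spark property names and the JVM user.timezone option, the custom.properties entries would look like the following:

   # Force UTC for Date and Timestamp handling on the Spark driver and executors (assumed property names)
   spark.driver.extraJavaOptions=-Duser.timezone=UTC
   spark.executor.extraJavaOptions=-Duser.timezone=UTC

Setting the option on both the driver and the executors ensures that every JVM that reads or writes the data uses the same time zone.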
To set the properties for a specific task, navigate to the Spark session properties in the task properties, and perform the following steps:
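As a sketch, assuming the same Spark properties can be set at the session level:
  1. Add the session property spark.driver.extraJavaOptions and set its value to -Duser.timezone=UTC.
  2. Add the session property spark.executor.extraJavaOptions and set its value to -Duser.timezone=UTC.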
Data corruption occurs in the target for data of double data type.
When you read data of the double data type from a Google Cloud Storage JSON file and write the data to a Google Cloud Storage flat file target, the double values in the target are corrupted.
Workaround: Change the data type of the target column from flat_string to flat_number, and increase the precision to 38 and the scale to 15.
When you run the mapping, the Secure Agent writes the double data to the target column of the decimal data type with trailing zeros and without data loss.