Property | Description |
---|---|
Task Name | Name of the task. |
Instance ID | Instance number for the task. For example, if you are looking at the third run of the task, this field displays "3." |
Task Type | Task type; in this case, a mapping task. |
Started By | Name of the user or schedule that started the job. |
Start Time | Date and time that the job was started. |
End Time | Date and time that the job completed or stopped. |
Duration | Amount of time the job ran before it completed or was stopped. |
Runtime Environment | Runtime environment in which the job ran. |
Advanced configuration | Advanced configuration that was used to create the advanced cluster. |
Cluster | Advanced cluster where the job runs. You can click the cluster name to navigate directly to the monitoring details for the cluster. |
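
The properties above appear in the Monitor service, but the same run metadata can also be pulled programmatically. The sketch below is a minimal example, assuming the Informatica Intelligent Cloud Services v2 REST API login and activityMonitor resources are available to your organization; the login URL is region-specific and the response field names can vary by API version, so treat them as assumptions.

```python
import requests

# Log in to Informatica Intelligent Cloud Services (v2 REST API).
# The login URL is region-specific; adjust it for your POD.
login = requests.post(
    "https://dm-us.informaticacloud.com/ma/api/v2/user/login",
    json={"@type": "login", "username": "user@example.com", "password": "***"},
)
login.raise_for_status()
session = login.json()
base_url = session["serverUrl"]                  # org-specific API base URL
headers = {"icSessionId": session["icSessionId"]}

# List currently running jobs. Each entry carries the task name, run ID
# (the instance number), and start time shown in the table above.
# Exact field names vary by version, so .get() is used defensively.
jobs = requests.get(
    f"{base_url}/api/v2/activity/activityMonitor?details=true",
    headers=headers,
).json()
for job in jobs:
    print(job.get("taskName"), job.get("runId"), job.get("startTime"))
```
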
Property | Description |
---|---|
Status | Job status. If the advanced cluster is not running when you run a job, the job waits for the cluster to start; during this time the job status is Starting. If the Secure Agent fails while the job is running, the job status continues to display Running; you must cancel the job and run it again. A queued job also displays the Running status; to find out whether a job is queued or running, check the session log. |
Execution Plan | Allows you to download the Spark execution plan, which shows the runtime Scala code that the advanced cluster uses to run the data logic in the mapping. You can use the Scala code to debug issues in the mapping. |
Error Message | Error message, if any, that is associated with the job. |
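
Because a queued job also reports Running, an automated check cannot distinguish the two from the status alone. The sketch below polls the activity log until the run reaches a terminal state; the activityLog resource exists in the v2 REST API, but the runId filter and the state representation (string versus numeric code) are assumptions to verify against your API version.

```python
import time
import requests

# Terminal state markers are an assumption; the activity log may report
# state as a string or a numeric code depending on the API version.
TERMINAL_STATES = {"COMPLETED", "FAILED", "STOPPED"}

def wait_for_job(base_url, headers, task_id, run_id, poll_secs=30):
    """Poll the activity log until the run reaches a terminal state.

    A run that sits in Running for a long time may actually be queued,
    or orphaned by a Secure Agent failure; check the session log.
    """
    while True:
        entries = requests.get(
            f"{base_url}/api/v2/activity/activityLog",
            params={"taskId": task_id, "runId": run_id},
            headers=headers,
        ).json()
        state = entries[0].get("state") if entries else None
        if state in TERMINAL_STATES:
            return state
        time.sleep(poll_secs)
```
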
Property | Description |
---|---|
Status | Job status. If the Secure Agent fails while one of the subtasks is running, the subtask and the tuning job both continue to display the Running status; you must stop the tuning job and configure tuning from the mapping task details again. A queued job also displays the Running status; to find out whether a job is queued or running, check the session log. |
Subtasks | Number of subtasks that are part of the tuning job. Each subtask represents a run of the mapping task. If a link is available, click the link to monitor each mapping task. |
Requested Compute Units Per Hour | Number of serverless compute units per hour that the task requested. You can view the number of requested compute units if the task runs in a serverless runtime environment. |
Total Consumed Compute Units | Total number of serverless compute units that the task consumed. You can view the number of consumed compute units if the task runs in a serverless runtime environment. |
Error Message | Error message, if any, that is associated with the job. |
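
For serverless runs, the two compute-unit properties relate through the run duration: consumption accrues at the requested hourly rate for as long as the task runs. The snippet below is a back-of-the-envelope sketch; the linear accrual model is an assumption, and actual metering granularity may differ.

```python
def consumed_compute_units(requested_per_hour: float, duration_minutes: float) -> float:
    """Estimate total consumed compute units from the hourly request rate.

    Example: a task that requests 2 compute units per hour and runs for
    90 minutes consumes roughly 2 * 1.5 = 3 compute units.
    """
    return requested_per_hour * (duration_minutes / 60.0)

print(consumed_compute_units(2, 90))  # -> 3.0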
Property | Description |
---|---|
Status | Status of the Spark task. If the Secure Agent fails while the job is running, the status of the Spark task continues to display Running; you must cancel the job and run it again. |
Start time | Date and time when the Spark task started. |
End time | Date and time when the Spark task ended. |
Duration | Amount of time that the Spark task ran. |
Memory Per Executor | Amount of memory that each Spark executor uses. |
Cores Per Executor | Number of cores that each Spark executor uses. |
Driver and Agent Job Logs | Select Download to download the Spark driver and agent job logs. |
Advanced Log Location | The log location that is configured in the advanced configuration for the advanced cluster. You can navigate to the advanced log location to view and download the agent job log, Spark driver log, and Spark executor logs. |
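
Memory Per Executor and Cores Per Executor correspond to the standard Spark executor settings, spark.executor.memory and spark.executor.cores. The PySpark sketch below is for illustration only; it is not the advanced cluster's own launch code, and the values shown are placeholders.

```python
from pyspark.sql import SparkSession

# The executor sizing shown in the monitoring details maps to these
# standard Spark properties. The values here are illustrative only.
spark = (
    SparkSession.builder
    .appName("executor-sizing-example")
    .config("spark.executor.memory", "6g")   # Memory Per Executor
    .config("spark.executor.cores", "2")     # Cores Per Executor
    .getOrCreate()
)
print(spark.sparkContext.getConf().get("spark.executor.memory"))
```
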
Property | Description |
---|---|
Job Name | Name of the Spark job or stage. |
Start time | Date and time when the Spark job or stage started. Start time might be "NA" for aborted tasks. |
End time | Date and time when the Spark job or stage ended. End time might be "NA" for aborted tasks. |
Duration | Amount of time that the Spark job or stage ran. |
Total Tasks | Number of tasks that the Spark job or stage attempted. |
Successful Tasks | Number of tasks that the Spark job or stage completed successfully. |
Failed Tasks | Number of tasks that the Spark job or stage failed to complete. |
Running Tasks | Number of tasks that the Spark job or stage is currently running. |
Input Size / Records | Size of the file and number of records input by the Spark job or stage. |
Output Size / Records | Size of the file and number of records output by the Spark job or stage. |
Status | Status of the Spark job or stage. Note: After you abort a mapping task, there might be some lag time before the Monitor service shows the status as Aborted. |
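
The per-stage counters in this table mirror what Spark exposes through its own monitoring REST API. If you can reach the Spark driver UI directly, a sketch like the following retrieves the same task and I/O counters; the endpoint and field names follow Spark's documented API, but reaching the driver inside an advanced cluster may require port-forwarding, so the URL below is an assumption.

```python
import requests

driver_ui = "http://localhost:4040"   # assumed reachable Spark driver UI

app_id = requests.get(f"{driver_ui}/api/v1/applications").json()[0]["id"]
for stage in requests.get(f"{driver_ui}/api/v1/applications/{app_id}/stages").json():
    print(
        stage["stageId"],
        stage["status"],                               # ACTIVE, COMPLETE, FAILED, ...
        stage["numCompleteTasks"],                     # Successful Tasks
        stage["numFailedTasks"],                       # Failed Tasks
        stage["inputBytes"], stage["inputRecords"],    # Input Size / Records
        stage["outputBytes"], stage["outputRecords"],  # Output Size / Records
    )
```
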