Monitoring code tasks

To view detailed information about a specific code task, click the instance name on the My Jobs, All Jobs, or Running Jobs pages.

Job properties

The job properties for each code task instance display general information about the task.
The following table describes the job properties for the code task:

| Property | Description |
| --- | --- |
| Task Name | Name of the task. |
| Instance ID | Instance number for the task. For example, if you are looking at the third run of the task, this field displays "3." |
| Task Type | Task type. In this case, the task type is code task. |
| Code Task ID | Unique identifier for the code task. |
| Started By | Name of the user who started the job. |
| Start Time | Date and time that the job started. |
| End Time | Date and time that the job completed or stopped. |
| Duration | Amount of time that the job ran before it completed or was stopped. |
| Runtime Environment | Runtime environment in which the job ran. |
| Advanced configuration | Advanced configuration that was used to create the advanced cluster. |
| Cluster | Advanced cluster where the job runs. You can click the cluster name to navigate directly to the monitoring details for the cluster. |

Job results

The job results for each code task instance display the status of the job and success and error statistics.
The following table describes the job results for the code task:

| Property | Description |
| --- | --- |
| Status | Job status. A job can have one of the following statuses: Starting (the job is starting), Running (the job is queued or running), Success (the job completed successfully), or Failed (the job did not complete because it encountered errors). |
| Session Log | Allows you to download the session log file. By default, Informatica Intelligent Cloud Services stores session logs for 10 runs before it overwrites the logs with the latest runs. Session log files are written to the following directory: `<Secure Agent installation directory>/apps/Data_Integration_Server/logs` |
| Requested Compute Units Per Hour | Number of serverless compute units per hour that the task requested. Appears only when the task runs in a serverless runtime environment. |
| Total Consumed Compute Units | Total number of serverless compute units that the task consumed. Appears only when the task runs in a serverless runtime environment. |
| Error Message | Error message, if any, that is associated with the job. |
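As the Session Log row notes, session logs land under the Secure Agent installation directory. If you prefer to inspect them on the agent machine rather than download them, the following sketch lists the most recent files in that directory. The `AGENT_HOME` environment variable and the fallback path are assumptions for illustration; session log file names vary, so the sketch simply sorts by modification time.

```scala
import java.io.File

// Lists the newest files in the Data Integration Server log directory.
object LatestSessionLogs {
  def main(args: Array[String]): Unit = {
    // AGENT_HOME and the fallback path are placeholders; substitute your
    // actual Secure Agent installation directory.
    val agentDir = sys.env.getOrElse("AGENT_HOME", "/opt/infaagent")
    val logDir = new File(s"$agentDir/apps/Data_Integration_Server/logs")

    Option(logDir.listFiles()).getOrElse(Array.empty)
      .filter(_.isFile)
      .sortBy(-_.lastModified())
      .take(10) // session logs are kept for 10 runs by default
      .foreach(f => println(f.getName))
  }
}
```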

Code task API execution parameters

The execution parameters for each code task instance display the API parameters that were used to run the task.
The following table describes the execution parameters for the code task:

| Property | Required / Optional | Description |
| --- | --- | --- |
| Override Code Task Timeout | Optional | Overrides the code task timeout value for this execution. A value of -1 indicates no timeout. |
| Log Level | Optional | Log level for the session log, agent job log, and Spark driver and executor logs. Valid values are none, terse, normal, verboseInitialization, and verboseData. The default value is normal. |
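For orientation, the sketch below shows how such parameters might be carried in the body of a REST request that starts a code task. The endpoint path, JSON field names, and session header are illustrative placeholders, not the documented API contract; consult the Informatica Intelligent Cloud Services REST API reference for the actual request format.

```scala
import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object StartCodeTaskSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical payload: field names mirror the execution parameters
    // described above but are placeholders, not the real API schema.
    val payload =
      """{
        |  "overrideTaskTimeout": -1,
        |  "logLevel": "normal"
        |}""".stripMargin

    val request = HttpRequest.newBuilder()
      .uri(URI.create("https://example.informaticacloud.com/codetask/api/v1/executions")) // placeholder URL
      .header("Content-Type", "application/json")
      .header("INFA-SESSION-ID", sys.env.getOrElse("INFA_SESSION_ID", "")) // header name may differ
      .POST(HttpRequest.BodyPublishers.ofString(payload))
      .build()

    val response = HttpClient.newHttpClient()
      .send(request, HttpResponse.BodyHandlers.ofString())
    println(s"${response.statusCode()}: ${response.body()}")
  }
}
```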
The following table describes the Spark properties for the code task:

| Property | Required / Optional | Description |
| --- | --- | --- |
| Main Class | Required | Entry point of the Spark application. For example: `org.apache.spark.examples.company.SparkExampleApp` |
| Main Class Arguments | Optional | Ordered arguments that are sent to the Spark application main class. For example: `--appType SPARK_PI_FILES_JARS --classesToLoad com.company.test.SparkTest1Class` |
| Primary Resource | Required | Scala JAR file that contains the code task. |
| JAR File Path | Optional | Directory and file name of a JAR file that is uploaded to the cluster and added to the Spark driver and executor classpaths. |
| Spark File Path | Optional | Directory and file name of a Spark file that is uploaded to the cluster and made available under the current working directory. |
| Custom Properties | Optional | Spark properties or other custom properties that Data Integration uses. |
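To make the Main Class and Main Class Arguments properties concrete, here is a minimal sketch of a Spark application entry point that could be packaged into the primary resource JAR. The object name, package, and argument flags are illustrative; the point is the structure, a `main` method that receives the ordered arguments and builds a `SparkSession`.

```scala
package com.company.test

import org.apache.spark.sql.SparkSession

// Hypothetical entry point. Its fully qualified name, here
// com.company.test.SparkExampleApp, is what you would supply
// as the Main Class property.
object SparkExampleApp {
  def main(args: Array[String]): Unit = {
    // Main Class Arguments arrive here in order, for example:
    // --appType SPARK_PI_FILES_JARS --classesToLoad com.company.test.SparkTest1Class
    val parsed = args.sliding(2, 2).collect {
      case Array(key, value) => key.stripPrefix("--") -> value
    }.toMap

    val spark = SparkSession.builder()
      .appName(parsed.getOrElse("appType", "SparkExampleApp"))
      .getOrCreate()

    try {
      // Placeholder workload: count a small range to prove the job ran.
      val count = spark.range(0, 1000).count()
      println(s"Row count: $count")
    } finally {
      spark.stop()
    }
  }
}
```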

Spark application task details

The Spark application task details for each code task display under Spark Application Task Results.
Each Spark application task includes the following details:

| Property | Description |
| --- | --- |
| Status | Status of the Spark task. The Spark task can have one of the following statuses: Running (the task is running), Success (the task completed successfully), Failed (the task did not complete because it encountered errors), Stopped (the task was stopped), or Unknown (the status of the task is unknown). If the Secure Agent fails while the job is running, the status of the Spark tasks continues to display Running. You must cancel the job and run it again. |
| Start time | Date and time when the Spark task started. |
| End time | Date and time when the Spark task ended. |
| Duration | Amount of time that the Spark task ran. |
| Memory Per Executor | Amount of memory that each Spark executor uses. |
| Cores Per Executor | Number of cores that each Spark executor uses. |
| Driver and Agent Job Logs | Select Download to download the Spark driver and agent job logs. |
| Advanced Log Location | Log location that is configured in the advanced configuration for the advanced cluster. You can navigate to the advanced log location to view and download the agent job log, the Spark driver log, and the Spark executor logs. |
| Error Message | Error message, if any, that is associated with the job. |
Each Spark application task is translated into Spark jobs, which are further broken down into stages (see the sketch after the table). You can view the following details for each Spark job and stage:

| Property | Description |
| --- | --- |
| Job Name | Name of the Spark job or stage. |
| Duration | Amount of time that the Spark job or stage ran. |
| Total Tasks | Number of tasks that the Spark job or stage attempted. |
| Failed Tasks | Number of tasks that the Spark job or stage failed to complete. |
| Input Size / Records | Size of the file and number of records that the Spark job or stage read. |
| Output Size / Records | Size of the file and number of records that the Spark job or stage wrote. |
| Status | Status of the Spark job or stage. The status can be one of the following values: Running (the job or stage is running), Success (the job or stage completed successfully), Failed (the job or stage did not complete because it encountered errors), or Aborted (the job or stage did not complete because the user aborted the code task). Note: After you abort a code task, there might be some lag time before the Monitor service shows the status as Aborted. |
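As background on where those rows come from: in Spark, each action in the application triggers a job, and Spark splits a job into stages at shuffle boundaries, with each stage running one task per partition. The following plain-Spark sketch (not Informatica-specific) produces a job with two stages, which is the kind of breakdown the table above reports.

```scala
import org.apache.spark.sql.SparkSession

// Illustrates the job/stage/task hierarchy that the Monitor service reports.
object JobsAndStagesSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("JobsAndStagesSketch").getOrCreate()
    val sc = spark.sparkContext

    // Three partitions, so each stage runs three tasks (Total Tasks).
    val words = sc.parallelize(Seq("a", "b", "a", "c", "b", "a"), numSlices = 3)
    val counts = words.map(w => (w, 1)).reduceByKey(_ + _)

    // collect() is an action, so it triggers one Spark job. The shuffle
    // that reduceByKey introduces splits that job into two stages: one
    // before the shuffle and one after it.
    println(counts.collect().mkString(", "))

    spark.stop()
  }
}
```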