A mapping run on a self-service cluster fails when the self-managed Kubernetes cluster is not reachable.
The mapping fails with the following error:
2022-06-23T04:42:10.872+00:00 <getThreadPoolTaskExecutor-502> INFO: Waiting for cluster with Cluster Instance ID : [16y6xhsvjkdeybtzdy1dkx.k8s.local] to start.
2022-06-23T04:42:13.394+00:00 <getThreadPoolTaskExecutor-502> SEVERE: WES_internal_error_An unexpected error occurred during execution.
Verify that you can access the self-managed Kubernetes cluster from the Secure Agent machine.
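One way to perform this check is a connectivity probe from the Secure Agent machine. The following is only a sketch: API_SERVER is a placeholder for your cluster's API endpoint (the server address in your kubeconfig), and the /livez health endpoint is assumed to be exposed by the API server.

```shell
#!/bin/sh
# Sketch: probe the self-managed Kubernetes API server from the Secure Agent
# machine. API_SERVER is a placeholder; replace it with your cluster endpoint.
check_cluster() {
  # -k skips TLS verification (self-managed clusters often use self-signed
  # certificates); --connect-timeout bounds the wait for an unreachable host.
  if curl -k -s -o /dev/null --connect-timeout 5 "$1/livez"; then
    echo "reachable"
  else
    echo "not reachable"
  fi
}

check_cluster "${API_SERVER:-https://127.0.0.1:6443}"
```

If the probe reports "not reachable", fix network access (security groups, firewall rules, VPN routing) between the Secure Agent machine and the cluster before rerunning the mapping.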
If you can access the self-managed Kubernetes cluster from the Secure Agent machine and the mapping still fails, wait for the cluster's idle timeout (30 minutes) and monitor the cluster state. When the cluster state changes to STOP, start the cluster, and then run the mapping.
If you do not want to wait for the cluster's idle timeout, restart the Secure Agent process and then run the mapping.
If you stop the self-service cluster while a mapping is running, the mapping fails with the following error after the cluster restarts:
<SparkTaskExecutor-pool-1-thread-11> SEVERE: Reattemptable operation failed with error: Failure executing: POST at: https://35.84.220.154:6443/api/v1/namespaces/default/pods. Message: pods "spark-infaspark0229e35d4-d9d1-4203-a2b1-d4692ace052finfaspark0-driver" is forbidden: error looking up service account default/infa-spark: serviceaccount "infa-spark" not found, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=Forbidden, status=Failure, additionalProperties={})
To resolve the error, restart the Secure Agent process and then run the mapping.
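The error indicates that the infa-spark service account is missing from the default namespace after the restart. As a hypothetical diagnostic before restarting the Secure Agent, you could confirm this from a machine with kubectl configured against the cluster; the helper below is a sketch and guards against kubectl being absent.

```shell
#!/bin/sh
# Sketch: check whether a service account exists in a namespace.
# $1 = namespace, $2 = service account name.
check_service_account() {
  if ! command -v kubectl >/dev/null 2>&1; then
    echo "kubectl not available"
    return 1
  fi
  if kubectl get serviceaccount "$2" -n "$1" >/dev/null 2>&1; then
    echo "serviceaccount $2 exists in $1"
  else
    echo "serviceaccount $2 missing in $1"
  fi
}

# The namespace and name come from the error message above.
check_service_account default infa-spark
```

If the service account is reported missing, restarting the Secure Agent process recreates the resources the mapping needs, after which you can run the mapping again.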