
Local clusters

A local cluster is a simple, single-node cluster that you can start on the Secure Agent machine. You can use a local cluster to quickly onboard projects for advanced use cases. A local cluster can only run mappings in advanced mode.
A local cluster can run on-premises or in a supported cloud environment.
You can set up a local cluster on a virtual machine with minimal permissions and resource requirements.
Note: When you install the Secure Agent on Oracle Cloud Infrastructure, you can create local clusters but you can't create any other types of advanced clusters.
The local cluster has a single node with processing capacity that depends on the local machine. The cluster can access staging and log locations on the cloud or in local storage that is attached to the cluster node. The local cluster times out after five minutes if there are no jobs running on the cluster.
Before you run mappings in advanced mode on a local cluster, make sure that the agent has enough resources to create a cluster and run jobs successfully, especially if the agent is already running other jobs. If the agent doesn't have enough resources, both the jobs that are already running on the agent and the mappings in advanced mode can fail. A minimum of 8 cores and 32 GB of memory on the agent machine is recommended.
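As a quick sanity check before creating a local cluster, you can compare the agent machine against the recommended minimum of 8 cores and 32 GB of memory. The following is a minimal sketch that assumes a Linux host, where `nproc` and `/proc/meminfo` are available:

```shell
#!/bin/sh
# Sketch: warn if the agent machine is below the suggested minimum
# of 8 cores and 32 GB of memory for running a local cluster.
# Assumes a Linux host (nproc and /proc/meminfo are Linux-specific).

MIN_CORES=8
MIN_MEM_KB=$((32 * 1024 * 1024))   # 32 GB expressed in kB

cores=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)

if [ "$cores" -lt "$MIN_CORES" ] || [ "$mem_kb" -lt "$MIN_MEM_KB" ]; then
    echo "WARNING: below suggested minimum (8 cores, 32 GB); jobs may fail."
else
    echo "OK: $cores cores, $((mem_kb / 1024 / 1024)) GB memory."
fi
```

Run the check on the agent machine before installing or starting the Secure Agent; it only reports capacity and does not account for resources already consumed by other jobs.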

Default local clusters

The agent can create a default local cluster on the agent machine so that you can begin developing and running advanced functionality on small data sets to test mapping logic.
When you run a mapping in advanced mode using an agent that's not associated with an advanced configuration, a default advanced configuration is created and associated with the agent. The agent can use the default configuration to create a default local cluster that can process the mapping.
A default advanced configuration is created automatically in certain situations, such as when you run a mapping in advanced mode using an agent that isn't associated with an advanced configuration.
You can view the advanced configuration for the default cluster on the Advanced Clusters page in Administrator. You can edit the configuration to modify the staging location, log location, mapping task timeout, and runtime properties. You can also monitor the default local cluster on the Advanced Clusters page in Monitor.
A default advanced configuration is not created if the operating system on the agent machine can't host a local cluster. In this case, you need to manually create an advanced configuration in Administrator and associate the advanced configuration with the runtime environment.
When your organization is ready to run mappings to process production-scale workloads, complete the following tasks:
  1. Set up your cloud environment to host a larger advanced cluster.
  2. Create an advanced configuration for the cluster.
  3. Edit the default advanced configuration and dissociate it from the runtime environment.
  4. Edit the new advanced configuration and associate it with the runtime environment.
A larger cluster can also resolve memory and performance issues that developers encounter during testing.

Default staging and log locations

A default local cluster stores staging and log files on the agent machine when you run jobs on the cluster.
The following table lists the default staging and log locations:

Default location: file:///$ADVANCED_MODE_STAGING
Description: Default staging location in the following directory on the agent machine:
<agent installation directory>/apps/Advanced_Mode_Staging_Dir

Default location: file:///$ADVANCED_MODE_LOG
Description: Default log location in the following directory on the agent machine:
<agent installation directory>/apps/Advanced_Mode_Log_Dir
The agent machine must have enough space in the default staging and log locations so that jobs can run successfully. You can change the staging and log locations by editing the advanced configuration for the cluster.
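To confirm that the agent machine has enough space for staging and log files, you can inspect the free space on the file systems backing the default locations. This is a minimal sketch; `AGENT_HOME` is an assumed placeholder for your agent installation directory, not a variable the agent defines:

```shell
#!/bin/sh
# Sketch: report available disk space for the default staging and
# log locations of a local cluster.
# AGENT_HOME is an assumption; set it to your agent installation directory.

AGENT_HOME="${AGENT_HOME:-/opt/infaagent}"

for dir in "$AGENT_HOME/apps/Advanced_Mode_Staging_Dir" \
           "$AGENT_HOME/apps/Advanced_Mode_Log_Dir"; do
    if [ -d "$dir" ]; then
        # Show the backing file system's size, usage, and available space.
        df -h "$dir"
    else
        echo "Directory not found (it may be created on first run): $dir"
    fi
done
```

If either file system is nearly full, free up space or edit the advanced configuration to point the staging and log locations at a larger volume.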
If Data Integration creates a subtask for the Data Integration Server to process data logic in a mapping in advanced mode, the cluster and the Data Integration Server share staging files. To read and write staging files in the staging location, the Data Integration Server uses Hadoop Files V2 Connector.