Name | Default | Description
yarn.acl.enable | false | Whether ACLs are enabled.
yarn.admin.acl | * | ACL of who can be an admin of the YARN cluster.
yarn.log-aggregation.retain-check-interval-seconds | -1 | How long to wait between aggregated log retention checks. If set to 0 or a negative value, the interval is computed as one-tenth of the aggregated log retention time. Be careful: setting this too small will spam the name node.
yarn.log-aggregation.retain-seconds | -1 | How long to keep aggregated logs before deleting them. -1 disables deletion. Be careful: setting this too small will spam the name node.
yarn.log-aggregation-enable | false | Whether to enable log aggregation. Log aggregation collects each container's logs and moves them onto a file system, e.g. HDFS, after the application completes. Users can configure the "yarn.nodemanager.remote-app-log-dir" and "yarn.nodemanager.remote-app-log-dir-suffix" properties to determine where these logs are moved to. Users can access the logs via the Application Timeline Server. See the log-aggregation example after this table.
yarn.nm.liveness-monitor.expiry-interval-ms | 600000 | How long to wait until a node manager is considered dead.
yarn.node-labels.enabled | false | Enable the node labels feature.
yarn.node-labels.fs-store.root-dir | | URI for the NodeLabelManager store. The default is /tmp/hadoop-yarn-${user}/node-labels/ on the local filesystem.
yarn.nodemanager.address | ${yarn.nodemanager.hostname}:0 | The address of the container manager in the NM.
yarn.nodemanager.aux-services | | A comma-separated list of services; each service name should contain only a-zA-Z0-9_ and cannot start with a number.
yarn.nodemanager.aux-services.mapreduce_shuffle.class | org.apache.hadoop.mapred.ShuffleHandler | The class implementing the mapreduce_shuffle auxiliary service.
yarn.nodemanager.bind-host | | The actual address the server will bind to. If this optional address is set, the RPC and webapp servers will bind to this address and the port specified in yarn.nodemanager.address and yarn.nodemanager.webapp.address, respectively. This is most useful for making the NM listen on all interfaces by setting this to 0.0.0.0.
yarn.nodemanager.hostname | 0.0.0.0 | The hostname of the NM.
yarn.nodemanager.pmem-check-enabled | true | Whether physical memory limits will be enforced for containers.
yarn.nodemanager.resource.count-logical-processors-as-cores | false | Flag to determine whether logical processors (such as hyperthreads) should be counted as cores. Only applicable on Linux when yarn.nodemanager.resource.cpu-vcores is set to -1 and yarn.nodemanager.resource.detect-hardware-capabilities is true.
yarn.nodemanager.resource.cpu-vcores | -1 | Number of vcores that can be allocated for containers. This is used by the RM scheduler when allocating resources for containers; it is not used to limit the number of CPUs used by YARN containers. If set to -1 and yarn.nodemanager.resource.detect-hardware-capabilities is true, it is determined automatically from the hardware on Windows and Linux. In other cases, the number of vcores defaults to 8.
yarn.nodemanager.resource.detect-hardware-capabilities | false | Enable auto-detection of node capabilities such as memory and CPU. See the resource auto-detection example after this table.
yarn.nodemanager.resource.memory-mb | -1 | Amount of physical memory, in MB, that can be allocated for containers. If set to -1 and yarn.nodemanager.resource.detect-hardware-capabilities is true, it is calculated automatically (on Windows and Linux). In other cases, the default is 8192 MB.
yarn.nodemanager.resource.pcores-vcores-multiplier | 1 | Multiplier that determines how physical cores are converted to vcores. This value is used if yarn.nodemanager.resource.cpu-vcores is set to -1 (which implies auto-calculated vcores) and yarn.nodemanager.resource.detect-hardware-capabilities is set to true. The number of vcores is calculated as the number of CPUs times this multiplier.
yarn.nodemanager.resource.percentage-physical-cpu-limit | 100 | Percentage of CPU that can be allocated for containers. This setting allows users to limit the amount of CPU that YARN containers use. Currently functional only on Linux using cgroups. The default is to use 100% of the CPU.
yarn.nodemanager.resource.system-reserved-memory-mb | -1 | Amount of physical memory, in MB, reserved for non-YARN processes. This configuration is only used if yarn.nodemanager.resource.detect-hardware-capabilities is set to true and yarn.nodemanager.resource.memory-mb is -1. If set to -1, this amount is calculated as 20% of (system memory - 2*HADOOP_HEAPSIZE).
yarn.nodemanager.vmem-check-enabled | true | Whether virtual memory limits will be enforced for containers.
yarn.nodemanager.vmem-pmem-ratio | 2.1 | Ratio of virtual memory to physical memory when setting memory limits for containers. Container allocations are expressed in terms of physical memory, and virtual memory usage is allowed to exceed the allocation by this ratio. See the memory-enforcement example after this table.
yarn.resourcemanager.bind-host | | The actual address the server will bind to. If this optional address is set, the RPC and webapp servers will bind to this address and the port specified in yarn.resourcemanager.address and yarn.resourcemanager.webapp.address, respectively. This is most useful for making the RM listen on all interfaces by setting this to 0.0.0.0.
yarn.resourcemanager.cluster-id | | Name of the cluster. In an HA setting, this is used to ensure the RM participates in leader election for this cluster and does not affect other clusters.
yarn.resourcemanager.ha.automatic-failover.enabled | true | Enable automatic failover. By default, it is enabled only when HA is enabled.
yarn.resourcemanager.ha.enabled | false | Enable RM high availability. When enabled: (1) the RM starts in standby mode by default and transitions to active mode when prompted to; (2) the nodes in the RM ensemble are listed in yarn.resourcemanager.ha.rm-ids; (3) the id of each RM either comes from yarn.resourcemanager.ha.id, if it is explicitly specified, or is figured out by matching yarn.resourcemanager.address.{id} with the local address; (4) the actual physical addresses come from the configs of the pattern {rpc-config}.{id}. See the HA example after this table.
yarn.resourcemanager.ha.id | | The id (string) of the current RM. When HA is enabled, this is an optional config. The id of the current RM can be set by explicitly specifying yarn.resourcemanager.ha.id, or figured out by matching yarn.resourcemanager.address.{id} with the local address. See the description of yarn.resourcemanager.ha.enabled for full details on how this is used.
yarn.resourcemanager.nodes.exclude-path | | Path to a file with nodes to exclude.
yarn.resourcemanager.nodes.include-path | | Path to a file with nodes to include.
yarn.resourcemanager.recovery.enabled | false | Enable the RM to recover state after starting. If true, yarn.resourcemanager.store.class must be specified.
yarn.scheduler.maximum-allocation-mb | 8192 | The maximum allocation for every container request at the RM, in MB. Memory requests higher than this will throw an InvalidResourceRequestException. See the maximum-allocation example after this table.
yarn.scheduler.maximum-allocation-vcores | 4 | The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this will throw an InvalidResourceRequestException.
yarn.webapp.ui2.enable | false | Whether to enable the new RM web UI (UI2).
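
Log-aggregation example. A minimal yarn-site.xml sketch of the log-aggregation settings above; the 7-day retention (604800 seconds) is an illustrative value, not a recommendation. With the check interval left at its -1 default, retention checks run at one-tenth of the retention time, i.e. every 60480 seconds here.

  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <!-- keep aggregated logs for 7 days before deletion -->
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>
  <property>
    <!-- -1: check interval = retention / 10 = 60480 seconds -->
    <name>yarn.log-aggregation.retain-check-interval-seconds</name>
    <value>-1</value>
  </property>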
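
Resource auto-detection example. A sketch, assuming a Linux node, of letting the NodeManager size itself from the hardware: with detect-hardware-capabilities set to true and memory-mb and cpu-vcores left at -1, memory and vcores are computed from the machine, and pcores-vcores-multiplier then scales detected physical cores into vcores.

  <property>
    <name>yarn.nodemanager.resource.detect-hardware-capabilities</name>
    <value>true</value>
  </property>
  <property>
    <!-- -1: derive container memory from system memory, less the
         system-reserved amount described above -->
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>-1</value>
  </property>
  <property>
    <!-- -1: derive vcores from detected CPUs * pcores-vcores-multiplier -->
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>-1</value>
  </property>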
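
Memory-enforcement example. With the defaults above, both memory checks are already on; the sketch below only makes them explicit. The arithmetic: a container allocated 2048 MB of physical memory may use up to 2048 * 2.1 = 4300.8 MB of virtual memory before the NodeManager kills it.

  <property>
    <name>yarn.nodemanager.pmem-check-enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>true</value>
  </property>
  <property>
    <!-- virtual memory cap = physical allocation * 2.1 -->
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
  </property>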
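
HA example. A minimal high-availability sketch following the pattern described under yarn.resourcemanager.ha.enabled. The cluster name, the ids rm1/rm2, and the hostnames are placeholders; the per-RM RPC addresses follow the {rpc-config}.{id} pattern from the table. yarn.resourcemanager.ha.id is omitted, so each RM figures out its own id by matching yarn.resourcemanager.address.{id} against its local address.

  <property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
  </property>
  <property>
    <!-- placeholder cluster name; keeps this ensemble's leader election
         separate from other clusters -->
    <name>yarn.resourcemanager.cluster-id</name>
    <value>example-cluster</value>
  </property>
  <property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
  </property>
  <property>
    <!-- placeholder hosts; 8032 is the usual RM RPC port -->
    <name>yarn.resourcemanager.address.rm1</name>
    <value>rm1.example.com:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address.rm2</name>
    <value>rm2.example.com:8032</value>
  </property>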
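
Maximum-allocation example. A sketch of capping single-container requests at the RM; the 4096 MB / 2 vcore caps are illustrative. Against this configuration, an ApplicationMaster asking for, say, a 6144 MB container would get an InvalidResourceRequestException rather than a container.

  <property>
    <!-- no single container may request more than 4 GB -->
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
  </property>
  <property>
    <!-- no single container may request more than 2 vcores -->
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>2</value>
  </property>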