Once the Apache Hadoop installation is complete and you are able to run HDFS commands, the next step is to configure Hadoop YARN on the cluster. This post explains how to set up the YARN master on a Hadoop cluster and run a MapReduce example. Before you proceed, make sure Apache Hadoop is installed and the Hadoop cluster is up and running. If you do not have a cluster yet, follow the link below to set one up, then come back to this page: Apache Hadoop Multi Node Cluster Setup on Ubuntu. YARN ships with the Hadoop distribution by default, so no additional installation is needed; you only need to configure Hadoop to use YARN and set a few memory/core parameters.
1. Configure yarn-site.xml
In the yarn-site.xml file, configure the default NodeManager memory and the YARN scheduler's minimum and maximum memory allocations.
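As a starting point, a minimal yarn-site.xml might look like the sketch below. The hostname `namenode` and the memory values are illustrative assumptions; replace them with your master host and sizes appropriate for your nodes' RAM.

```xml
<configuration>
  <!-- Hostname of the ResourceManager; "namenode" is assumed to be the
       master host from the cluster setup and must match your environment -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>namenode</value>
  </property>
  <!-- Total memory (MB) each NodeManager may hand out to containers;
       example value, tune to your hardware -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
  </property>
  <!-- Minimum memory (MB) per container request -->
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>
  <!-- Maximum memory (MB) per container request -->
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>4096</value>
  </property>
</configuration>
```

Copy the same yarn-site.xml to every node in the cluster so the NodeManagers and the ResourceManager agree on these settings.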
Note that the NameNode and SecondaryNameNode were started earlier with the start-dfs.sh script. The start-yarn.sh command starts the ResourceManager on the namenode host and a NodeManager on each data node. Now run the jps command on any data node and confirm that NodeManager is running.
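The steps above can be sketched as the following commands, run against an already-configured cluster. The example-jar path is illustrative and depends on your Hadoop version and install location.

```shell
# Start the YARN daemons from the master node
# (assumes $HADOOP_HOME/sbin is on your PATH)
start-yarn.sh

# On the master node: ResourceManager should now appear alongside NameNode
jps

# On any data node: NodeManager should appear alongside DataNode
jps

# Smoke-test YARN with the bundled MapReduce "pi" example;
# adjust the jar path to match your installation
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10
```

If the pi job completes and prints an estimated value of pi, YARN is scheduling containers correctly across the cluster.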