How to: Job Execution Framework MapReduce V1 & V2

In this blog post, we compare MapReduce v1 to MapReduce v2 (YARN) and describe the MapReduce job execution framework. We also take a detailed look at how jobs are executed and managed in YARN and at how YARN differs from MapReduce v1.

Note: The material from this blog post is from our free on-demand training course, Developing Hadoop Applications.

How Hadoop executes MapReduce (MapReduce v1) Jobs

To begin, a user runs a MapReduce program on the client node which instantiates a Job client object.

Next, the Job client submits the job to the JobTracker.

Then the job tracker creates a set of map and reduce tasks which get sent to the appropriate task trackers.

The task tracker launches a child process, which in turn runs the map or reduce task.

Finally, the task continuously updates the task tracker with status and counters, and writes its output to its context.
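
From the client's point of view, these steps are driven through the classic org.apache.hadoop.mapred API. Below is a minimal sketch, assuming an identity (pass-through) job and input and output paths supplied on the command line; a real driver would set its own mapper and reducer classes.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class MrV1SubmitSketch {
    public static void main(String[] args) throws Exception {
        // JobConf describes the job: mapper, reducer, input and output paths.
        JobConf conf = new JobConf(MrV1SubmitSketch.class);
        conf.setJobName("mrv1-submit-sketch");
        conf.setMapperClass(IdentityMapper.class);    // pass-through map, to keep the sketch short
        conf.setReducerClass(IdentityReducer.class);  // pass-through reduce
        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        // JobClient submits the JobConf to the JobTracker and polls the job,
        // printing progress, status, and counters until it completes.
        RunningJob job = JobClient.runJob(conf);
        System.out.println("Job successful: " + job.isSuccessful());
    }
}
```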

JobTracker Heartbeat        

When task trackers send heartbeats to the job tracker, they include additional information such as task status, task counters, and (since task trackers are usually also configured as file servers) data read/write status.

Heartbeats tell the Job Tracker which Task Trackers are alive. When the Job Tracker stops receiving heartbeats from a Task Tracker:

  • The Job Tracker reschedules the tasks from the failed Task Tracker on other Task Trackers.
  • The Job Tracker marks the Task Tracker as down and won't schedule subsequent tasks there.

Working with Schedulers

Hadoop Job Scheduling 

Two schedulers are available in Hadoop: the Fair scheduler and the Capacity scheduler.

The Fair scheduler is the default. Resources are shared evenly across pools, and each user gets their own pool by default. You can configure custom pools and guaranteed minimum access to pools to prevent starvation. This scheduler supports preemption.

Hadoop also ships with the Capacity scheduler; here resources are shared across queues. You may configure hierarchical queues to reflect organizations and their weighted access to resources. You can also configure soft and hard capacity limits for users within a queue. Queues have ACLs that prevent unauthorized users from accessing the queue. This scheduler supports resource-based scheduling and job priorities.

In MRv1, the effective scheduler is defined in $HADOOP_HOME/conf/mapred-site.xml. In MRv2, the scheduler is defined in $HADOOP_HOME/etc/hadoop/mapred-site.xml.  
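As an illustration, an MRv1 mapred-site.xml fragment that selects the Fair scheduler might look like the following; the property names are from the Hadoop 1.x fair scheduler, and the allocation-file path is just an example:

```xml
<!-- mapred-site.xml (MRv1): switch the JobTracker to the Fair scheduler -->
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.FairScheduler</value>
</property>
<property>
  <name>mapred.fairscheduler.allocation.file</name>
  <value>/opt/hadoop/conf/fair-scheduler.xml</value>  <!-- path is an example -->
</property>
```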

The goal of the Fair scheduler is to give all users equitable access to cluster resources across pools. Each pool has a configurable guaranteed capacity in slots, and within a pool, capacity is shared among the pool's jobs. Jobs are placed in flat pools, with one pool per user by default.

The Fair Scheduler algorithm works by dividing each pool's minimum map and reduce slots among its jobs. When a slot frees up, the scheduler assigns it to a job that is below its minimum share or, failing that, to the most starved job. Jobs from "over-using" users can be preempted.
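
The pools themselves, their minimum shares, and preemption timeouts are declared in the allocation file. Here is a minimal sketch, with an assumed pool name and illustrative slot counts:

```xml
<!-- fair-scheduler.xml: one pool with guaranteed minimum map/reduce slots -->
<allocations>
  <pool name="analytics">
    <minMaps>10</minMaps>          <!-- guaranteed map slots -->
    <minReduces>5</minReduces>     <!-- guaranteed reduce slots -->
    <weight>2.0</weight>           <!-- relative share of excess capacity -->
    <minSharePreemptionTimeout>60</minSharePreemptionTimeout>
  </pool>
  <userMaxJobsDefault>5</userMaxJobsDefault>
</allocations>
```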

You can examine the configuration and status of the Hadoop fair scheduler in the MapR Control System (MCS), or by using the web UI running on the job tracker.

Hadoop Fair Scheduler

The Fair Scheduler was developed at Facebook.

Hadoop Capacity Scheduler 

The goal of the capacity scheduler is to give all queues access to cluster resources. Shares are assigned to queues as percentages of total cluster resources.

  • Each queue has configurable guaranteed capacity in slots.
  • Jobs are placed in hierarchical queues. The default is 1 queue per cluster.
  • Jobs within a queue are scheduled FIFO. You can configure capacity-scheduler.xml per queue or per user.

This scheduler was developed at Yahoo.
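
To make the queue model concrete, the fragment below sketches a hypothetical capacity-scheduler.xml in the YARN style; the queue names, percentages, and ACL values are assumptions for illustration only.

```xml
<!-- capacity-scheduler.xml: hypothetical "prod" and "dev" queues under root -->
<configuration>
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>prod,dev</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.prod.capacity</name>
    <value>70</value>   <!-- guaranteed share, as a percentage of the cluster -->
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.dev.capacity</name>
    <value>30</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.dev.maximum-capacity</name>
    <value>50</value>   <!-- hard limit the queue cannot grow beyond -->
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.prod.acl_submit_applications</name>
    <value>etl,analyst</value>  <!-- users/groups allowed to submit to the queue -->
  </property>
</configuration>
```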

Moving to YARN (MapReduce v2)

The motivation for developing YARN (Yet Another Resource Negotiator, a.k.a. MapReduce v2) was to overcome several limitations of MapReduce v1 and to support workloads beyond MapReduce.

  • One of the primary issues in MapReduce v1 is that the map and reduce slot configuration is not dynamic. This inflexibility leads to underutilization of resources. There is no slot configuration in YARN, which allows resources to be used more flexibly.
  • Another limitation of MapReduce v1 is that the Hadoop framework only supports MapReduce jobs. YARN supports both MapReduce and non-MapReduce applications.
  • The job tracker serves as both a resource manager and history server in MRv1, which limits scalability. In YARN, the job tracker's role is split between a separate resource manager and history server to improve scalability. YARN uses the same API and CLI as MapReduce v1, along with a similar web UI, making it easy for users and programmers to migrate.

MapReduce Versions

YARN generalizes resource management for use by new engines and frameworks, allowing resources to be allocated and reallocated for different concurrent applications sharing a cluster. Existing MapReduce applications can run on YARN without any changes. At the same time, because MapReduce is now merely another application on YARN, MapReduce is free to evolve independently of the resource management infrastructure.

The MapReduce Job Lifecycle in YARN

MapReduce Job Lifecycle

  • The user submits an application request by passing the configuration for the Application Master to the Resource Manager.
  • The Resource Manager allocates a container for the Application Master on a node, and tells the Node Manager in charge of that node to launch the Application Master container.
  • The Application Master registers back with the Resource Manager and asks for more containers to run the tasks.
  • The Resource Manager allocates the containers on different nodes in the cluster.
  • The Application Master talks directly to the Node Managers on those nodes to launch the containers for the tasks.
  • The Application Master monitors the progress of the tasks.
  • When all the application's tasks are done, the Application Master unregisters itself from the Resource Manager.
  • The Resource Manager reclaims the previously allocated containers for the application.
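
From the client's perspective, this lifecycle is driven by the same MapReduce API as before. The sketch below is a minimal illustration, assuming an identity (pass-through) job and input/output paths passed on the command line: submit() hands the job to the Resource Manager, which launches the MapReduce Application Master, and the client then simply polls for progress.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class YarnLifecycleDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Usually set cluster-wide in mapred-site.xml; shown here for clarity.
        conf.set("mapreduce.framework.name", "yarn");

        // With no mapper or reducer set, Hadoop runs identity map and reduce tasks.
        Job job = Job.getInstance(conf, "yarn-lifecycle-demo");
        job.setJarByClass(YarnLifecycleDemo.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // submit() sends the application to the Resource Manager, which
        // allocates a container for the MapReduce Application Master.
        job.submit();

        // The client polls the Resource Manager / Application Master for progress.
        while (!job.isComplete()) {
            System.out.printf("map %.0f%%  reduce %.0f%%%n",
                    job.mapProgress() * 100, job.reduceProgress() * 100);
            Thread.sleep(5000);
        }
        System.out.println("Job succeeded: " + job.isSuccessful());
    }
}
```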

The Non-MapReduce Job Lifecycle in MRv2

  • The user submits an application request by passing the configuration for the Application Master to the Resource Manager.
  • The Resource Manager allocates a container and starts the Application Master for the job.
  • The Application Master launches the job's containers and monitors them.
  • When the job is done, the Application Master unregisters from the Resource Manager.

How a Job Is Launched In MRv2

MapReduce Job Launch

  • A client submits an application to the YARN Resource Manager, including the information required for the Container Launch Context (CLC).
  • The Applications Manager, which is part of the Resource Manager, negotiates a container and bootstraps the Application Master instance for the application.
  • The Application Master registers with the Resource Manager and requests containers.
  • The Application Master communicates with Node Managers to launch the containers it has been granted, specifying the CLC for each container.
  • Then the Application Master manages application execution.
    • During execution, the application provides progress and status information to the Application Master. The client can monitor the application’s status by querying the Resource Manager or by communicating directly with the Application Master.
  • The Application Master reports completion of the application to the Resource Manager.
  • The Application Master unregisters from the Resource Manager, which cleans up the Application Master container.
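
The client side of this launch sequence can also be driven directly through the YARN client API, which is what non-MapReduce frameworks do. The sketch below is a minimal illustration, assuming a trivial sleep command in place of a real Application Master; a production application would instead ship an Application Master that registers with the Resource Manager and requests task containers, as described above.

```java
import java.util.Collections;

import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.api.records.ApplicationSubmissionContext;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;
import org.apache.hadoop.yarn.api.records.Resource;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.client.api.YarnClientApplication;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.util.Records;

public class YarnSubmitSketch {
    public static void main(String[] args) throws Exception {
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(new YarnConfiguration());
        yarnClient.start();

        // Ask the Resource Manager for a new application id.
        YarnClientApplication app = yarnClient.createApplication();
        ApplicationSubmissionContext appContext = app.getApplicationSubmissionContext();
        appContext.setApplicationName("clc-demo");

        // The Container Launch Context (CLC) tells the Node Manager how to launch
        // the Application Master container. A sleep command stands in for a real AM.
        ContainerLaunchContext clc = Records.newRecord(ContainerLaunchContext.class);
        clc.setCommands(Collections.singletonList("sleep 30"));
        appContext.setAMContainerSpec(clc);

        // Resources requested for the Application Master container.
        Resource amResource = Records.newRecord(Resource.class);
        amResource.setMemory(256);
        amResource.setVirtualCores(1);
        appContext.setResource(amResource);

        // Submit; the Resource Manager picks a node and tells its Node Manager
        // to launch the AM container using the CLC above.
        ApplicationId appId = yarnClient.submitApplication(appContext);
        System.out.println("Submitted application " + appId);

        yarnClient.stop();
    }
}
```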

MapR Direct Shuffle Workflow in YARN

Direct shuffle YARN

  • The Application Master service initializes the application by calling initializeApplication() on the LocalVolumeAuxiliaryService.
  • The Application Master service requests task containers from the Resource Manager.
  • The Resource Manager sends the Application Master the information it needs to request containers from the Node Managers.
  • The Node Manager on each node then launches containers using information about the node's local volume from the LocalVolumeAuxiliaryService.
  • Data from map tasks is saved in the Application Master for later use in Task Completion events, which are requested by reduce tasks.
  • As map tasks complete, map outputs and map-side spills are written to the local volumes on the map task nodes, generating Task Completion events.
  • Reduce tasks fetch Task Completion events from the Application Master.
    • The Task Completion events include information on the location of map output data, enabling reduce tasks to copy data from the MapOutput locations.
  • Reduce tasks read the map output information.
  • Spills and interim merges are written to local volumes on the reduce task nodes.
  • Finally, the Application Master calls stopApplication() on the LocalVolumeAuxiliaryService to clean up data on the local volume.

High-level Differences in YARN Job Management

As mentioned earlier, there are no Job Tracker or Task Tracker web UIs in YARN job management.

  • There is no MapR metrics database in YARN.
  • YARN does not use the hadoop job command to track status and history.
  • The YARN history server serves job history on TCP port 19888.
  • The YARN history server currently (as of the 4.0.1 release) supports history for MapReduce jobs only.
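
As a quick illustration of that last point, the history server also exposes a REST API on port 19888. The sketch below assumes a host named historyserver and uses the /ws/v1/history/mapreduce/jobs endpoint to list completed MapReduce jobs.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class HistoryServerQuery {
    public static void main(String[] args) throws Exception {
        // "historyserver" is a placeholder host name; 19888 is the default
        // HTTP port of the MapReduce Job History Server.
        URL url = new URL("http://historyserver:19888/ws/v1/history/mapreduce/jobs");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Accept", "application/json");

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);  // JSON describing finished MapReduce jobs
            }
        }
    }
}
```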

Job history server        

The above image shows the Job History Server in the web UI. This screen shot shows the summary of a launched job in MapReduce v2. If you were looking at a live screen, you could drill down to view further job details.

Job history hadoop        

The screen shot above shows the summary of a launched job in MRv2. Here again, you would be able to dig into the job to get more details.

hadoop job history 3 

The partial screen shot above depicts the counters for a launched MapReduce job.

Hadoop job configuration     

The partial screen shot above depicts the configuration parameters that were effective for a particular MapReduce job.          

In the next post, we will detail how to write a simple MapReduce program.

