Managing, Monitoring, and Testing MapReduce Jobs: Managing Jobs and Tasks

In this post, we will discuss how to use the MapR Control System (MCS) to monitor MRv1 jobs. We will also see how to manage and display jobs, history, and logs using the command line interface.

In part 1 of this post, we focused on how to work with built-in and custom counters, a vital part of monitoring Hadoop job progress. If you missed part 1, you may wish to refer to it before continuing. (Note: The material in this post comes from one of our free on-demand training courses, Developing Hadoop Applications.)

Using the MCS to Monitor MRv1 Jobs

You can use the MCS to show granular job and task information in a cluster.

To display this level of information in the MCS, you must configure the metrics database. The MCS displays these metrics only for MRv1 jobs; to view metrics for MRv2 (YARN) jobs, use the YARN ResourceManager web UI (by default on port 8088).

The first time you log in to the MCS, you’ll need to specify the URL for the metrics database (database-server:3306), username, password, and name of database (metrics).

Displaying jobs in MCS

The MCS job view shows details about each job executed on the cluster, including the following:

  • Status of the job (green indicates success, red indicates failure)
  • Name of the job
  • User that submitted the job
  • Time the job started executing
  • Percentage of map tasks completed
  • Percentage of reduce tasks completed
  • Total duration between the time the job started executing and the time it finished
  • Job ID
  • Time the job was submitted (the same as, or earlier than, the start time)
  • Time the job completed

You can dig deeper into a job by clicking the job name or ID.

Getting Task details in MCS

You can dig into the details of a task by clicking the task ID or primary attempt in the job's task list. The following details of the task are displayed:

  • Status of the task
  • Task attempt ID
  • Type of task (map or reduce)
  • Progress (0-100%)
  • Start time of the task
  • Finish time of the task
  • Time the shuffle phase ended
  • Time the sort phase ended
  • Duration of the task
  • Node the task executed on
  • A link to the log file for the task

You can dig further into the task details by clicking the task attempt ID.

You can display the log file associated with a given task by clicking its log link. The details in the log file include the following:

  • Standard out generated from this task
  • Standard error generated from this task
  • Syslog log file entries generated by this task
  • Profile output generated by this task
  • Debug script output generated by this task

Note that debug scripts are optional and must be configured to run.
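
Configuring debug scripts is beyond the scope of this post, but as a rough sketch, in classic MRv1 they are wired up through the mapred.map.task.debug.script and mapred.reduce.task.debug.script properties, with the script itself shipped via the distributed cache (debug.sh, the jar, class, and paths are hypothetical placeholders):

    # Run the job with a debug script attached to failing map tasks
    hadoop jar myjob.jar com.example.MyDriver \
        -files debug.sh \
        -Dmapred.map.task.debug.script=./debug.sh \
        /user/alice/input /user/alice/output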

Using the Command Line Interface

You can use the command-line interface to manage and display jobs, job history, and logs.

Tracking launched jobs

Use the hadoop job command to list running MapReduce jobs and get their status. Note that if you have both MRv1 and MRv2 in your environment, the hadoop command points to MRv2 by default, so in that case you are looking at MRv2 jobs.
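
For example (the job ID shown is a placeholder):

    # List the MapReduce jobs currently running on the cluster
    hadoop job -list

    # Show the progress and counters of a specific job
    hadoop job -status job_201502120902_0001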

Tracking jobs with the web UI

You can also use the JobTracker and TaskTracker web UIs (by default on ports 50030 and 50060, respectively) to track the status of a launched job or to check the history of previously run jobs.

Viewing job history

To view the history of a job, run the hadoop job -history command.
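
In MRv1, the -history option takes the job's output directory rather than a job ID (the path below is a placeholder):

    # Print job details, along with failed and killed task attempts
    hadoop job -history /user/alice/wordcount-output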

Stopping a launched job

To stop a job that has already been launched, use the hadoop job -kill command rather than the operating system's kill command.
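
For example (the job ID is a placeholder):

    # Ask the framework to kill the job cleanly
    hadoop job -kill job_201502120902_0001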

Logging Information

Hadoop uses Log4j for logging by default. The definable levels are trace, debug, info, warn, error, and fatal (in increasing order of severity). You configure the logging preferences for your Hadoop jobs in the commons-logging.properties file, which you place in the CLASSPATH.

You can write to the configured log system from your map and reduce code. The messages are syslog-style messages, so you specify the level of each message by calling the corresponding Log method: trace(), debug(), info(), warn(), error(), or fatal().
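
As a minimal sketch, logging from a mapper might look like the following, using the Apache Commons Logging API that Hadoop bundles (the class name and messages are illustrative):

    import java.io.IOException;

    import org.apache.commons.logging.Log;
    import org.apache.commons.logging.LogFactory;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class LoggingMapper extends Mapper<LongWritable, Text, Text, LongWritable> {

        // One static logger per class is the usual convention
        private static final Log LOG = LogFactory.getLog(LoggingMapper.class);

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Only emitted when the task's log level is debug or lower
            LOG.debug("Processing record at offset " + key.get());

            if (value.getLength() == 0) {
                // Visible at the default info level and above
                LOG.warn("Skipping empty record at offset " + key.get());
                return;
            }
            // ... actual map logic would go here ...
        }
    }

Messages written this way end up in the task attempt's syslog file, which you can view through the log link described earlier.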

You can configure a log level in your code such that only messages at that level (and higher) are reported to the logging subsystem. Under normal operating circumstances, you should not need more detail than info from your jobs, but when you are debugging your code, you can enable debug-level output with the mapred.map.child.log.level and mapred.reduce.child.log.level properties.
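
For example, assuming your driver uses ToolRunner so that -D options are picked up (the jar, class, and paths are placeholders):

    # Run a job with debug-level logging in both map and reduce tasks
    hadoop jar myjob.jar com.example.MyDriver \
        -Dmapred.map.child.log.level=DEBUG \
        -Dmapred.reduce.child.log.level=DEBUG \
        /user/alice/input /user/alice/output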

In this post, we have seen how to use the MCS to view metrics for MRv1 jobs, and how to monitor and manage jobs and job history using the command-line interface. Remember that the hadoop command points to MRv2 by default when both MRv1 and MRv2 are in your environment.
