
Resource Scheduling

One of the biggest differences between Hadoop and HPC systems is resource management. In HPC, there is fine-grained control over which resources (cores, accelerators, memory, time, etc.) are given to users. These resources are scheduled with tools like Grid Engine, Moab, LoadLeveler, etc. Hadoop has an integrated scheduler that consists of a master Job Tracker, which communicates with Task Trackers on the nodes. All MapReduce work is supervised by the Job Tracker; no other job types are supported in Hadoop (Version 1).

One interesting difference between an HPC resource scheduler and the Hadoop scheduler is fault tolerance. HPC schedulers can detect down nodes and reschedule jobs (as an option), but if the job has not been checkpointing its state, it must start from the beginning. Hadoop, due to the nature of the MapReduce algorithm, can manage failure through the Job Tracker. Because the Job Tracker is aware of task placement and data location, a failed node (or even a rack of nodes) can be managed at run time. Thus, when an HDFS node fails, the Job Tracker can reassign a task to a node where a redundant copy of the data exists. Similarly, if a map or reduce process fails, the task can be restarted on another node.
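To make the idea concrete, the following toy sketch (plain Python, not actual Hadoop code; the node names, block names, and three-way replication map are all hypothetical) shows the rescheduling policy in miniature: every map task reads one HDFS block, each block has replicas on several nodes, and a task on a failed node is simply restarted on another node that holds a replica of its input block.

    # Toy model of data-locality-aware rescheduling (illustrative only).
    # block -> nodes holding a replica (HDFS-style three-way replication)
    replicas = {
        "block-1": ["node-a", "node-b", "node-c"],
        "block-2": ["node-b", "node-c", "node-d"],
    }

    # task -> (input block, node the task is currently running on)
    tasks = {
        "map-1": ("block-1", "node-a"),
        "map-2": ("block-2", "node-b"),
    }

    def reassign_tasks(failed_node):
        """Restart every task from the failed node on a surviving replica holder."""
        for task, (block, node) in tasks.items():
            if node == failed_node:
                survivors = [n for n in replicas[block] if n != failed_node]
                tasks[task] = (block, survivors[0])
                print(f"{task} restarted on {survivors[0]} using {block}")

    reassign_tasks("node-a")   # map-1 restarted on node-b using block-1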

The next-generation scheduler for Hadoop is called YARN (Yet Another Resource Negotiator) and offers better scalability and more fine-grained control over job scheduling. Users can request "containers" for MapReduce and other jobs (possibly MPI), which are managed by individual per-job Application Masters. With YARN, the Hadoop scheduler starts to look like other resource managers; it will, however, remain backward compatible with many higher-level Hadoop tools.

Programming

One of the big differences between Hadoop and HPC is the programming model. Most HPC applications are written in Fortran, C, or C++ with the aid of MPI libraries. There are also CUDA-based applications and those optimized for the Intel Xeon Phi. The responsibility placed on the user is quite large. Application authors must manage communication, synchronization, I/O, debugging, and possibly checkpoint/restart operations. These tasks are often not easy to get right and can take significant time to implement correctly and efficiently.
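As a small illustration of this burden, here is a hedged sketch of a parallel sum written with mpi4py (a Python binding for MPI, chosen here for brevity; production HPC codes would typically use Fortran, C, or C++). Even this trivial program makes the programmer distribute the data, send and receive messages, and combine the results by hand.

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    if rank == 0:
        data = list(range(1000))
        # Explicitly hand each worker a share of the data (round-robin slices).
        for worker in range(1, size):
            comm.send(data[worker::size], dest=worker)
        partial = sum(data[0::size])
        # Explicitly collect and combine the partial results.
        for worker in range(1, size):
            partial += comm.recv(source=worker)
        print("total =", partial)   # 499500
    else:
        my_data = comm.recv(source=0)
        comm.send(sum(my_data), dest=0)

The program would be launched with something like mpirun -np 4 python parallel_sum.py, and error handling, load balancing, and checkpointing would all be additional code the author has to write.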

Hadoop, by offering the MapReduce paradigm, only requires that the user create a map step and a reduce step (and possibly a few others, e.g., a combiner). These tasks are devoid of all the minutiae of HPC programming. The user need only concern themselves with these two tasks, which can be easily debugged and tested using small files on a single system. Hadoop also presents a single-namespace parallel file system (HDFS) to the user. Hadoop was written in Java and has a low-level interface for writing and running MapReduce applications, but it also supports an interface (called Streaming) that allows mappers and reducers to be written in any language. Above these language interfaces sit many high-level tools, such as Apache Pig, a scripting language for Hadoop MapReduce, and Apache Hive, a SQL-like interface to Hadoop MapReduce. Many users operate with these and other higher-level tools and may never write actual mappers and reducers. This situation is analogous to application users in HPC who never write actual MPI code.
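As a sketch of how small the user-supplied pieces can be, here is the classic word-count example written for the Streaming interface in Python (the script names are illustrative). The mapper reads raw text on standard input and emits key/value pairs; the reducer receives those pairs sorted by key and totals them.

    # mapper.py -- emit "word<TAB>1" for every word on standard input
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

    # reducer.py -- input arrives sorted by key, so counts for a word are adjacent
    import sys

    current_word, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word == current_word:
            count += int(value)
        else:
            if current_word is not None:
                print(f"{current_word}\t{count}")
            current_word, count = word, int(value)
    if current_word is not None:
        print(f"{current_word}\t{count}")

The same pair can be debugged on a single machine with an ordinary shell pipeline (cat input.txt | python mapper.py | sort | python reducer.py), which is exactly the small-file testing workflow described above.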

Parallel Computing Model

MapReduce can be classified as a SIMD (Single Instruction, Multiple Data) problem. Indeed, the map step is highly scalable because the same instructions are carried out over all data. Parallelism arises from breaking the data into independent parts, and there can be no forward or backward dependencies (side effects) within a map step; that is, the map step may not change any data (even its own). The reduce step is similar in that it applies the same reduction process to a different set of data (the results of the map step).

In general, the MapReduce model provides a functional programming model rather than a procedural one. Similar to a functional language, MapReduce cannot change the input data (usually a large file) as part of the mapper or reducer process. Such restrictions can at first seem inefficient; however, the lack of side effects allows for easy scalability and redundancy.
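A small, plain-Python illustration (standard library only, not Hadoop code) shows why the restriction pays off: because the mapper is a pure function and the reduction is associative, independent splits of the data can be processed in any order, in parallel, or re-run after a failure without changing the result.

    from functools import reduce

    partitions = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]   # independent splits of the input

    def mapper(x):        # pure function: no shared state, the input is never modified
        return x * x

    def combine(a, b):    # associative reduction, so partial results merge in any order
        return a + b

    partials = [reduce(combine, map(mapper, part), 0) for part in partitions]
    print(reduce(combine, partials, 0))   # 285, regardless of how the data was split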

An HPC cluster, on the other hand, can run both SIMD and MIMD (Multiple Instruction, Multiple Data) jobs. The programmer determines how to execute the parallel algorithm. As noted above, this added flexibility comes with additional responsibilities. There is nothing, however, preventing a user from creating their own MapReduce application within the framework of a typical HPC cluster, as the sketch below illustrates.
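Here is a hedged sketch of such a do-it-yourself MapReduce word count built directly on MPI (again using mpi4py for brevity; the input file name and round-robin splitting are illustrative). It works, but the fault tolerance, data locality, and rescheduling that Hadoop provides automatically would all have to be added by hand.

    from collections import Counter
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # "Input split": rank 0 reads the data and divides it among the ranks.
    if rank == 0:
        lines = open("input.txt").read().splitlines()
        splits = [lines[i::size] for i in range(size)]
    else:
        splits = None
    my_lines = comm.scatter(splits, root=0)

    # Map step: each rank counts the words in its own split.
    local_counts = Counter(word for line in my_lines for word in line.split())

    # Reduce step: gather the partial counts and merge them on rank 0.
    all_counts = comm.gather(local_counts, root=0)
    if rank == 0:
        total = sum(all_counts, Counter())
        print(total.most_common(10))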

Big Data Needs Big Solutions

There is no doubt that Hadoop is useful when analyzing very large data files. There is no shortage of "Big Data" files in HPC, and Hadoop has seen some crossover into technical computing areas. There is BioPig, which extends Apache Pig with a sequence analysis capability. There is also MR-MSPolygraph, a MapReduce implementation of a hybrid spectral library/database search method for large-scale peptide identification. In the case of MR-MSPolygraph, results demonstrated that, relative to the serial version, it reduces the time to solution from weeks to hours for processing tens of thousands of experimental spectra. There are other applications as well, including protein sequencing and linear algebra.

Provided your problem fits into the MapReduce framework, Hadoop is a powerful way to operate on staggeringly large data sets. Since both the map and reduce steps are user-defined, highly complex operations can be encapsulated in these steps. Indeed, there is no hard requirement for a reduce step if all your work can be done in the map step.

Hadoop, and the hardware on which it runs, continues to grow. It can certainly be seen as a subset of HPC, offering a single yet powerful algorithm that has been optimized for large numbers of commodity servers. There is even some crossover into technical computing that may see further growth as things like YARN begin to give existing Hadoop clusters more HPC capabilities. Many companies are finding Hadoop to be the new corporate HPC for big data.
