
Linux on HPC clusters seems like an obvious choice. It was not, however, a foregone conclusion that Linux would end up leading the supercomputing parade when Tom Sterling and Don Becker used it to build the first Beowulf cluster. Inquiring minds want to know "Why Linux on Clusters?"

The Nagging Question: Why Linux on Clusters?

Note: This article originally appeared in the November 2004 issue of Linux Magazine. Due to space concerns, some of the original article had to be cut from the magazine. The following is the original, longer version (director's cut?); while it contains extra arguments for Linux on clusters, it does lack some of Martin Streicher's fine editing.

Mention HPC clusters and one of the first things that comes to mind is Linux. One might assume the reason is that Linux is very popular and free. While this is true, HPC (High Performance Computing) clusters have grown on their own merits and not necessarily on the popularity or cost of Linux in general. Of course, there are server farms based on Linux, but I'm talking about the big-honking clusters crunching away on some fundamental problem in science. What did these users see in Linux that has made it so successful? And, more importantly, what can it tell us about how we might approach HPC in the future?

In the Beginning

The history of the Beowulf project is a fascinating read. The article Beowulf Breakthroughs by Tom Sterling in the June 2003 issue of Linux Magazine is an excellent description of the motivation and people who pioneered the effort. The article is also a great "myth buster" of sorts, as it sets the record straight on a few historical details that tend to get confused from time to time. For instance, I once overheard a discussion at a trade show where someone was quite proudly pointing out that the name "Beowulf" was chosen (in 1993) as a response to the Microsoft "Wolfpack" NT cluster project (announced in 1995). For more information, see the What is a Beowulf? sidebar.

In any case, why did the Beowulf project use Linux? The answer was rather pragmatic. They needed a UNIX-like OS for commodity hardware, and at the time (circa 1993) BSD was embroiled in a legal dispute that made it effectively inaccessible. When Linux was introduced, it provided an alternative. As Tom Sterling has stated:

To be sure, Linux wasn't the first Unix-like operating system to run on a PC, and in the beginning, it wasn't even the best. But unlike anything that had come before it, Linux was the focus and consequence of a revolutionary phenomenon made possible through the Internet: hundreds of coders from around the world, most of which had never met each other, working together, sharing expertise on a new operating system.

It should also be mentioned that at the time, Linux needed Ethernet drivers, and if it was to be used in a high performance fashion, it needed good Ethernet drivers. Working as part of the Beowulf Project, Don Becker added this missing piece to Linux, and it was a huge benefit to clustering. As new Fast Ethernet adapters became available, Don was right there to provide the drivers.

Considering the great growth and acceptance of Linux, the choice to use and augment Linux could be considered a brilliant move. In the following discussion, however, we explore not only why Linux has worked so well from the beginning, but why it also flourishes in the current HPC ecosystem.

The Plug and No-Pay/Play Proposition

In the beginning, the users of HPC systems were pretty exclusively a UNIX crowd. Dealing with multiple users, large files, remote access, etc. was not something to be found in commodity software offerings from Microsoft or Apple. UNIX was the way it was done. Of course, there were many different dialects, but it was still UNIX.

Using Linux for clusters was very attractive because it was literally plug-and-play for many users. Well, to be accurate, it was at least plug-re-compile-and-play. As an example, consider communication middleware like PVM or MPI. These packages were ported very quickly to a Linux environment because it looked very much like the other environments they already supported (e.g., Sun workstations). Many of these packages had already been built using GNU tools as well, so quite often porting amounted to nothing more than juggling include files.

Easy porting was really only half the story. Because Linux distributions were freely available, the cost to play was very low. Cobbling together a few Pentium Pro boxes with MPICH and Linux was not an expensive proposition. Indeed, it provided a very low barrier to entry for cluster computing.
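To make the "plug-re-compile-and-play" idea concrete, here is a minimal MPI program of the kind that ran on those early machines. It is only a sketch; the compile and launch commands in the comment assume a typical MPICH installation and may differ on yours.

```c
/* Minimal MPI "hello world" -- the sort of first program run on early
 * Linux/MPICH clusters.  Typically built and launched with something like:
 *     mpicc hello.c -o hello
 *     mpirun -np 4 ./hello
 * (exact commands depend on how MPICH is installed). */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);                  /* start the MPI runtime      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* which process am I?        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* how many processes in all? */

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}
```

The same source compiles unchanged on a Sun workstation or a Linux box, which is the point: the environment looked familiar, so the software followed quickly.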

The Open Thing

Of course, the openness of Linux (and Linux distributions) has ignited the growth of Linux for HPC clusters. As mentioned, an almost zero cost of entry provided easy access to test the technology. Once the system was shown to work, the advantages were at times staggering. A ten-fold improvement in price-to-performance over traditional HPC machines was not uncommon. People notice these types of things.

One of the first things HPC users realized they could do was build their own kernel. While new features are always welcome, compute nodes do not really need sound support, laptop power management, or those very interesting Ham radio extensions. A user can build a kernel with exactly what they need and no more. A small, compact kernel means more memory to crunch numbers. Indeed, users have the option of building small monolithic kernels (modules compiled in) for their specific needs. Compute nodes can be further pared down with smaller distribution sets, smaller kernels, and they can even run without hard drives. The compute node can be maximized (or minimized, depending on how you look at it) in any number of ways for the problem at hand. With clusters, "less" is often better.

Another advantage of openness is the ease with which hardware can interface with kernel internals. This capability is particularly important with clusters. A common method to optimize communications is called "kernel bypass". In this method, communication takes place outside the kernel networking stack, and memory is copied (messages passed) directly between two independent process spaces on different nodes. Of course, to set this up you still need the kernel's cooperation, and that is far easier when the kernel is open. A particularly good example of this has been the drivers provided by Myricom for their Myrinet network hardware. Many other examples also exist. The Gamma project has optimized Ethernet drivers to provide lower latency communications for clusters as well. All of these optimizations are HPC specific and only possible because the kernel is open.
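For the curious, here is a minimal, hypothetical sketch of what the user-space side of kernel bypass can look like. The device node /dev/fastnic0 and its mmap-able send region are inventions for illustration only; they are not Myricom's or the Gamma project's actual interface, which differ considerably in the details.

```c
/* Hypothetical sketch of user-space kernel bypass.  We assume an imaginary
 * character device /dev/fastnic0 whose (open source) driver implements mmap()
 * to expose a NIC send buffer directly to the process.  Once mapped, placing
 * a message costs a memcpy -- no system call and no trip through the kernel
 * networking stack on the critical path. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define NIC_REGION_SIZE 4096            /* assumed size of the mapped region */

int main(void)
{
    int fd = open("/dev/fastnic0", O_RDWR);    /* hypothetical device node */
    if (fd < 0) { perror("open"); return 1; }

    /* The one place the kernel is involved: its driver maps device memory
     * into our address space. */
    uint8_t *txbuf = mmap(NULL, NIC_REGION_SIZE, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
    if (txbuf == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    memcpy(txbuf, "hello, remote node", 19);   /* build the message in place */
    /* A real library would now ring a doorbell register to start the DMA. */

    munmap(txbuf, NIC_REGION_SIZE);
    close(fd);
    return 0;
}
```

The point is not the details, but that a driver written against an open kernel can hand this kind of direct access to user space.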

Another advantage of an open kernel is that version problems can often be fixed by the user. These types of problems come up all the time with clusters. A new driver or library needs to be compiled for the kernel you are running. If the vendor released the driver as open source, you can solve the problem with a quick re-compile. In a closed source scenario, the vendor may need to release countless binary versions of their driver/library (none of which seem to work with your kernel, by the way).
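To show what "a quick re-compile" looks like in practice, here is a toy out-of-tree module. The module itself does nothing useful and is purely illustrative; the kbuild command in the top comment assumes the headers for your running kernel are installed.

```c
/* Trivial out-of-tree kernel module.  With the source in hand, rebuilding it
 * against the kernel you are actually running is typically one command:
 *     make -C /lib/modules/$(uname -r)/build M=$PWD modules
 * (the exact kbuild invocation depends on your distribution). */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/module.h>

MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("Trivial example module");

static int __init example_init(void)
{
    printk(KERN_INFO "example: built against this kernel's headers\n");
    return 0;
}

static void __exit example_exit(void)
{
    printk(KERN_INFO "example: unloaded\n");
}

module_init(example_init);
module_exit(example_exit);
```

A closed-source vendor, on the other hand, has to guess which kernels its customers run and ship a binary for each one.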


