
What about MPI?

Let me be clear. MPI (Message Passing Interface), and PVM (Parallel Virtual Machine) for that matter, are wonderful ideas. They have allowed me and countless others to use collections of processors to achieve great things. Rest assured, message passing will not be eclipsed by a new "programming" technology any time soon. Indeed, it will in all likelihood remain at the core of most parallel applications in the future because you cannot have parallel computing without communication. As important as MPI is to the HPC world, it does represent a barrier to the domain expert. That is, programming in MPI is too much of an investment for the Joe Sixpack programmer. It requires not only code changes; testing and debugging become harder, and major rewrites may be necessary. For those cheering on the sidelines, threads and OpenMP are in the same boat. Sure, the results can be impressive, but complexity is the cost one pays for working at this level.
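To give a sense of what that investment looks like, here is a minimal sketch of explicit message passing in C with MPI. The rank assignments and the value being sent are purely illustrative, but notice that even this toy exchange needs explicit initialization, rank logic, message tags, and teardown before any real work can happen.

/* Minimal MPI sketch: rank 0 sends one integer to rank 1.
   Compile with an MPI wrapper (e.g., mpicc) and run with
   something like: mpirun -np 2 ./a.out                     */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = 0;

    MPI_Init(&argc, &argv);                     /* start the MPI runtime   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);       /* which process am I?     */

    if (rank == 0) {
        value = 42;
        /* send one int to rank 1, tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* receive one int from rank 0, tag 0 */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();                             /* shut the runtime down   */
    return 0;
}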

Even if we manage to produce an army of MPI programmers, there is another, more subtle issue that must be addressed. As written, most parallel programs cannot provide a guarantee of efficient execution on every computer. There is no assurance that when I rebuild my MPI/Pthreads/OpenMP program on a different computer it will run optimally. A full discussion of this topic is beyond the scope of this column, but let me just say that each cluster or SMP machine has a unique ratio of computation to communication. This ratio determines efficiency and should be considered when making decisions about parallelization. For some applications, like rendering, this ratio makes little difference; for others, it can make a huge difference in performance and determine the way you slice and dice your code. Unfortunately, your slicing and dicing may work well on one system, but there is no guarantee it will work well on all systems.
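As a rough illustration (my own back-of-the-envelope sketch, not a measurement of any particular machine), consider a simple model in which a fixed communication cost is paid on top of an evenly divided compute load. The numbers below are hypothetical, but they show how the computation-to-communication ratio puts a ceiling on useful speedup.

/* Toy model of the computation-to-communication ratio.
   t_comp is the serial compute time for the whole job; t_comm is a
   fixed per-step communication cost.  Parallel time on P processors
   is roughly t_comp/P + t_comm, so speedup flattens once
   communication dominates.                                          */
#include <stdio.h>

int main(void)
{
    double t_comp = 100.0;   /* hypothetical compute time (seconds)  */
    double t_comm = 2.0;     /* hypothetical communication cost      */

    for (int p = 1; p <= 64; p *= 2) {
        double t_par = t_comp / p + t_comm;
        printf("P = %2d  time = %6.2f  speedup = %5.2f\n",
               p, t_par, t_comp / t_par);
    }
    return 0;
}

Change the ratio of t_comp to t_comm and the "best" number of processors changes with it, which is exactly why a decomposition tuned for one system may disappoint on another.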

MPI has often been called the machine code for parallel computers. I would have to agree. It is portable, powerful, and unfortunately, in my opinion, too close to the wires for everyday programming. In my parallel computing utopia, MPI and other such methods are as hidden as register loads are in a bash script.

Abstract Art

Climbing above the MPI layer will not come without a cost. Just as there is a loss of possible performance when going from assembly language to C, there will be a loss of efficiency when programming without explicit messages. The term often used is a "higher abstraction level". The reason high-level languages are so popular is that they provide a high level of abstraction above the hardware. Programmers move closer to their application and farther away from the computer.

In my long-forgotten article, I made the case that in the early days of computing there was a huge debate concerning the use of a new language, called FORTRAN, instead of assembly language (machine code). Yes, in those dark early days there was no Perl or Python, and the new FORmula TRANslation language was a breakthrough idea because it abstracted away some of the machine and let non-programmers, like scientists, easily program formulas. The argument went something like this:

Assembly Code Wonk: "If I use FORTRAN instead of assembly language, I lose quite a bit of performance, so I will stick with loading my registers, thank you."

FORTRAN Wonk: "Yes, but when the new computer comes next year, I will not have to rewrite my program in a new machine code. And, besides, the new FORTRAN II compiler will optimize my code."

Assembly Code Wonk: "Only time will tell us the best solution. By the way, is that a new pencil-thin necktie you are wearing with that new white short-sleeve shirt?"

Time did tell us what happened. FORTRAN (now written as Fortran) allowed many more people to write code. It also allowed code to spread more quickly as new machines came on line. Suddenly there were, and still are by the way, vast amounts of Fortran code doing all kinds of useful things.

If we are going to open up parallel computing to the domain experts, we need to introduce a new abstraction level in which to express their problems. My wish is that once a problem is described (or declared) in a new language, compilers and run-time agents can deliver the features I described above.

Cliff Hanging

Now that I have you on the edge of your seat, I need to stop for now. Not to worry, though; in the second installment of this theme I will begin to clear the path for some real alternatives to MPI and even suggest some wild ideas. And maybe, if we start thinking and talking about these issues and ideas, we may find an answer to my question. I am optimistic; after all, Buddha also said, "What we think, we become."

Note: This article is the first of a three-part series. The second and third parts are:

This article was originally published in Linux Magazine. It has been updated and formatted for the web. If you want to read more about HPC clusters and Linux, you may wish to visit Linux Magazine.

Douglas Eadline is editor of ClusterMonkey and does not wear a pencil-thin necktie, although there is one in his closet somewhere.
