Want to write programs for a cluster? Here is your chance. MPI implementer and cluster workhorse Jeff Squyres guides you through the nuances of writing MPI programs with a set of outstanding tutorials. Unlike other MPI tutorials, Jeff addresses cluster issues and optimizations. Just dive in; you don't even need a cluster to get started.

When grass grows in parallel, do the blades know about each other? Do they coordinate? Perhaps they have some kind of collective intelligence? Do they use MPI?

In previous editions of this column, we've talked about the six basic functions of MPI, how MPI_INIT and MPI_FINALIZE actually work, and discussed in agonizing detail the differences between MPI ranks, MPI processes, and CPU processors. Armed with this knowledge, you can write large, sophisticated parallel programs. So what's next?

Collective communication is the next logical step: MPI's native ability to involve a group of MPI processes in a single communication, possibly with some intermediate computation along the way.

Behind the scenes at MPI studios

In the previous two installments, we covered the basics and fundamentals: what MPI is, some simple MPI example programs, and how to compile and run them. For this column, we will detail what happens in MPI_INIT in a simple MPI application (the "ping-pong" example program in Listing 1).

Proper process processing and placement - say that three times

In my previous column, I covered the basics and fundamentals: what MPI is, a simple MPI example program, and how to compile and run the program. In this installment, let's dive into a common terminology misconception: processes vs. processors - they're not necessarily related!

In this context, a processor typically refers to a CPU. Typical cluster configurations use uniprocessor nodes or small Symmetric Multi-Processor (SMP) nodes (e.g., 2-4 CPUs each). Hence, "processor" has a physical - and finite - meaning. Dual-core CPUs can be treated like small SMP nodes.

If you recall, I previously said that MPI is described mostly in terms of "MPI processes," where the exact definition of "MPI process" is up to the implementation (it is usually a process or a thread). An MPI application is composed of one or more MPI processes. It is up to the MPI implementation to map MPI processes onto processors.

What is MPI, really? Who should use it, and how? Learn the fundamentals from an open-source MPI implementer.

The Message Passing Interface (MPI)

The term "MPI" is often bandied about in high-performance computing discussions - sometimes with love, sometimes with disdain, and sometimes as a buzzword that no one really understands. This month's column provides both an overview of what MPI is for managers and basic technical information about MPI for developers.



This work is licensed under CC BY-NC-SA 4.0

©2005-2023 Seagrove LLC. Some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license. The Cluster Monkey Logo and Monkey Character are trademarks of Seagrove LLC.