
Postmortem Analysis

Postmortem analysis is a frequently overlooked tool. Those annoying core files that most people either ignore or delete can actually be loaded into a debugger to view a snapshot of the process just before it crashed. Even when race conditions disappear under a debugger, core files from failed runs can still be examined; depending on the nature of the error and the operating system's settings, it is not uncommon to get a core file for each failed process in a parallel job. Examining all of the core files can provide insight into the cause(s) of a bug.
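As a concrete illustration (not from the original column), the short C sketch below shows one way a process can raise its own core file size limit at startup so that a crash actually leaves a core file behind. The function name enable_core_dumps() is hypothetical; getrlimit()/setrlimit() with RLIMIT_CORE are standard POSIX calls. Calling something like this early in each MPI process (or setting the limit in your shell or queue script) is what makes postmortem analysis possible in the first place.

#include <stdio.h>
#include <sys/resource.h>

/* Hypothetical helper: raise this process' core file size limit to the
   hard maximum so a crash can leave a core file for later inspection. */
void enable_core_dumps(void) {
    struct rlimit limit;

    if (getrlimit(RLIMIT_CORE, &limit) == 0) {
        limit.rlim_cur = limit.rlim_max;   /* raise soft limit to hard limit */
        if (setrlimit(RLIMIT_CORE, &limit) != 0) {
            perror("setrlimit(RLIMIT_CORE)");
        }
    }
}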

Intercepting Signals

If all else fails, it may be desirable to install a signal handler to catch segmentation faults (or whatever signal is killing your application) and print out the node's name and the process's PID. Be careful, however: very little can be safely executed in signal context. Listing 3 shows an example of setting up a printable string ahead of time; the signal handler itself only invokes write() to output the string and then goes into an infinite loop to wait for a debugger to attach. This method potentially avoids the overhead and race-condition timing changes caused by the active checking done by debuggers, increasing the chance of duplicating the bug and therefore of being able to catch it in a debugger.

Listing 3: Sample segmentation fault catcher
#include <stdio.h>
#include <string.h>
#include <signal.h>
#include <unistd.h>
#include <mpi.h>

static void handler(int);
static char str[MPI_MAX_PROCESSOR_NAME + 128];
static int len;

/* Setup a string to output */
void setup_catcher(void) {
    char hostname[MPI_MAX_PROCESSOR_NAME];

    MPI_Get_processor_name(hostname, &len);
    sprintf(str, "Seg fault: pid %d, host %s\n",
            getpid(), hostname);
    len = strlen(str);
    signal(SIGSEGV, handler);
}

/* write() the string to stderr, then block forever waiting for a
   debugger to attach */
static void handler(int sig) {
    write(2, str, len);
    while (1 == 1);
}
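
How this gets wired into a program is not shown in the listing; the fragment below is a hypothetical usage sketch, under the assumption that setup_catcher() is called once, immediately after MPI_Init(). If a rank later segfaults, it prints its host name and PID and spins, so you can log into that node and attach a debugger to the printed PID (for example, with gdb).

#include <mpi.h>

void setup_catcher(void);    /* from the listing above */

int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    setup_catcher();         /* install the SIGSEGV handler early */

    /* ... application work that may crash ... */

    MPI_Finalize();
    return 0;
}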

Where to Go From Here?

Debugging in parallel is hard... but not impossible. Although it shares many of the characteristics of serial debugging, and although many of the same tools can be used (in creative ways), parallel debugging must be approached with a whole-system mindset. Remember that the bug(s) may span multiple processes: it is frequently not enough to examine a single process in a parallel job. And always, always, always use the right tool. printf is rarely the right tool. Finally, don't forget that there are three commercial suites that specialize in parallel debugging: the Distributed Debugging Tool (DDT) from Allinea, Fx2 from Absoft, and TotalView from Etnus (see the Resources sidebar).

Next column, we'll discuss some of the dynamic process models of MPI-2: spawning new MPI processes.

Got any MPI questions you want answered? Wondering why one MPI does this and another does that? Send them to the MPI Monkey.

Resources
Allinea DDT http://www.allinea.com/
Absoft Fx2 http://www.absoft.com/Products/Debuggers/fx2/fx2_debugger.html
LAM/MPI FAQ (more information on debugging in parallel) http://www.lam-mpi.org/faq/
MPI Forum (MPI-1 and MPI-2 specifications documents) http://www.mpi-forum.org/
MPI: The Complete Reference, Volume 1: The MPI Core (2nd ed). The MIT Press. By Marc Snir, Steve Otto, Steven Huss-Lederman, David Walker, and Jack Dongarra. ISBN 0-262-69215-5.
MPI: The Complete Reference, Volume 2: The MPI Extensions. The MIT Press. By William Gropp, Steven Huss-Lederman, Andrew Lumsdaine, Ewing Lusk, Bill Nitzberg, William Saphir, and Marc Snir. ISBN 0-262-57123-4.
NCSA MPI tutorial http://webct.ncsa.uiuc.edu:8900/public/MPI/
The Tao of Programming By Geoffrey James. ISBN 0931137071
Etnus TotalView http://www.etnus.com/TotalView/
Valgrind Project http://www.valgrind.org/

This article was originally published in ClusterWorld Magazine. It has been updated and formatted for the web. If you want to read more about HPC clusters and Linux, you may wish to visit Linux Magazine.

Jeff Squyres leads Cisco's Open MPI efforts as part of the Server Virtualization Business Unit.
