[Beowulf] multi-threading vs. MPI
Robert G. Brown
rgb at phy.duke.edu
Wed Dec 12 17:28:37 EST 2007
On Wed, 12 Dec 2007, Gerry Creager wrote:
> Debates and differences aside, often-times, this forum *is* an authoritative
> source of information.
Not to mention the fun when people get all hot under the collar...;-)
Heat DOES make light, after all.
Seriously, I think that it has been a very productive thread. I've
certainly learned a lot. One very interesting part of which is that it
sounds like we're coming around the corner in a very, very long cycle to
where multiprocessor machines (e.g. quads and beyond) with large numbers
of processors (and cores per processor) in a single box with a single
large memory are going to once again be in vogue, be they CC-NUMA or
flat memory model boxes, and that this is likely to once again
significantly change the topology of the parallel computing landscape.
Or maybe not. MPI was originally created for big iron machines like
this way back when only PVM or raw sockets were providing beowulfish
clustering of COTS boxes (more or less -- some of the COTS systems were
themselves supercomputers in the early days) on OTC networks. Yes, it
should continue to be a productive paradigm as the wheel comes around
again, one that actually HELPS coders understand the limitations of
CPU/IPC bottlenecks, whether they are the result of shared memory
contention of one sort or another or of a real external network. It
isn't just about networking, even though on this list it has mostly
been about networking for some time.
I'm certainly interested in keeping an "open"(MP:-) mind, though, as the
hardware folks aren't exactly done turning the wheel, and it seems at
least possible that they'll be able to create hardware and associated
compiler and/or library support that permits the equally old shared
memory programming models to come around again as efficient paradigms.
Many of the objections raised (e.g. processor affinity) SEEM
like they are in principle controllable by e.g. kernel and hardware
working in tandem, once a clear picture of what is required for
efficient operation emerges. In that case the winner may be (if I
understand the arguments thus far) determined by ease of programming, or
the fact that with ENOUGH low-level support MPI represents at best an
additional layer of call structures that can only slow code down, not
speed it up. Possibly only trivially slow it down, in which case the
MPI folks can invoke ease of coding, in the form of code portability,
the other way.
It is important to remember in both cases that not all parallel code
needs bleeding edge scaling -- all it needs is to scale "well enough"
across the available processors and be easy for the coder (whoever they
happen to be) to program. Or is anyone asserting that embarrassingly
parallel programs, or very coarse grained, master-slave type parallel
programs, are going to perform vastly better with one paradigm than with
the other? Surely there is a cut-off of sorts in IPC density below
which it really doesn't matter which one you use from a performance
point of view, just as there may or may not be tasks for which one or
the other is especially well suited beyond that threshold...
That's the debatable point as I understand it, but is it being asserted that
it is NEVER going to be sensible to use OpenMP in favor of MPI or just
that it is most LIKELY going to be smarter to use one or the other? Or
even weaker, that there are now known to be certain specific tasks for
which one is better than the other, and a vast unknown elsewhere...?
> Toon Knapen wrote:
>> On 12/12/07, *Joe Landman* <landman at scalableinformatics.com> wrote:
>> This is why I note that talking about MPI vs OpenMP and other
>> pseudo-debates generates mostly heat and very little light. Reminds me
>> of editor battles, shell scripting battles ...
>> I agree that discussions like these easily degenerate.
>> That is actually one of the reasons why I'm looking for authoritative
>> documents discussing the difference between both approaches. Such documents
>> could come in handy when discussing the strategy to use concerning
>> parallelisation of a project to bring the discussion forward in an
>> objective way.
Robert G. Brown
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Lulu Bookstore: http://stores.lulu.com/store.php?fAcctID=877977
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf