[Beowulf] Q: IB message rate & large core counts (per node) ?

Greg Lindahl lindahl at pbm.com
Sat Mar 6 18:36:07 EST 2010

On Fri, Feb 26, 2010 at 01:20:49PM -0500, Lawrence Stewart wrote:

> Personally, I believe our thinking about interconnects has been
> poisoned by thinking that NICs are I/O devices.  We would be better
> off if they were coprocessors.  Threads should be able to send
> messages by writing to registers, and arriving packets should
> activate a hyperthread that has full core capabilities for acting on
> them, and with the ability to interact coherently with the memory
> hierarchy from the same end as other processors.

I'm up for dedicating 1+ normal processor cores to doing the special
stuff.  Nodes have a lot of cores these days, and purely 2-sided programs
wouldn't have to dedicate a core & thus would pay nothing. In the MPI
1-sided model, you'd probably want to run a separate program on each of
the other cores and have the dedicated core get access to the appropriate
process's address space.

-- greg

Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
