[Beowulf] NUMA info request

Mark Hahn hahn at mcmaster.ca
Mon Mar 24 16:24:15 EDT 2008


>   NUMA is an acronym meaning Non Uniform Memory Access. This is a hardware 
> constraint and is not a "performance" switch you turn on. Under the Linux

I don't agree.  NUMA is indeed a description of hardware.  I'm not sure 
what you meant by "constraint" - NUMA is not some kind of shortcoming.

> kernel there is an option that is meant to tell the kernel to be conscious 
> about that hardware fact and attempt to help it optimize the way it maps the 
> memory allocation to a task Vs the processor the given task will be using 
> (processor affinity, check out taskset (in recent util-linux implementations, 
> ie: 2.13+).

the kernel has had various forms of NUMA and socket affinity for a long time,
and I suspect most any distro will install a kernel which has the appropriate 
support (surely any x86_64 kernel would have NUMA support).

I usually use numactl rather than taskset.  I'm not sure of the history of 
those tools.  as far as I can tell, taskset only addresses numactl --cpubind,
though they obviously approach things differently.  if you're going to use 
taskset, you'll want to set cpu affinity to multiple cpus (those local to 
a socket, or 'node' in numactl terms.)
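to make the taskset-vs-numactl comparison concrete, here is a sketch of 
roughly equivalent invocations (./app is a hypothetical binary; the 
assumption is a two-socket box where CPUs 0-3 sit on node 0 -- check your 
own topology before copying the numbers):

```shell
# pin a hypothetical ./app to CPUs 0-3, i.e. the cores local to
# one socket on an assumed two-socket machine:
taskset -c 0-3 ./app          # taskset takes a CPU list...
taskset 0x0f ./app            # ...or a hex bitmask (0x0f = CPUs 0-3)

# numactl thinks in nodes rather than individual CPUs:
numactl --cpubind=0 ./app     # bind to all CPUs of node 0
```

the point is that taskset enumerates CPUs, so you must list every CPU of 
the socket yourself, while numactl's --cpubind selects the whole node in 
one argument.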

>   In your specific case, you would have 4Gigs per CPU and would want to make 
> sure each task (assuming one per CPU) stays on the same CPU all the time and 
> would want to make sure each task fits within the "local" 4Gig.

"numactl --localalloc".
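a sketch of how that might look for the 4GB-per-node case described above 
(./app is again hypothetical, and --membind is mentioned only as the 
stricter alternative, not as what the original poster asked for):

```shell
# run on node 0's CPUs and satisfy allocations from node 0's
# local memory (falls back to the other node if local runs out):
numactl --cpubind=0 --localalloc ./app

# stricter variant: fail the allocation rather than spill to the
# remote node once the local 4GB is exhausted:
numactl --cpubind=0 --membind=0 ./app
```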

but you should first verify that your machines actually do have the 8GB
split across both nodes.  it's not that uncommon to see an inexperienced 
assembler fill up one node before going on to the next, and there have even
been some boards which provided no memory slots for the second node.
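checking that split is one command; the sample output below is only 
illustrative (sizes and node count will vary per machine):

```shell
# show the per-node memory layout; on a properly populated
# two-node 8GB box each node should report roughly 4GB:
numactl --hardware
# illustrative output (your numbers will differ):
#   available: 2 nodes (0-1)
#   node 0 size: 4095 MB
#   node 1 size: 4096 MB
```

if one node reports all 8GB and the other 0, the DIMMs were populated on 
one socket only, and --localalloc will not help you.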
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf

