Beowulf Status Monitor on Scyld

Sean Dilda agrajag at scyld.com
Mon Nov 12 00:06:15 EST 2001


On Sun, 11 Nov 2001, german kogan wrote:

> > It means that 181M out of 251M are used, and that's approximately 72% of
> > the RAM.  When looking at this number, it's important to remember that
> > the 181M is the RAM being used by processes on the system, as well as
> > any memory the kernel is using for buffers and cache (such as it uses
> > with filesystems to speed up repeated accesses).
> >
> 
> Thanks.
> But it seems that too much RAM is being used up. All I have done is boot
> the slave nodes, and I have not run anything on them. Or is this normal?

See what I wrote before.  That number includes memory the kernel might
be using for buffers and cache.  You might also want to try running 'bpsh
<node> free' to see a breakdown of how the memory on the slave node is
being used.
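To see why the headline "used" figure looks alarming, here is a sketch of the arithmetic behind the "-/+ buffers/cache" line that 'free' prints. The numbers below are illustrative, not real output from your nodes:

```shell
# Illustrative values in kB (not real node output): the 'used' column
# from 'free' includes kernel buffers and page cache, so the memory
# actually held by processes is much lower.
total=257024
used=185344      # what the status monitor reports (~72% of total)
buffers=61440    # kernel buffer memory, reclaimable
cached=92160     # filesystem page cache, reclaimable

app_used=$((used - buffers - cached))
pct=$((used * 100 / total))

echo "reported used: ${pct}% of RAM"
echo "actually held by processes: ${app_used} kB"
```

With these sample numbers the reported usage is 72%, but processes only hold about 31 MB; the rest is cache the kernel will give back under memory pressure.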


> 
> Also, another question is about MPI. I have run a simple test code on
> the cluster, and some processes seem to run on the master node. What do I
> have to do to prevent this from happening, so that the processes run only
> on the slave nodes?

I'm assuming you're using -8.  When running your MPI job, set NO_LOCAL=1
just like you set NP.
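For concreteness, a minimal launch sketch under the environment-variable convention described above. The program name is hypothetical, and the exact launch command depends on your Scyld/MPICH setup:

```shell
# Hypothetical job launch: ./my_mpi_program is a placeholder for your binary.
export NP=4         # number of MPI processes, as you already set
export NO_LOCAL=1   # keep all ranks off the master node
./my_mpi_program
```

With NO_LOCAL=1 set, all NP processes should be scheduled on slave nodes rather than the master.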


More information about the Beowulf mailing list