[Beowulf] G5 cluster for testing

Suvendra Nath Dutta sdutta at deas.harvard.edu
Thu Feb 26 07:43:51 EST 2004


I fought for a while to get an OS X cluster up, precisely to test G5
performance. I had lots of problems setting up NFS and getting MPICH to
use shared memory on the dual processors. I was able to take advantage
of the FireWire networking built into OS X. We were taking the harder
route of staying away from non-open-source tools for NFS (NFSManager)
or MPI (Pooch).
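
For anyone attempting the same build (MPICH configured with
--with-device=ch_shmem, as Bill mentions below), the sanity check
involved is nothing fancier than a two-process hello along these lines
(a sketch, not our exact code):

    #include <stdio.h>
    #include <mpi.h>

    /* Minimal two-process check: both ranks reporting in confirms
     * that the shared-memory device launched a process per CPU. */
    int main(int argc, char **argv)
    {
        int rank, size, namelen;
        char name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(name, &namelen);
        printf("rank %d of %d on %s\n", rank, size, name);
        MPI_Finalize();
        return 0;
    }

Compile with mpicc and run with mpirun -np 2; on a dual G5 you want to
see both ranks come back before bothering with anything bigger.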

As was pointed out in another message, we are mostly keen on testing
the performance of the three applications we will actually run on our
cluster, rather than HPL numbers. We finally gave up the struggle and
are now working with Apple to benchmark on an existing setup instead of
trying to set everything up ourselves. Unfortunately there isn't a
howto on doing this yet.

I'll post numbers when we get it.

Suvendra.


On Tue, 24 Feb 2004, Bill Broadley wrote:

> On Tue, Feb 24, 2004 at 04:17:31PM -0700, Orion Poplawski wrote:
> > Anyone (vendors?) out there have a G5 cluster available for some
>
> For the most part I'm finding that cluster performance is mostly
> predictable from single-node performance and the scaling of the
> interconnect.  At least as an approximation, I'm going to use that to
> find a good starting point for my next couple of cluster designs.
>
> I'm currently benchmarking:
> 	Dual G5
> 	Opteron duals (1.4, 1.8, and 2.2)
> 	Opteron quad 1.4
> 	Itanium dual 1.4 GHz
> 	Dual P4-3.0 GHz+HT
> 	Single P4-3.0 GHz+HT
>
> Alas, my single node performance testing on the G5 has been foiled by
> my inability to get MPICH, OSX, and ./configure --with-device=ch_shmem
> working.
>
> Anyone else have MPICH and shared memory working on OSX?  Or maybe a
> dual G5 Linux account for an evening of benchmarking?
>
> Normally using ch_p4 and localhost wouldn't be too big a deal, but
> ping latency to localhost on OSX is something like 40 times worse
> than on Linux, and MPICH with ch_p4 on OSX is around 20 times worse
> than Linux with shared memory.
>
> > testing?  I've been charged with putting together a small cluster and
> > have been asked to look into G5 systems as well (I guess 64 bit powerPC
> > really....)
>
> Assuming all the applications and tools work under all the
> environments you're considering, I'd figure out what interconnect you
> want first.
>
> --
> Bill Broadley
> Computational Science and Engineering
> UC Davis
>
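
P.S. The gap Bill describes (ch_p4 over localhost vs. shared memory)
is straightforward to measure with a two-process ping-pong; a sketch
along these lines (illustrative, not the exact benchmark he ran)
reports the usual one-way latency figure:

    #include <stdio.h>
    #include <mpi.h>

    /* Zero-byte ping-pong between ranks 0 and 1; half the average
     * round-trip time is the conventional one-way latency. */
    #define REPS 10000

    int main(int argc, char **argv)
    {
        int rank, i;
        char buf = 0;
        double t0, t1;
        MPI_Status st;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        t0 = MPI_Wtime();
        for (i = 0; i < REPS; i++) {
            if (rank == 0) {
                MPI_Send(&buf, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(&buf, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD, &st);
            } else if (rank == 1) {
                MPI_Recv(&buf, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD, &st);
                MPI_Send(&buf, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("one-way latency: %.2f usec\n",
                   (t1 - t0) / (2.0 * REPS) * 1e6);

        MPI_Finalize();
        return 0;
    }

Running it with mpirun -np 2 under a ch_p4 build and again under a
ch_shmem build should show the difference directly.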

_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf


