[Beowulf] G5 cluster for testing
bill at cse.ucdavis.edu
Wed Feb 25 02:10:39 EST 2004
On Tue, Feb 24, 2004 at 04:17:31PM -0700, Orion Poplawski wrote:
> Anyone (vendors?) out there have a G5 cluster available for some
For the most part I'm finding that cluster performance is mostly
predictable from single-node performance and the scaling of the
interconnect. At least as an approximation, I'm going to use that to
find a good place to start for my next couple of cluster designs.
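As a sketch of that approximation (all three numbers below are made up for
illustration, not measurements from any of the machines mentioned):

```shell
# Back-of-the-envelope estimate:
#   predicted aggregate = nodes * single-node score * interconnect efficiency
# The node count, per-node GFLOPS, and efficiency factor here are
# hypothetical placeholders.
nodes=32
node_gflops=5.6      # e.g. a dual node's HPL score (made-up figure)
efficiency=0.85      # interconnect-dependent scaling factor (assumption)
awk -v n="$nodes" -v g="$node_gflops" -v e="$efficiency" \
    'BEGIN { printf "predicted: %.1f GFLOPS\n", n * g * e }'
```

The interesting part of a real design exercise is, of course, where that
efficiency factor comes from for each interconnect.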
I'm currently benchmarking:
Opteron duals (1.4, 1.8, and 2.2)
Opteron quad 1.4
Itanium dual 1.4 GHz
Dual P4-3.0 GHz+HT
Single P4-3.0 GHz+HT
Alas, my single-node performance testing on the G5 has been foiled by
my inability to get MPICH, OSX, and ./configure --with-device=ch_shmem
to work together. Anyone else have MPICH and shared memory working on
OSX? Or maybe a dual G5 linux account for an evening of benchmarking?
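For reference, this is roughly the MPICH-1 build I was attempting; the
tarball version and install prefix are illustrative, only the
--with-device=ch_shmem flag is the point:

```shell
# Build MPICH-1 with the shared-memory device (ch_shmem) instead of the
# default socket-based ch_p4 device. Version and prefix are examples.
tar xzf mpich-1.2.5.tar.gz
cd mpich-1.2.5
./configure --with-device=ch_shmem --prefix=/usr/local/mpich-shmem
make
make install
```

On linux this goes through cleanly; on OSX it's the configure/build step
that falls over for me.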
Normally using ch_p4 and localhost wouldn't be too big a deal, but
ping localhost on OSX is something like 40 times slower than on linux,
and mpich with ch_p4 on OSX is around 20 times worse than linux with
shared memory.
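The loopback comparison is easy to reproduce; the ratios above are what I
saw on my boxes, and your numbers will differ:

```shell
# Crude loopback latency check: the avg field of the rtt summary line is
# the interesting number. Run the same command on each OS and compare.
ping -c 10 localhost | tail -1
```

Since ch_p4 pushes every message through the loopback TCP stack, that
latency gap feeds straight into MPI latency, which is why ch_shmem matters
for single-node numbers.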
> testing? I've been charged with putting together a small cluster and
> have been asked to look into G5 systems as well (I guess 64 bit powerPC
Assuming all the applications and tools work under all the environments
you're considering, I'd figure out what interconnect you want first.
Computational Science and Engineering
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf