[Beowulf] The Walmart Compute Node?
john.leidel at gmail.com
Thu Nov 8 16:04:46 EST 2007
I have to second Douglas' motion on onboard Ethernet devices. The
latest nvidia and broadcom chipsets do not yet have solid driver
support in many open source distros. If one can afford it, an Intel
PCIe GigE card is certainly worth it.
On Thu, 2007-11-08 at 15:36 -0500, Douglas Eadline wrote:
> Having some experience with low cost hardware, If you are
> doing number crunching multi-core seems to provide the
> best bang for buck. The following is the HPL performance that
> you can get for $2500. The Kronos and Microwulf clusters
> are detailed on http://clustermonkey.net, Norbert is the subject
> of a November Linux Magazine article.
> Cluster/Processor (cores)        Clock (MHz)  Release  HPL Performance
> Kronos/Sempron 2500+ (8)            1750      7/2004   14.90 GFLOPS (Atlas)
> Microwulf/Athlon64 X2 3800+ (4)     2000      8/2005   26.25 GFLOPS (Goto)
> Norbert/Core Duo E6550 (4)          2333      7/2007   45.55 GFLOPS (Goto)
> If you draw a line (3 points, I know) you get to 80 GFLOPS
> by 2010. Actually, with some tweaking I got Norbert
> up to 47.7 HPL GFLOPS. And notice I qualify the performance
> as "HPL GFLOPS", as YMMV.
> With really low cost systems one important aspect is the
> interconnect. The PCIe buses on low end motherboards allow
> one to use inexpensive PCIe (Intel) Ethernet cards instead of
> 32-bit PCI cards. Some of the on-board GigE implementations are
> not very good.
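Douglas's straight-line extrapolation can be checked with a quick least-squares fit over the three (date, GFLOPS) points from his table; a strict fit lands somewhat under his eyeballed 80 GFLOPS by 2010, but in the same ballpark. This is only a sketch of the fit, not anything from the original post.

```python
# Least-squares line through the three HPL results (dates as fractional
# years), extrapolated to mid-2010. Pure-stdlib, no numpy needed.
years = [2004 + 7/12, 2005 + 8/12, 2007 + 7/12]   # Kronos, Microwulf, Norbert
gflops = [14.90, 26.25, 45.55]

n = len(years)
mx = sum(years) / n
my = sum(gflops) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(years, gflops))
         / sum((x - mx) ** 2 for x in years))
intercept = my - slope * mx

predicted_2010 = slope * 2010.5 + intercept
print(f"trend: ~{slope:.1f} GFLOPS/year; mid-2010 estimate: "
      f"{predicted_2010:.0f} GFLOPS")
```

The fit gives roughly 10 GFLOPS/year of improvement, so the 80 GFLOPS figure arrives around late 2010 on this trend.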
> > As you probably noticed, Walmart recently began selling a $200 Linux PC.
> > (Apparently the OS is just Ubuntu 7.10 with a small window manager
> > instead of Gnome or KDE.) Now Slashdot points to
> > http://www.linuxdevices.com/news/NS5305482907.html, the MB being sold
> > separately for $60 ("development board"). It has a 1.5 GHz CPU,
> > unpopulated memory (slots for 2 GB), and one 10/100 connection. Does this
> > look to y'all like fair FLOPS/$ for a kitchen project? I'm thinking 6
> > of them as compute nodes per 8-port router, with a bigger head node
> > for fileserving. (Actually I'll use a spare room, but you know what I
> > mean.) An arrangement like this might give faster RAM access per core,
> > compared to multicore, since each core has no competition for its own
> > memory, right?
> > Thanks,
> > Peter
> > _______________________________________________
> > Beowulf mailing list, Beowulf at beowulf.org
> > To change your subscription (digest mode or unsubscribe) visit
> > http://www.beowulf.org/mailman/listinfo/beowulf