[Beowulf] The Walmart Compute Node?

Jeffrey B. Layton laytonjb at charter.net
Fri Nov 9 20:33:53 EST 2007


I was going to hold this back for after Thanksgiving, but I've been working
on a small project all year to track the costs of AMD and Intel systems.
I chose three AMD CPU models (4200+, 5000+, and 6000+) and three Intel
models (Core 2: 2.33 GHz, 2.66 GHz, and a 2.4 GHz quad-core). I then
chose a couple of memory configurations to see the effects of memory
fluctuations. The system configurations have everything - cases, DVD,
GigE switches (I used 2 GigE switches - one for computation and one
for storage), hard drives, cables, motherboards, etc. Only the head node
has hard drives and the compute nodes are all diskless.

The spreadsheet computes the total cost (including shipping) from the
component costs for systems from 1-8 nodes and a 16 node system
(I switched from 8-port GigE switches to 16-port GigE switches for
this last configuration).
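The totaling step the spreadsheet performs can be sketched as follows. All component prices here are invented placeholders, not the tracked data; the switch logic mirrors the 8-port/16-port split described above.

```python
# Total cluster cost from per-node and shared components, roughly as
# the spreadsheet does for 1-8 node and 16 node systems.
# NOTE: every price below is a made-up placeholder for illustration.
PER_NODE = {"motherboard": 80, "cpu": 190, "ram": 90, "case+psu": 60}
SHARED   = {"head-node disks": 150, "cables": 40, "dvd": 25}

def cluster_cost(nodes, shipping_per_node=15):
    # Two GigE switches (compute + storage): 8-port units up to
    # 8 nodes, 16-port units for the 16-node configuration.
    switches = 2 * (45 if nodes <= 8 else 120)
    return (nodes * (sum(PER_NODE.values()) + shipping_per_node)
            + sum(SHARED.values()) + switches)

print(cluster_cost(4))   # small cluster, 8-port switches
print(cluster_cost(16))  # 16 nodes, 16-port switches
```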

I've been tracking the costs for these systems since about Feb. I've also
added the theoretical system performance (GFLOPS) so I can compute
$/GFLOPS as well as $/node, and $/core. I wish I had power estimates
though.
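The metrics above reduce to a few lines of arithmetic: theoretical peak is cores × clock × FLOPs/clock, and the dollar ratios follow from the total cost. A minimal sketch, using a made-up $2,500 total and a hypothetical 4-node cluster of 2.4 GHz quad-cores at 4 FLOPs/clock:

```python
def peak_gflops(cores, clock_ghz, flops_per_clock):
    """Theoretical peak for one CPU: cores * clock * FLOPs/clock."""
    return cores * clock_ghz * flops_per_clock

def cost_metrics(total_cost, nodes, cores_per_node, gflops_per_node):
    """$/node, $/core, and $/GFLOPS from the cluster's total cost."""
    return {
        "$/node":   total_cost / nodes,
        "$/core":   total_cost / (nodes * cores_per_node),
        "$/GFLOPS": total_cost / (nodes * gflops_per_node),
    }

# Hypothetical example: 4 nodes of 2.4 GHz quad-cores, $2,500 total.
node_peak = peak_gflops(4, 2.4, 4)   # 38.4 GFLOPS per node
print(cost_metrics(2500, 4, 4, node_peak))
```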

I've also been finding the $2,500 systems from this overall group. It's
fun to look at how many nodes you can get for $2,500 as the year
went along. I'm hoping to publish an article on ClusterMonkey later
this year with the data and the plots. It's actually quite interesting.
Think of it as finding the optimal system for $2,500.

The most interesting thing is that the best $/GFLOPS comes from the Intel
quad-core Q6600. That shouldn't be too surprising, since the
Intel chips do 4 FLOPs/clock and the AMDs only do 2. On a
$/GFLOPS basis, the AMDs are about twice the cost of the cheapest
Intel system. But you generally get more nodes with AMD than
with Intel.
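The per-chip arithmetic behind that gap is straightforward. Clock speeds below come from the text above; treating the AMD 6000+ as a 3.0 GHz dual-core is my assumption for illustration.

```python
# Peak per chip = cores * clock_GHz * FLOPs/clock.
q6600   = 4 * 2.4 * 4  # Intel Q6600: quad-core, 4 FLOPs/clock
x2_6000 = 2 * 3.0 * 2  # AMD 6000+ (assumed 3.0 GHz dual-core), 2 FLOPs/clock

print(q6600, x2_6000)        # peak GFLOPS per chip
print(q6600 / x2_6000)       # Intel's per-chip peak advantage
```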

Doug thinks I'm nuts, but then again, I have to have a hobby :)
It's going to become even more interesting when the AMD quad-core
hits the desktop and when Penryn pops up.


Jeff

> Peter,
>
> Having some experience with low cost hardware, If you are
> doing number crunching multi-core seems to provide the
> best bang for buck. The following is the HPL performance that
> you can get for $2500. The Kronos and Microwulf clusters
> are detailed on http://clustermonkey.net, Norbert is the subject
> of a November Linux Magazine article.
>
>                                 Clock  Release
> Cluster/Processor (cores)       (MHz)   Date         HPL Performance
> ---------------------------------------------------------------------
> Kronos/Sempron 2500+ (8)        1750   7/2004    14.90 GFLOPS (Atlas)
> Microwulf/Athlon64 X2 3800+ (4) 2000   8/2005    26.25 GFLOPS (Goto)
> Norbert/Core 2 Duo E6550 (4)    2333   7/2007    45.55 GFLOPS (Goto)
>
>
> If you draw a line through them (only 3 points, I know) you get to
> 80 GFLOPS by 2010. Actually, with some tweaking I got Norbert
> up to 47.7 HPL GFLOPS. And notice I qualify the performance
> as "HPL GFLOPS" as YMMV.
>
> With really low cost systems one important aspect is the
> interconnect. The PCIe buses on low-end motherboards allow
> one to use inexpensive PCIe (Intel) Ethernet cards instead of
> 32-bit PCI cards. Some of the on-board GigE implementations are
> not very good.
>
> --
> Doug
>
>
>
>
>   
>> Recently, probably you noticed, Walmart began selling a $200 linux PC.
>> (Apparently the OS is just Ubuntu 7.10 with a small window manager
>> instead of Gnome or KDE). Now Slashdot points to
>> http://www.linuxdevices.com/news/NS5305482907.html, the MB being sold
>> separately for $60 ("development board"). It has 1.5GHz CPU,
>> unpopulated memory (slots for 2GB), one 10/100 connection. Does this
>> look to y'all like fair FLOPS/$ for a kitchen project? I'm thinking 6
>> of them as compute nodes per 8 port router, with a bigger head node
>> for fileserving. (actually I'll use a spare room but you know what I
>> mean). An arrangement like this might give faster RAM access per core,
>> compared to multi-core, since each core has no competition for its own
>> memory, right?
>> Thanks,
>> Peter
>> _______________________________________________
>> Beowulf mailing list, Beowulf at beowulf.org
>> To change your subscription (digest mode or unsubscribe) visit
>> http://www.beowulf.org/mailman/listinfo/beowulf
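Doug's three-point extrapolation above can be sketched as an ordinary least-squares fit of HPL GFLOPS against CPU release date (converted to fractional years from his table):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# (release date, HPL GFLOPS) for Kronos, Microwulf, Norbert
years  = [2004 + 7/12, 2005 + 8/12, 2007 + 7/12]
gflops = [14.90, 26.25, 45.55]

slope, intercept = fit_line(years, gflops)
print(f"trend: {slope:.1f} GFLOPS/year")
print(f"projected by end of 2010: {slope * 2011 + intercept:.0f} GFLOPS")
```

The fitted trend comes out near 10 GFLOPS/year, which lands close to the 80 GFLOPS Doug cites by the end of 2010.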

