[Beowulf] Re: GPU boards and cluster servers.

Vincent Diepeveen diep at xs4all.nl
Fri Sep 5 14:23:58 EDT 2008


AFAIK only rich government departments do business with companies  
such as DELL.

If you're really big you HAVE to sign some sort of deal with a big  
vendor anyway.

DELL delivers very old junk at the price for which you can usually get  
newer junk. Big companies have big overhead,
maybe sometimes simply because they decided they WANT x% profit on  
every deal.

I remember that at some company where i worked for a few months, i got  
hardware from DELL that wasn't even being sold anymore.

Obviously that means the service contract in question is just wasting  
government money.

The big trick that every salesman understands, and that civil-servant-type  
managers/directors do not understand so well, is that something
that's new this year is totally outdated if you deliver it two years  
from now. Not long ago i heard that a specific
company bought out a service contract that was still delivering XT machines.

Those here who want to run artificial-intelligence software all  
have a big need for crunching power at a low price,
much in contrast to the rest here, who usually want
double precision AND big bandwidth AND big RAM AND 100%  
reliability.

100% reliability AND big RAM AND big bandwidth to huge RAM come  
at a big price.

Optimizations in that category of tree-searching software and the like  
(encryption is just a subsection of it)
happen at a level that most guys on this list will never  
understand. It is much better optimized than other software.

Sometimes optimizations of a kind happen there that make hardware  
engineers, who are not exactly laymen,
say: "oh dear, is that the case?" when you talk to them.

Suppose you search for the holy grail on a GPU and one GPU's RAM is  
bad, so all your calculations there failed.

Heh, you won't even notice it soon, as you have no 'result' that is  
deterministically verifiable.
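A cheap way to catch such silent faults, if you do care, is to run a sample of the work units twice and compare: a non-deterministic fault makes the duplicates disagree. A minimal sketch of the idea, where `score_position` is a made-up stand-in for whatever the GPU kernel would compute:

```python
import random

def score_position(params, position, fault_rate=0.0):
    """Made-up stand-in for one GPU work unit; fault_rate simulates
    a bad RAM chip randomly corrupting the result."""
    score = sum(p * f for p, f in zip(params, position))
    if random.random() < fault_rate:
        score += 1 << 20  # silently corrupted result
    return score

def count_silent_faults(params, positions, fault_rate=0.0):
    """Score every position twice; a mismatch exposes a silent fault
    that a single, non-verifiable run would never reveal."""
    return sum(
        score_position(params, pos, fault_rate)
        != score_position(params, pos, fault_rate)
        for pos in positions
    )
```

On healthy hardware every duplicated pair matches; on a flaky card the mismatch count climbs, and only then do you know your "results" were garbage.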

An example is a parameter optimization i want to perform for my  
chess program. I would need to write a new program for it,
which is a huge job on a GPU; basically that program would only be the  
evaluation function of my program.

Such optimization runs are embarrassingly parallel.

What matters is simply how many instructions per cycle i can push through.
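Such a sweep parallelizes trivially: every node scores candidate parameter sets on its own, no communication needed, and only the best result has to be collected. A minimal sketch of the idea in Python (the `fitness` function here is a made-up stand-in, not an actual chess evaluation):

```python
from multiprocessing import Pool
import random

def fitness(params):
    """Stand-in for scoring one candidate parameter set of a chess
    evaluation function over a suite of test positions; here just a
    dummy function with its optimum at (3, 5)."""
    x, y = params
    return -((x - 3.0) ** 2 + (y - 5.0) ** 2)

def best_params(candidates, workers=4):
    """Embarrassingly parallel sweep: each candidate is scored
    independently; only the best one survives."""
    with Pool(workers) as pool:
        scores = pool.map(fitness, candidates)
    return max(zip(scores, candidates))[1]

if __name__ == "__main__":
    random.seed(42)
    candidates = [(random.uniform(0, 10), random.uniform(0, 10))
                  for _ in range(1000)]
    print(best_params(candidates))
```

Because the workers never talk to each other, a lost or corrupted result just means one candidate goes unscored; the search as a whole doesn't care.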

Having a few GPUs at home do that crunching work is very attractive.

The biggest problem with GPUs is that i have no money to buy a machine to put
a GPU in, let alone buy GPUs just to toy with.

So that's why a friend of mine is hopefully going to run it on a core or  
160 Xeons at TU Delft,
when the machines are idle and not being used by others. Probably  
the best project name for this would be Ikarus,
were it not already the name of an existing chess program, as the  
final goal is to explore possibilities after automatically recognizing
new patterns in a later phase.

It will run for years.

The big difference with this type of crunching power is that if  
something goes wrong, that's not a problem;
if a bit flips or whatever, it doesn't matter at all. I just need the  
best parameter set it can find for me while searching for the holy grail.

I understand Li, Bo there very well. He wants the maximum amount of  
crunching power and can make do with 32 bits.

A good 1000 watt PSU here is around 100 euro. For under 500 euro  
you can assemble a great box, then only add the GPUs.

Getting 40% of peak performance out of a video card is very impressive by the  
way, especially when i consider that no one around me with
different types of software (from statistical software to Monte Carlo  
to multimedia encoding) gets anywhere near that performance
out of it.

Yet the difference is that all the people here who are cheering for  
GPU crunching power are the same type of guys.
Though on paper the software is doing something totally different, they  
all search for some sort of holy grail in an embarrassingly
parallel manner. The failed attempts are usually game-tree searches  
that somehow need to combine results using hashtables
and/or FFT-type approaches.

Vincent

On Sep 5, 2008, at 5:29 PM, Robert G. Brown wrote:

> On Fri, 5 Sep 2008, Gerry Creager wrote:
>
>> At $6k US, and requiring me to get Vista, I'd rather build a  
>> system starting with, e.g., an Asus motherboard.  I save one-third  
>> the price and I don't have to file the environmental impact  
>> statement on the flawed OS.  I also get NICs I can easily set to  
>> accommodate Jumbo Frames.
>
> If you talk to a Dell rep, you can ALMOST invariably get any
> server-class system they sell without an operating system or with  
> Linux
> installed, especially if you are ordering in quantity.
>
> Just FYI -- otherwise I don't disagree with anything you said,
> especially Vista of Evil.  Although hey, it runs great on 4 GB and up
> systems, at least if you don't run large applications on it... or  
> so I'm
> told.
>
>    rgb
>
>>
>> gerry
>>
>> andrew holway wrote:
>>> The new Dell R5400 Rackmount workstation is ideal for this. You can
>>> slip two Xeons, 16GB ram and two chunky graphics cards in there.
>>> ta
>>> Andy
>>> On Fri, Sep 5, 2008 at 6:06 AM, Li, Bo <libo at buaa.edu.cn> wrote:
>>>> Hello,
>>>> It seems your platform is more suitable for a cluster. Great,  
>>>> and when are
>>>> the products available? And is there any software support from you?
>>>> Regards,
>>>> Li, Bo
>>>> ----- Original Message -----
>>>> From: Maurice Hilarius
>>>> To: Li, Bo
>>>> Cc: kus at free.net ; i.kozin at dl.ac.uk ; Beowulf Mailing List
>>>> Sent: Friday, September 05, 2008 9:36 AM
>>>> Subject: GPU boards and cluster servers.
>>>> Li, Bo wrote:
>>>> Hello,
>>>> Is it too expensive for the platform?
>>>> The easy solution is:
>>>> And X48 level motherboard with CF support, about $150
>>>> Q6600 Processor, about $170
>>>> Two 4870X2 $1,100
>>>> Two Seagate SATA Harddisk 500G for Raid1, about $140
>>>> 4*2G DDR2 RAM, about $150
>>>> PSU 1000W, about $200
>>>> A big box, about $100
>>>> That's all, in total, $2,010.
>>>> Regards,
>>>> Li, Bo
>>>> True, to a point.
>>>> Most people will not use a desktop board for a cluster.
>>>> Too I/O bound.
>>>> Finally the memory capacity of these desktop boards is pretty  
>>>> limiting.
>>>> Typically 8GB maximum.
>>>> Generally a XEON or Opteron chipset and CPUs will be the choice.
>>>> Also, for most GPU/FPU performance work, the memory bandwidth  
>>>> bottleneck on
>>>> the Intel product is too much of a negative factor.
>>>> Lastly, for clusters, most want a rackmount chassis.
>>>> We developed a 2U designed for a server board and 2 GPU boards.
>>>> The big challenge there is power.
>>>> We use dual 600W PSUs. One for motherboard, and one for dual GPU  
>>>> boards.
>>>> --
>>>> With our best regards,
>>>> Maurice W. Hilarius         Telephone: 01-780-456-9771
>>>> Hard Data Ltd.                FAX:          01-780-456-9772
>>>> 11060 - 166 Avenue         email:maurice at harddata.com
>>>> Edmonton, AB, Canada         http://www.harddata.com/
>>>>      T5X 1Y3
>>>> _______________________________________________
>>>> Beowulf mailing list, Beowulf at beowulf.org
>>>> To change your subscription (digest mode or unsubscribe) visit
>>>> http://www.beowulf.org/mailman/listinfo/beowulf
>>
>>
>
> -- 
> Robert G. Brown                            Phone(cell): 1-919-280-8443
> Duke University Physics Dept, Box 90305
> Durham, N.C. 27708-0305
> Web: http://www.phy.duke.edu/~rgb
> Book of Lilith Website: http://www.phy.duke.edu/~rgb/Lilith/Lilith.php
> Lulu Bookstore: http://stores.lulu.com/store.php?fAcctID=877977
>

_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf


