Jim.Morton at aventis.com
Tue Jun 19 13:54:39 EDT 2001
I remember seeing an on-line article which featured Peltier systems on which
the hot side was cooled by Freon that had been compressed and cooled by a
typical refrigerator system. The system was built entirely self-contained
within a very large server chassis.
I think we need to regain perspective.
In the not-so-distant past the way to get lots of computer power was to
build very specialized, extremely high-performance CPUs tightly coupled to lots
of fast memory. The CPUs would often have specialized architectures such as
vector processing, instruction tailgating etc. These computers cost $20M
typically and required a dedicated staff of people to maintain the hardware
as well as a troupe of system people for the software. The computers were
pushing the limits of technology as defined by the speed of light - they
needed to be physically small so that the propagation time through the
system was less than the period of one clock cycle. On most circuit modules,
there were zig-zags in the copper traces to act as fine-tune delay lines so
signals would be concurrent within the system. It became clear to the
designers that they could only make the machines perhaps one or two orders
of magnitude faster before they were banging into very hard limits of
physics. The obvious extension was to have more processors. With the
advances in chip technology nipping at the heels of the custom expen$ive
vector processors, the designers realized that building a thousand $1,000
processing elements into a system was far more powerful for certain classes
of problems than the big vector machines. The benefit-to-cost ratio for THESE
CLASSES OF PROBLEMS was very high. For other classes of problems, the
interconnects between the processors were a limiting factor and the massively
parallel systems were not as efficient.
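As a back-of-the-envelope sketch of that speed-of-light budget (my own illustrative numbers, not figures from any particular machine - the 2/3-of-c propagation factor is a common rule of thumb for copper traces):

```python
# Rough check: how far can a signal travel in one clock period?
# Assumes signals propagate at roughly 2/3 the speed of light in
# copper traces (a common rule of thumb; real machines vary).

C = 3.0e8           # speed of light in vacuum, m/s
PROP_FACTOR = 0.66  # signal velocity as a fraction of c (assumed)

def max_path_length(clock_hz):
    """Longest distance a signal can cover in one clock period, in meters."""
    period = 1.0 / clock_hz
    return C * PROP_FACTOR * period

# At ~100 MHz the whole signal path gets about 2 m per cycle;
# at 1 GHz it must fit in roughly 20 cm - hence physically small machines.
for hz in (100e6, 1e9):
    print(f"{hz/1e6:.0f} MHz -> {max_path_length(hz)*100:.0f} cm per clock period")
```

This is also why the zig-zag traces mentioned above make sense: a few extra centimeters of copper is a deliberate fraction-of-a-nanosecond delay.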
If you are building a Beowulf, I think you should have determined that
it is an appropriate architecture for the problem you are solving. If it
is an appropriate architecture, then why make the nodes more complex when
adding more nodes is an efficient way to expand the power of the system?
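The "is it an appropriate architecture?" question can be sketched with Amdahl's law - my illustration, not something the post cites, and the serial fractions below are made-up examples:

```python
# Sketch: why "just add nodes" only pays off for suitable problems.
# Amdahl's law: with serial (non-parallelizable) fraction s, the
# speedup on n nodes is 1 / (s + (1 - s) / n).

def speedup(serial_fraction, nodes):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / nodes)

# A nearly embarrassingly parallel job (s = 1%) keeps scaling;
# a communication-bound job (s = 25%) flattens out after a few nodes,
# no matter how zippy the individual nodes are.
for s in (0.01, 0.25):
    print(f"s={s:.2f}: 16 nodes -> {speedup(s, 16):.1f}x, "
          f"1000 nodes -> {speedup(s, 1000):.1f}x")
```

For the flat curve, faster interconnect (or a different architecture) helps; faster or fancier-cooled nodes mostly do not.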
Remember that the things we generally do with these computers are complex
enough - building mechanical fragility and risk of failure into the system
may not buy enough benefit for the long-term cost. If you need the really
zippy fast processors and an extreme high-bandwidth MPP interconnect, buy
a T3D, or get your project supported by an agency which can give you time
on one - a Beowulf, no matter how many chrome go-fasters you put on it,
will not be a good solution for you.
Please understand that I love to tinker too. I am thinking about
building a zippy do-dah evaporatively water cooled system myself - just for
the fun of playing with nifty hardware replete with pulsing liquid hoses and
maybe even some way cool blue LEDs in the case somewhere. But with my
present understanding of production requirements, I would not put such a
system into production for scientific research, just as I would not drive
a hot-rodded deuce coupe to work every day - maybe a trip once or twice a
year to blow the wax out of the PHB's ears, but not the daily commute.
Respectfully bracing for shock
> d.bussenschutt at mailbox.gu.edu.au[SMTP:d.bussenschutt at mailbox.gu.edu.au]
> Sent: Monday, June 18, 2001 11:28 PM
> To: josip at icase.edu
> Cc: beowulf at beowulf.org
> Subject: Re: liquid cooling
> >Tom's Hardware idea of a water cooled CPU:
> For those that have looked at the link above, here's a few more ideas of
> my own for improvement on their CPU water-cooler.
> 1) Take the 'radiator' from the cooling system, remove those horrible
> noisy fans and put the radiator in the fridge.
> 2) Ok, so your computer room/office/bedroom/whatever isn't next to the
> kitchen? Well now you have the PERFECT excuse to go out and buy that bar
> fridge you always wanted next to the computer.
> 3) Oh if you live someplace COLD, don't bother with the fridge, just put
> the 'radiator' outside, run the hoses through the window (or wall) or
> 4) Is it just me or is this WAY off-the-topic for a beowulfery list?
> 5) :-)
> David Bussenschutt Email: D.Bussenschutt at mailbox.gu.edu.au
> Senior Computing Support Officer & Systems Administrator/Programmer
> Location: Griffith University. Information Technology Services
> Brisbane Qld. Aust. (TEN bldg. rm 1.33) Ph: (07)38757079
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf