[Beowulf] Windows cluster

Andrew M.A. Cater amacater at galactic.demon.co.uk
Wed Sep 7 22:42:10 EDT 2005


On Sun, Sep 04, 2005 at 01:44:20PM +0100, rrankin wrote:
> We have just taken delivery of a Windows cluster.

Bad luck :(  This list is _primarily_ a list of Linux and Unix folk, 
though many people have dealt with Windows: the level of expertise
on purely Microsoft Windows-based clusters is generally lower, because
far fewer of them have been built for serious number crunching. 

Microsoft Windows "clustering" tends to refer to High Availability [HA] 
work and failover of important servers. [Experienced Linux users may 
feel free to mentally substitute the words "Low" and "fallover" in the 
above sentence, since that is the experience that many of us have had 
when dealing with Windows systems :) ]

> 
> Master node connected to network and 8 64bit Xeon nodes on private 
> cluster network.
> 

Sounds sensible: this should fit in a small rack. If you have individual
desktop size machines and have to connect them ad hoc, that's more of a 
problem. The usual considerations of power and air conditioning apply: 
with Windows, you may also have to consider system reboots. EM64T / 64 
bit Xeons are a bit more power hungry than the corresponding Opteron 
chips: there are periodic comparisons here on the list for this sort of 
thing. In price/performance ("bang per buck") terms, the Opterons 
probably run slightly ahead for HPC, so the choice of Xeons is 
surprising - but this depends on the vendor: from Dell, for example, 
you'll only get Xeons in their PowerEdge machines because they "don't 
do" AMD at the moment.

Odd random considerations which apply to any "corporate" cluster [but
possibly more so for Windows :) ]

Network and security:

How are you running your inter-node "private" network - a separate 
network switch? [If the cluster goes berserk and fires off a network 
packet storm, is it _really_ local?]  
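One quick sanity check on that question, runnable from the head node - 
assuming your private network sits on an RFC 1918 range (the addresses 
below are made-up examples, not from the original post):

```python
# Check that compute-node addresses really are on a private (RFC 1918)
# range rather than something routable onto the campus network.
# Addresses are hypothetical examples - substitute your own.
import ipaddress

nodes = [f"192.168.10.{i}" for i in range(10, 19)]  # head + 8 compute

for addr in nodes:
    ip = ipaddress.ip_address(addr)
    # is_private is True for 10/8, 172.16/12 and 192.168/16
    print(addr, "private" if ip.is_private else "ROUTABLE - check your switch!")
```

It won't tell you the switch is physically separate, of course, but it 
catches the embarrassing case where someone numbered the "private" side 
out of the university's public allocation.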

How is the head node firewalled from the main university network?
[Do you want the entire university undergraduate population using this?
What do you do about viruses?]

Health and safety and installation:

Who does the system maintenance and replacement of failed nodes? How
are they cabled? Can you reboot any machine easily? - all the
usual stuff.

[Note to self: more than five machines on/under my desk at home is 
probably overkill and should be properly arranged with cable ties and 
provision of leg space :) ]
 
> While the system will be used to develop and run parallel programs,
> Fluent...., one of the primary reasons for setting up the system is to
> assist academics with low-powered systems on their desktop to run packages
> such as Matlab, Mathematica, SAS ...
> 

Wrong solution to an inadequately specified problem. Nail the offending
requestor to the nearest ceiling :)  Windows is not primarily intended
for such use. If I recall correctly, Fluent has been ported to Windows 
but most of the CFD folk use Linux for this: implementing parallel 
programming and message passing algorithms and so on is significantly 
easier.  Licensing costs for Matlab, Mathematica, SAS on a multi-node
cluster and concurrent licence administration costs probably outweigh
the hardware costs by a factor of four or more to one here.
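For a back-of-envelope feel for that ratio, something like the sketch 
below - every figure in it is an invented placeholder, so plug in your 
vendor's real quotes before waving it at management:

```python
# Back-of-envelope licence-vs-hardware cost estimate.
# ALL figures here are invented placeholders, not real prices.
nodes = 9                      # 1 head node + 8 compute nodes
hardware_per_node = 3000       # hypothetical cost per node

per_node_licences = {          # hypothetical per-node annual licence fees
    "Matlab": 1500,
    "Mathematica": 1200,
    "SAS": 2500,
}

hardware_total = nodes * hardware_per_node
licence_total = nodes * sum(per_node_licences.values())

print(f"hardware (one-off): {hardware_total}")
print(f"licences (per year): {licence_total}")
print(f"licence/hardware ratio: {licence_total / hardware_total:.1f}")
```

Note the licence figure recurs every year, and concurrent-licence 
administration is on top of that, which is how the multiple climbs to 
four-to-one and beyond over the life of the machine.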

Something like Quantian [a specialist Linux live CD], together with 
Free/Open Source equivalents of some of the above - Octave, R and so 
on - is very cost effective if you are doing this on the cheap :)
  
> In Linux clusters the applications are held in one area. Does each node of a
> Windows cluster need to have a copy of each application?
> 

You need to check with the vendors of each individual application as to 
their licensing policy. Some will be per node, some may be per CPU, some 
may be per seat. There are commercial licensing managers - FlexLM ?? - 
but this is now your headache. It may be that you will need one copy 
of each application per node to comply with the licences. Check for 
academic pricing. Per-CPU licensing also raises the interesting question 
of enabling HyperThreading on the Intels - is that one CPU or two you 
have there? :)
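The difference those policies make is easy to put numbers on. A small 
sketch - the policy names and the "HyperThreading counts double" rule 
are illustrative assumptions, so check your actual licence agreement:

```python
# Licence counts for a small cluster under different (assumed) policies.
# The hyperthreading-doubles-the-count rule is an illustrative worst
# case, not any particular vendor's real policy.
def licences_needed(policy, nodes, cpus_per_node, hyperthreading=False):
    if policy == "per-node":
        return nodes
    if policy == "per-cpu":
        cpus = nodes * cpus_per_node
        # Worst case: each hyperthreaded sibling counts as a CPU.
        return cpus * 2 if hyperthreading else cpus
    if policy == "per-seat":
        return 1   # one interactive user at a time, simplest reading
    raise ValueError(policy)

print(licences_needed("per-node", nodes=8, cpus_per_node=2))    # 8
print(licences_needed("per-cpu", nodes=8, cpus_per_node=2))     # 16
print(licences_needed("per-cpu", nodes=8, cpus_per_node=2,
                      hyperthreading=True))                     # 32
```

Same eight boxes, a factor of four in licence count depending on how 
the vendor chooses to read "CPU" - which is why you ask first.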

> How do users access each node - terminal services?
> 

In the Linux world, almost certainly SSH. In the MS world, Terminal 
Services / Citrix are probably the answers. Check your CALs. Do you
have enough copies of Windows Server 2003? Your headache - see nailing
the cluster specifier to the ceiling (above).

> Would be interested in access to a 'dummies' guide to establishing a windows
> cluster
> 

Who specified the requirements for the cluster, and that it should run 
Windows? You should be getting them to do this sort of thing 
as part of the initial design phase, before anything is ever purchased.
Your friendly vendor should have been able to provide much of this 
information as well.

For anybody else who may find this as a result of a Google for "Windows
cluster HPC beowulf" or similar in the future - 

A) Get the requestor to lurk on the Beowulf list for a month or so 
_before_ buying the cluster and landing the task of researching 
it/building it/running it on some hapless system administrator / 
Principal Analyst who is already considerably overworked.

B) Just use Linux :)

> Ricky 
> 
> 
> __________________
> Ricky Rankin
> Principal Analyst
> Information Services
> Queen's University Belfast
>  
> Tel: 02890 974824
> Fax: 02890 335073
> email: r.rankin at qub.ac.uk
>  
> 
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf


