[Beowulf] centos5 as cluster os

Joe Landman landman at scalableinformatics.com
Fri Feb 15 14:45:45 EST 2008



Robert G. Brown wrote:

>> As few things as possible installed.  Keep it simple.  Fewer things 
>> means less of an attack surface, a smaller management base, and 
>> hopefully smaller emergent complexity.
> 
> Awwww, but then you don't have any fun!  And this last exploit merely

er ... ah ... we must have different definitions of what "fun" is.

I just spent yesterday discovering that Yahoo/ATT.net was the source of 
a DoS attack against us, and firewalling the offenders off made me 
happy.  Small attack surface.  Sort of like an electronic version of 
"300".  Channel 'em, and if you don't like 'em, well ...

> required the right binary built from source, not one on your system
> anyway.  Minimalism is again a matter of cost benefit.  Different people
> or organizations will have different comfort zones or goals.  Minimalism
> on the desktop means giving up a lot of possibly useful stuff.
> Minimalism in a cluster means having to spend more time putting stuff
> back when it turns out that you need it after all.  Both of these are

Anyone using their cluster for TeX?  I used it to write a thesis.  A 
distributed make environment (yeah, I built my thesis with a makefile) 
would have helped ...
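The thesis-as-make-target idea is easy to sketch: each chapter is an 
independent target, so `make -j` builds them in parallel (and a 
distributed make could farm them out to nodes).  The file names are 
made up, and the pdflatex passes are stubbed with `cat` so the sketch 
runs without a TeX installation:

```shell
# Toy sketch: chapters as independent make targets, built in parallel.
# `cat` stands in for pdflatex so this runs anywhere.
mkdir -p /tmp/thesis_demo && cd /tmp/thesis_demo
echo 'Chapter 1' > ch1.tex
echo 'Chapter 2' > ch2.tex

# printf's \t emits the literal tab that make requires before recipes
printf 'thesis.pdf: ch1.pdf ch2.pdf\n\tcat ch1.pdf ch2.pdf > thesis.pdf\n\n' >  Makefile
printf 'ch1.pdf: ch1.tex\n\tcat ch1.tex > ch1.pdf\n\n'                       >> Makefile
printf 'ch2.pdf: ch2.tex\n\tcat ch2.tex > ch2.pdf\n'                         >> Makefile

make -j2    # ch1.pdf and ch2.pdf build concurrently, then thesis.pdf
```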

> costs; you have to balance them against the perceived risk benefit, which
> in turn depends on your estimate of the risk of attack, the likely
> window of opportunity for an attack, your degree of vigilance, and the
> cost of putting things right again.
> 
> I personally prefer high vigilance (as it has historically ALWAYS been
> the case for me that vigilance reveals cracking attempts or successes,
> and there are ALWAYS going to be holes I don't get closed, at least not
> right away or maybe in time) coupled with a robust and easily restored
> backup and installation system.  If a host gets cracked, reinstall it

Oddly enough, we really are saying similar things.

a) we won't get everything
b) check the logs
c) apply the patches
d) prepare for the worst ...


> via kickstart/PXE and forget it.  No local data on a host.  Backup

Heh...  I think I may have gone past you on this one.  No OS on the 
host.  PXE boot it.  No more installs.  Put up a unionfs/aufs and let 
'em write all over / and /etc and ...  and then see what happened.  :)
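A rough sketch of that diskless setup, from an initramfs on a node: 
PXE-boot a read-only NFS root, then union a RAM-backed writable layer 
over it so the node can scribble on / and /etc without touching the 
master image.  The paths and server name here are illustrative, and 
the commands assume root on a kernel carrying the aufs patches:

```shell
# Illustrative diskless-node root assembly (run from an initramfs,
# as root, on an aufs-capable kernel; names are hypothetical).
mount -t nfs -o ro nfsserver:/exports/node-root /ro   # immutable base image
mount -t tmpfs tmpfs /rw                              # per-boot scratch layer
mount -t aufs -o br=/rw=rw:/ro=ro none /newroot       # union: writes land in /rw
# A reboot discards /rw, so a cracked or misconfigured node heals itself.
```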



> everything.  Protect the servers with far greater vigilance than nodes
> or clients.  Then don't worry so much about the periphery.

Nodes are expendable/disposable.  It's the rest of it that is hard to 
replace.  So make it easier.

> 
> But there are places where cracking has a much higher up-front cost, or
> a higher risk.  So I don't argue that this recipe is right for all.
> 
>> * I have been bashed/castigated in 2 fora recently for daring to 
>> suggest that some technology may have alternatives that one might wish 
>> to consider, or there may be known issues, or whatever.  Shooting the 
>> messenger.  Not a wise move.  You don't have to believe me, though I 
>> do recommend that you make a backup of your Rocks system if you do 
>> choose to run yum.  You can run yum safely on it, though it takes some 
>> work. And the Rocks folks have recently formed a user group to help 
>> make sure it is safe going forward (kudos to the Rocks folks for doing 
>> this).
> 
> What?  You said technology has alternatives?
> 
> Well no WONDER you got bashed.  I'd have bashed you here if only I'd
> known.   Look:
> 
> <bash>

[thaWackaaaaa]

  <owie!>

> 
> There.  Now it's three out of three;-)

D'oh!


-- 
Joseph Landman, Ph.D
Founder and CEO
Scalable Informatics LLC,
email: landman at scalableinformatics.com
web  : http://www.scalableinformatics.com
        http://jackrabbit.scalableinformatics.com
phone: +1 734 786 8423
fax  : +1 866 888 3112
cell : +1 734 612 4615
_______________________________________________
Beowulf mailing list, Beowulf at beowulf.org
To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
