Call for Co-Author: Linux Compute Cluster book (Chander Kant)

Helms, Scott A HelmsScottA at
Fri Feb 7 15:25:55 EST 2003

Hey, if you need somebody really dumb to test the ease of use of your new book, I'll volunteer!

-----Original Message-----
From: beowulf-request at [mailto:beowulf-request at]
Sent: Friday, February 07, 2003 11:01 AM
To: beowulf at
Subject: Beowulf digest, Vol 1 #1206 - 6 msgs

Send Beowulf mailing list submissions to
	beowulf at

To subscribe or unsubscribe via the World Wide Web, visit
or, via email, send a message with subject or body 'help' to
	beowulf-request at

You can reach the person managing the list at
	beowulf-admin at

When replying, please edit your Subject line so it is more specific
than "Re: Contents of Beowulf digest..."

Today's Topics:

   1. Call for Co-Author: Linux Compute Cluster book (Chander Kant)
   2. Fwd: [GE users] JGrid: an RMI-based Java interface for Grid Engine (Ron Chen)
   3. Re: Gateway problems in beowulf cluster (Angelos Molfetas)
   4. leaky capacitors killing motherboards (Ken Chase)
   5. Myrinet hardware reliability (Victoria Pennington)
   6. Question about clusters (KNT)


Message: 1
Date: Thu, 6 Feb 2003 12:51:25 -0800 (PST)
Subject: Call for Co-Author: Linux Compute Cluster book
From: "Chander Kant" <ck at>
To: <beowulf at>
Reply-To: ck at


I am currently looking for a co-author to join me
in an ongoing project to write a book on Linux
compute clusters. (Yes, I know this will be yet
another book on this topic, but hopefully the
goals and outcomes of this one are a bit different
from the others.)

The current (reviewable) status of the project can
be seen at:

(The above has two chapters and an outline for the rest.
I have other notes and materials which have not yet been
cleaned up for review.)

Please let me know if you have the time, background
and interest to work with me on this.

I am looking for someone who has been involved extensively
with Linux compute clusters, preferably from a lab or
academia (though not necessarily). Also, preferably someone
with more of a cluster programming background (again not
necessary, but this would complement my more systems-oriented
background).



Chander Kant
ck at


Message: 2
Date: Thu, 6 Feb 2003 16:52:57 -0800 (PST)
From: Ron Chen <ron_chen_123 at>
Subject: Fwd: [GE users] JGrid: an RMI-based Java interface for Grid Engine
To: Beowulf <beowulf at>

--- Charu Chaubal <Charu.Chaubal at> wrote:
> Date: Wed, 05 Feb 2003 16:06:42 -0800
> From: Charu Chaubal <Charu.Chaubal at>
> To: users at,
> dev at,
>         announce at
> Subject: [GE users] JGrid: an RMI-based Java
> interface for Grid Engine
>
> A new package providing a prototype RMI-based Java
> interface for Grid Engine has been posted to the
> Grid Engine Project HOWTO page.  The HOWTO describes
> the prototype in detail, and provides a link from
> which you can download the packages.
> Find it on the HOWTO page, or follow this direct link:
> regards,
> 	Charu
> -- 
> # Charu V. Chaubal            # Phone: (650) 786-7672 (x87672) #
> # Grid Computing Technologist # Fax:   (650) 786-7323          #
> # Sun Microsystems, Inc.      # Email: charu.chaubal at        #



Message: 3
Date: Fri, 7 Feb 2003 14:57:51 +1100 (EST)
From: Angelos Molfetas <amolfetas at>
Subject: Re: Gateway problems in beowulf cluster
To: Mike Davis <jmdavis at>
Cc: beowulf at

--- Mike Davis <jmdavis at> wrote:
> It should work if you have IP forwarding set up. There
> shouldn't be any difficulty, since the channel-bonded
> interfaces have to route through a single interface
> to the larger network. Why do you want to reach the
> outside from the beowulf?

Our cluster is locked in a LAN room which has one
KVM-enabled terminal and dedicated A/C. The cluster will
be used by students doing parallel programming, by
postgraduate students doing cluster projects, and
probably by various of our PhD students for their
research. This means that our beowulf cluster will have
to have some limited connectivity with the university
network.

> Do you want a one-way, or two-way connection?

I was thinking of allowing ssh access to the cluster
from outside (i.e. port-forwarding ssh connections to
the master node). I will probably allow outgoing
masqueraded connections, since users may want to access
network drives (for example, students may want to save
work on their network drives on our department's
student server).
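
To make that concrete, what I have in mind on the gateway
box is something along these lines (a rough sketch only;
the interface names and private addresses below are just
placeholders, not our actual setup, and I have not tested
this yet):

  # let the gateway route between the campus NIC and the cluster side
  echo 1 > /proc/sys/net/ipv4/ip_forward

  # forward incoming ssh on the public interface to the master node
  iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 22 \
           -j DNAT --to-destination 192.168.1.1:22

  # masquerade outgoing connections from the cluster's private network
  iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth0 -j MASQUERADE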

> For security purposes, I never forward IP from the
> beowulf to the outside and tightly limit outside
> traffic to the gateway node.

I agree that one has to be very security-conscious;
that's why I am thinking of allowing only ssh traffic in
the beginning. We are also thinking of limiting access to
our university's IP address range. These restrictions can
be relaxed if there are genuine reasons for doing so, but
at the moment there are not any.



> Angelos Molfetas wrote:
> >Hello Everyone,
> >
> >I was wondering if anyone has had any problems with
> >getting channel bonding working with iptables?
> >
> >I am currently trying to configure a Linux box which
> >acts as a gateway between our Beowulf cluster (channel
> >bonded) and the university network (single fast
> >ethernet). I am trying to join (using SNAT/DNAT) the
> >gateway's public IP address with the master node's
> >private IP address. This way users can just ssh to the
> >gateway and be connected automatically to the master
> >node.
> >
> >I don't think the problem is with my iptables scripts,
> >as they run properly when the beowulf cluster is
> >running in single-NIC mode. As soon as we switch
> >channel bonding on, it refuses to work.
> >
> >I suspect that the Linux kernel has problems routing
> >packets between a channel-bonded interface (bond0
> >[eth1 + eth2], for example) and a single-NIC interface
> >(eth0, for example).
> >
> >I was wondering if anyone else has had a similar
> >problem in their beowulf-building experience.
> >
> >Thanks,
> >
> >Angelos
> >
> -- 
> Mike Davis               Web and Research Computing Services
> Unix Systems Manager     Virginia Commonwealth University
> jmdavis at               804-828-3885 (fax: 804-828-9807)


Message: 4
Date: Fri, 7 Feb 2003 02:35:34 -0500
From: Ken Chase <math at>
To: beowulf at
Subject: leaky capacitors killing motherboards


We've had one board die with weird goo around the caps and burn marks on
it. Beware.

Ken Chase, math at  *  Velocet Communications Inc.  *  Toronto, CANADA 


Message: 5
Date: Fri, 7 Feb 2003 09:40:05 +0000
From: Victoria Pennington <v.pennington at>
To: beowulf at
Subject: Myrinet hardware reliability


We have a 113 node IBM x330 cluster with Myrinet 2000.  We're
experiencing very high failure rates on Myrinet switch ports
(average 3 per month) and on Myrinet NICs to a lesser extent
(about 1 per month).  Ports and NICs are fine one minute,
then one or the other just dies (for good).  Cables
(fibre, not copper) seem fine - one or two failures only in
nearly a year.

There is no pattern in the failures, and they are entirely
unrelated to usage levels; seldom used nodes are just as
likely to have failures as heavily used nodes.

We have another small IBM cluster with Myrinet 2000
(16 port switch with copper cables), and this has run solidly
for nearly 2 years with not one Myrinet hardware fault.

I'd be really interested to know of others' experiences with
Myrinet kit, especially in larger clusters.

Dr Victoria Pennington
Manchester Computing, Kilburn Building,
University of Manchester,
Oxford Road, Manchester M13 9PL
tel. 0161 275 6830, email: v.pennington at


Message: 6
Subject: Question about clusters
From: KNT <zajcewl at>
To: beowulf at
Date: 07 Feb 2003 17:52:14 +0100

I wanted to ask if there is a way of theoretically calculating a
cluster's power with a mathematical formula, based on each node's
processor type, RAM, etc. Assume also that the components of each
node can be different.
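
For instance, is the usual back-of-the-envelope figure just the sum of
each node's peak floating-point rate? A rough sketch of what I am
guessing at (the numbers below are made up purely for illustration):

  theoretical peak ~= sum over nodes i of
                      CPUs_i x clock_i x FLOPs-per-cycle_i

  e.g.  8 nodes x 1 CPU  x 2.4 GHz x 2 FLOPs/cycle = 38.4 GFLOPS
      + 4 nodes x 2 CPUs x 1.0 GHz x 1 FLOP/cycle  =  8.0 GFLOPS
                                             total = 46.4 GFLOPS

I realise a figure like this says nothing about RAM, interconnect, or
real application performance, which is partly why I am asking.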

						-Thanks from the above
~ zajcewl at ~ knt at ~
~                      ~ knt at        ~
~ GG:3418267           ~ ICQ: 99730430    ~
~ Registered Linux User: 300900           ~
~ Registered Linux Machine Number: 186168 ~


Beowulf mailing list
Beowulf at

End of Beowulf Digest
