Clustering software makes it all work

In the first part of this article I showed you how to set up the basics of our virtual cluster: we started with a fresh install of Xen, created five virtual machines (one master and four slaves), and then configured the network and NFS so that users could share their home directories across the cluster. That is the basis for this second part, in which we install a number of packages that will allow us to run parallel programs on the cluster and manage it more efficiently. Concretely, you will learn how to install the C3 command suite, the Modules package for easily switching environments, a version of MPICH for running parallel programs, and the Torque/Maui combination for job queue management. These packages (especially Torque/Maui) can be configured extensively according to your needs; for this virtual cluster we will use a minimal configuration, but it should be enough to get you started. If you just want to try out the virtual cluster without performing all the steps yourself, you can grab a ready-made cluster image from the download:contrib:cluster page at Jailtime.org.

Creating a Snapshot of The Cluster So Far

In the first part of the article we did quite a lot of work, and having to get this far again if something goes wrong later on would be tedious. So before we continue, let's create a snapshot of the cluster as it stands. Doing so is really simple: we just make sure every machine is stopped with the halt command and then create a new directory to hold all the files. For this snapshot we will use a directory called cray-network.configuration:
 
angelv@yepes:~$ sudo mv /opt/xen/cray /opt/xen/cray-network.configuration
angelv@yepes:~$ sudo mkdir /opt/xen/cray
angelv@yepes:~$ sudo rsync -azv /opt/xen/cray-network.configuration/ /opt/xen/cray/

The configuration files still point to /opt/xen/cray, which will always be our working version of the cluster. Snapshots are kept in separate directories as shown above, so we can revert to a previous version at any time simply by copying the appropriate directory back to /opt/xen/cray.
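
As an aside, reverting to this snapshot later would be a matter of halting all the machines and copying the snapshot back over the working directory, for instance with a command like this on the host (the --delete option makes the working copy an exact mirror of the snapshot, removing anything that is not in it):

angelv@yepes:~$ sudo rsync -azv --delete /opt/xen/cray-network.configuration/ /opt/xen/cray/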

We will also create a small script to help us start the cluster, which we will save as /etc/xen/cray/start-cluster.sh with the following contents:

  
#!/bin/sh
xm create /etc/xen/cray/master.cfg
echo "waiting for master to start ...."
sleep 10
xm create /etc/xen/cray/slave1.cfg
xm create /etc/xen/cray/slave2.cfg
xm create /etc/xen/cray/slave3.cfg
xm create /etc/xen/cray/slave4.cfg

We just have to make the script executable and run it to start the cluster. Note that it uses the xm create command without the -c option, so we will not be connected to any console. After the five machines have booted, we can connect to them either with the xm console command or by ssh-ing to the master node.

  
angelv@yepes:~$ sudo chmod 755 /etc/xen/cray/start-cluster.sh
angelv@yepes:~$ sudo /etc/xen/cray/start-cluster.sh
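
Before connecting, we can check from the host that all five guest domains are actually running; they should appear alongside Domain-0 in the output of (the exact columns will depend on your Xen version):

angelv@yepes:~$ sudo xm list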

NOTE: sometimes the loop devices on the host remain busy when they should not. If the system complains about a loop device being busy, or if we simply want to check for (and free) busy loop devices before starting the cluster, we can do so with:

  
angelv@yepes:~$ for file in /dev/loop[0-9]* ; do sudo losetup $file ; done     # show which loop devices are still set up (unused ones just report an error)
angelv@yepes:~$ for file in /dev/loop[0-9]* ; do sudo losetup -d $file ; done  # detach them

Basic Cluster Configuration

A cluster should be something more than just a collection of nodes: the idea is to make those nodes behave, as much as possible, as if they were a single machine. To help us get closer to that goal we will install the C3 (Cluster Command and Control) tool suite and the Modules package, which makes it easier to work with different versions of software. We will also deal with something smaller but important: the configuration of the time zone.

C3 Installation

The Cluster Command and Control (C3) tool suite "implements a number of command line based tools that have been shown to increase system manager scalability by reducing time and effort to operate and manage the cluster". By reading the installation instructions, we see that we will need to configure rsync, perl, and rsh (in a production cluster you should probably consider using ssh instead, but for the moment we will configure it with rsh, even for the root account).

In the master node we install these packages and put the RPMs in the /cshare directory for the slaves to access (remember that we do not have Internet access from the slaves):

  
-bash-3.00# yum install rsync rsh rsh-server xinetd
-bash-3.00# cp /var/cache/yum/base/packages/rsync-2.6.3-1.i386.rpm /cshare/
-bash-3.00# cp /var/cache/yum/base/packages/rsh-* /cshare/
-bash-3.00# cp /var/cache/yum/base/packages/xinetd-2.3.13-4.4E.1.i386.rpm /cshare/

Then, in the slaves we first install these RPMs (with the command rpm -ivh /cshare/*rpm) and then modify the files /etc/xinetd.d/rsh, /etc/securetty, /etc/pam.d/rsh, and /etc/hosts.equiv as described below. Since the slaves have no editors, you can prepare the files in the master, put them in the /cshare directory, and then copy them into place from within each slave (a possible sequence of copy commands is sketched after the file contents below):

  • In the file /etc/xinetd.d/rsh change the line disable = yes to disable = no
  • In the file /etc/securetty add a line containing rsh
  • In the file /etc/pam.d/rsh change the line auth required pam_rhosts_auth.so by adding at the end hosts_equiv_rootok

Also, create the file /etc/hosts.equiv, containing:

  
boldo
slave1
slave2
slave3
slave4
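
Copying the prepared files into place on each slave could then look something like this (the names used under /cshare are just an example; use whatever names you gave the files when preparing them in the master):

-bash-3.00# cp /cshare/rsh.xinetd /etc/xinetd.d/rsh
-bash-3.00# cp /cshare/securetty /etc/securetty
-bash-3.00# cp /cshare/rsh.pam /etc/pam.d/rsh
-bash-3.00# cp /cshare/hosts.equiv /etc/hosts.equiv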

Then (remember, only in the slaves) we start xinetd:

  
-bash-3.00# service xinetd start

We are now ready to install C3 itself in the master node:

  
-bash-3.00# wget -nd http://www.csm.ornl.gov/torc/C3/Software/4.0.1/c3-4.0.1.tar.gz
-bash-3.00# tar -zxf c3-4.0.1.tar.gz
-bash-3.00# cd c3-4.0.1
-bash-3.00# ./Install-c3
-bash-3.00# ln -s /opt/c3-4/c[^0-9]* /usr/local/bin/

We create the configuration file /etc/c3.conf with the following contents (the first entry inside the cluster block is the head node, given here with its cluster-internal IP address, and the remaining entries are the compute nodes):

  
cluster boldo {
  boldo:192.168.1.10
  slave[1-4]
}

Following the installation instructions, we add the following line to the /etc/profile file in the master node:

  
export C3_RSH=rsh

After starting a new session, we can verify that cexec works fine for user root as well as for user angelv, for example by running the command cexec uname -a:

  
[angelv@boldo ~]$ cexec uname -a
************************* boldo *************************
--------- slave1---------
Linux slave1 2.6.12.6-xenU #2 SMP Thu Aug 17 10:30:05 WEST 2006 i686 i686 i386 GNU/Linux
--------- slave2---------
Linux slave2 2.6.12.6-xenU #2 SMP Thu Aug 17 10:30:05 WEST 2006 i686 i686 i386 GNU/Linux
--------- slave3---------
Linux slave3 2.6.12.6-xenU #2 SMP Thu Aug 17 10:30:05 WEST 2006 i686 i686 i386 GNU/Linux
--------- slave4---------
Linux slave4 2.6.12.6-xenU #2 SMP Thu Aug 17 10:30:05 WEST 2006 i686 i686 i386 GNU/Linux
[angelv@boldo ~]$ 

In order to make use of the ckill command, we need Perl installed in the slaves, so we do the following (in the master node):

  
-bash-3.00# cp /var/cache/yum/base/packages/perl* /cshare/
-bash-3.00# cexec mkdir /opt/c3-4
-bash-3.00# cpush /opt/c3-4/ckillnode
-bash-3.00# cexec rpm -ivh /cshare/perl-*

Now, from the master node, we can verify that ckill works without problems:

  
[angelv@boldo ~]$ cexec sleep 120 &      # start a long-lived process on every node
[angelv@boldo ~]$ cexec ps -u angelv     # the sleep processes should show up on all nodes
[angelv@boldo ~]$ ckill sleep            # kill them by name across the cluster
[angelv@boldo ~]$ cexec ps -u angelv     # and now they should be gone

For the moment we are not interested in cpushimage, so that is all we have to do for C3. It is a small set of tools, but very useful for daily maintenance, especially in large clusters, where you can define subsets of nodes on which to execute commands (see the documentation on the syntax of the c3.conf file if you are interested in this).
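
Just to give a flavor of that feature (the cluster names and node lists here are purely illustrative, and you should check the c3.conf and cexec man pages shipped with C3 for the exact syntax supported by your version), the configuration file can define more than one cluster block, and the tools can then be pointed at a particular one:

cluster boldo {
  boldo:192.168.1.10
  slave[1-4]
}

cluster front {
  boldo:192.168.1.10
  slave[1-2]
}

With something like this in place, a command such as cexec front: uname -a should address only the nodes listed in the front block, while plain cexec keeps using the default (first) cluster defined in the file.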

Configuration Of The Time Zone

The images downloaded from Jailtime.org have the time zone set to EDT. Chances are that your time zone is different, so you will probably want to change it. To do so, we can follow the steps in this Red Hat page, starting in the master node. First, we create the file /etc/sysconfig/clock with the following contents (obviously, you should adapt it to suit your location):

  
ZONE="Atlantic/Canary"
UTC=false
ARC=false

Next, using the C3 commands we just installed, we do:

  
-bash-3.00# ln -sf /usr/share/zoneinfo/Atlantic/Canary /etc/localtime
-bash-3.00# cpush /etc/sysconfig/clock
-bash-3.00# cexec ln -sf /usr/share/zoneinfo/Atlantic/Canary /etc/localtime
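
A quick way to confirm that the change took effect everywhere is to compare the date reported by the master and by all the slaves (the output will of course depend on when you run it):

-bash-3.00# date
-bash-3.00# cexec date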

If we now want to restart our cluster, we can also make use of the C3 commands. To stop all the nodes of the cluster in an orderly way, we just have to run the following in the master node (remember this recipe for the future); afterwards the cluster can be started again from the host with the start-cluster.sh script we created earlier:

  
-bash-3.00# cshutdown t 0 -h
-bash-3.00# halt 

Note: In a real cluster you would want to install something like NTP on your nodes so that time is kept synchronized across the cluster, but in our Xen virtual cluster this is not necessary, since all the virtual machines have the same time as the host machine.

Installation of Modules

The Environment Modules package "provides for the dynamic modification of a user's environment via modulefiles", which can prove very useful in a cluster: for example, we can install different parallel programming libraries and switch from one to another with a single command, without having to modify environment variables by hand. Installation is easy (we also need to install Tcl, since the modules scripts use it):

  
-bash-3.00# wget -nd http://kent.dl.sourceforge.net/sourceforge/modules/modules-3.2.3.tar.gz
-bash-3.00# yum install tcl tcl-devel 
-bash-3.00# tar -zxf modules-3.2.3.tar.gz
-bash-3.00# cd modules-3.2.3
-bash-3.00# ./configure 
-bash-3.00# make
-bash-3.00# make install
-bash-3.00# cd /usr/local/Modules
-bash-3.00# ln -s 3.2.3 default

Now, for the initial configuration of the package we need to copy some dot files; at the end we will also recreate the angelv user account so that it picks up the changes to the /etc/skel directory:

  
-bash-3.00# pwd
/root/modules-3.2.3
-bash-3.00# cp /etc/bashrc /etc/bashrc.OLD
-bash-3.00# cp /etc/profile /etc/profile.OLD
-bash-3.00# cp /etc/skel/.bash_profile /etc/skel/.bash_profile.OLD

-bash-3.00# cat etc/global/bashrc /etc/bashrc.OLD > /etc/bashrc

There appears to be an error in the resulting /etc/bashrc: in line 13 we should change $MODULE_VERSION to /$MODULE_VERSION (the leading / was missing). We continue:

  
-bash-3.00# cat etc/global/profile /etc/profile.OLD > /etc/profile
-bash-3.00# cp etc/global/profile.modules /etc/profile.modules
-bash-3.00# cat etc/skel/.profile /etc/skel/.bash_profile.OLD > /etc/skel/.bash_profile

-bash-3.00# cpush /etc/bashrc
-bash-3.00# cpush /etc/profile
-bash-3.00# cpush /etc/profile.modules

-bash-3.00# cp /var/cache/yum/base/packages/tc* /cshare/
-bash-3.00# cexec rpm -ivh /cshare/tc*rpm

-bash-3.00# cd /cshare/
-bash-3.00# tar -cPf modules-dist /usr/local/Modules
-bash-3.00# cexec tar -xPf /cshare/modules-dist

-bash-3.00# userdel -r angelv
-bash-3.00# useradd angelv
-bash-3.00# passwd angelv

OK, that is all we need for the moment. Note that this is a very basic configuration, and later on we should look into the modified files (especially /etc/profile) to tailor them to our needs. Note as well that we do not need to distribute the file /etc/skel/.bash_profile to the other nodes, as user accounts are always created in the master node. Now we can verify that the Modules package is working as intended (as user angelv) with the following commands:

 
[angelv@boldo ~]$ module load modules
[angelv@boldo ~]$ module avail

In the following section, when installing MPICH, we will see how to create a modulefile and how to instruct the users to change their environment by using modules instead of modifying the environment variables directly.
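
To give a flavor of what that will look like for users, once a modulefile for MPICH is in place the typical workflow is something along these lines (the module name mpich here is hypothetical; the real name will depend on how we write the modulefile in the next section):

[angelv@boldo ~]$ module avail            # list the modulefiles the system knows about
[angelv@boldo ~]$ module load mpich       # adjust PATH, MANPATH, etc. for MPICH
[angelv@boldo ~]$ module list             # show the currently loaded modules
[angelv@boldo ~]$ module unload mpich     # undo the changes again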
