Cluster Newbie
Don't know a thing about clusters? Not to worry. Prodigious cluster scrivener Robert Brown is here to present the basics in a clear and concise set of introductory columns (and then some). Welcome to high-tech hardball.
- Details
- Written by Robert G. Brown
- Hits: 12865
Ready for Real Parallel Computation, as if there were any other
In the last column we introduced the Parallel Virtual Machine (PVM) subroutine library, the original toolset that permitted users to convert a variety of computers on a single network into a "virtual supercomputer". We reviewed its history and discussed how it works, then turned our attention to what you might have to do to install it and make it work on your own cluster playground (which might well be a very simple Network of Workstations (NOW) cluster: ordinary workstations on an ordinary local area network).
- Details
- Written by Robert G. Brown
- Hits: 22769
Pass The Messages Please
The idea of a homemade parallel supercomputer predates the actual Beowulf project by years if not decades. In this column (and the next), we explore "the" message passing library that began it all and learn some important lessons that extend our knowledge of parallelism and scaling. We will do "real" parallel computing, using the message passing library that made the creation of Beowulf-style compute clusters possible: PVM. PVM stands for "Parallel Virtual Machine", and that's just what this library does -- it takes a collection of workstations or computers of pretty much any sort on a TCP/IP network and lets you "glue" them together into a parallel supercomputer.
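PVM itself is a C (and Fortran) library -- real PVM programs call routines like pvm_spawn(), pvm_send(), and pvm_recv() and link against libpvm3. As a language-neutral sketch of the master/worker message-passing pattern that PVM makes possible, here is the same idea using Python's multiprocessing queues as a stand-in for PVM's message channels (the task itself, a sum of squares, is just a toy):

```python
# Master/worker message passing, sketched with Python's multiprocessing
# as a stand-in for PVM's pvm_send()/pvm_recv().  Real PVM code would be
# C or Fortran linked against libpvm3.
from multiprocessing import Process, Queue

def worker(task_q, result_q):
    """Receive (index, chunk) messages, compute, send results back."""
    while True:
        msg = task_q.get()
        if msg is None:              # sentinel message: no more work
            break
        i, chunk = msg
        result_q.put((i, sum(x * x for x in chunk)))  # toy computation

def master(data, nworkers=4):
    task_q, result_q = Queue(), Queue()
    procs = [Process(target=worker, args=(task_q, result_q))
             for _ in range(nworkers)]
    for p in procs:
        p.start()
    # "Spawn" tasks: split the data and message one chunk to each worker
    chunks = [data[i::nworkers] for i in range(nworkers)]
    for i, chunk in enumerate(chunks):
        task_q.put((i, chunk))
    for _ in procs:
        task_q.put(None)             # one sentinel per worker
    # Gather the partial results, in whatever order they arrive
    total = sum(result_q.get()[1] for _ in chunks)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(master(list(range(1000))))  # sum of squares 0..999
```

The shape is the same as a PVM program: a master splits the work, messages it out, and gathers partial results; the workers loop on receive/compute/send until told to stop.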
- Details
- Written by Robert G. Brown
- Hits: 23382
That Free Lunch You Wanted...
Clustering seems almost too good to be true. If you have work that needs to be done in a hurry, buy ten systems and get it done in a tenth of the time. If only it worked with kids and the dishes. Alas, whether it's kids and dishes or cluster nodes and tasks, linear speedup on a divvied-up task is too good to be true, according to Amdahl's Law, which strictly limits the speedup your cluster can hope to achieve.
In the first two columns we explored parallel speedup on a simple NOW (Network of Workstations) style cluster using the provided task and taskmaster programs. In the last column, we observed some fairly compelling relationships between the amount of work we were doing in parallel on the nodes, the communications overhead associated with starting the jobs up and receiving their results, and the speedup of the job as we split it up across more and more nodes.
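The limit Amdahl's Law places on speedup is easy to compute directly. If a fraction f of the total work can be parallelized and the rest is serial, the speedup on p nodes is S(p) = 1 / ((1 - f) + f/p); as p grows without bound, S approaches 1/(1 - f). A minimal sketch:

```python
# Amdahl's Law: speedup on p nodes when only a fraction f of the work
# can be parallelized.  S(p) = 1 / ((1 - f) + f/p).
def amdahl_speedup(f, p):
    return 1.0 / ((1.0 - f) + f / p)

# Even with 90% parallelizable work, speedup saturates well below p:
for p in (1, 2, 10, 100, 1000):
    print(p, round(amdahl_speedup(0.9, p), 2))
# As p -> infinity, S -> 1/(1 - f) = 10 here: the serial 10% caps the gain,
# no matter how many nodes you buy.
```

This is why throwing ten systems at a job rarely gets it done in a tenth of the time: the serial fraction (and, in practice, communications overhead on top of it) sets a hard ceiling.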
- Details
- Written by Robert G. Brown
- Hits: 19147
In our previous installment, we started out by learning how to use pretty much any Linux LAN as the simplest sort of parallel compute cluster. In this column we continue our hands-on approach to learning about clusters and play with our archetypal parallel task on our starter cluster to learn when it runs efficiently and, just as importantly, when it runs inefficiently.
If you've been following along, in the last column we introduced cluster computing for the utter neophyte by "discovering" that nearly any Linux LAN can function as a cluster and do work in parallel. Following a few very general instructions, you were hopefully able to assemble (or realize that you already owned) a NOW (Network of Workstations) cluster, which is little more than a collection of unixoid workstations on a switched local area network (LAN). Using this cluster you ran a Genuine Parallel Task (tm).
- Details
- Written by Robert G. Brown
- Hits: 68388