
Much has changed in the supercomputing arena. Even you can get in the game!

Recently, Sebastian Anthony wrote an article for ExtremeTech entitled What Can You Do With A Supercomputer? His conclusion was "not much," and for many people he is largely correct. A deeper look, however, may change the answer to "plenty."

He was mostly right when talking about the world's largest supercomputers. Indeed, one workable past definition of a supercomputer was "any computer with at least a six-digit price tag." That was largely true for years and created a rather daunting barrier to entry for those who needed to crunch numbers. The cost was due to an architectural wall between supercomputers and the rest of computing. These systems were designed to perform math very quickly using vector processors. It all worked rather well until the cost of fabrication made creating your own vector CPU prohibitively expensive.

The Commodity Juggernaut

The only way to justify today's high CPU fabrication costs is to sell a boat-load of processors. The traditional supercomputer market, measured in the hundreds of new systems per year, could not afford to keep spinning custom CPUs for such a small audience. At the same time, commodity x86 processors were getting faster due to competitive forces, which in turn created the higher volumes that justified the high fabrication costs.

The bottom line: commodity x86 processors got fast and cheap. Niche vector processors had trouble competing in this space, and virtually all the traditional supercomputer companies still in existence began selling parallel designs based on a cluster of commodity CPUs. The basic components were a compute node (consisting of a commodity CPU, memory, and possibly a hard disk), an interconnect, and some type of global storage. The interconnects could vary from the cheapest (slowest), which in the beginning was Fast Ethernet, to the more expensive (fastest) like early Myrinet or QSnet. Parallel programming was done using the Message Passing Interface (MPI) library for Fortran, C, and C++. This scalable approach opened up a whole new era of commodity supercomputing. The trend is clearly evident in the figure below, which shows the progression of processor family for each of the fastest 500 computers over the last 18 years (as ranked by the HPL benchmark).
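
To give a flavor of what MPI programming involves, here is a minimal sketch in C that splits a simple sum across processes and combines the partial results on one process. The program and the problem size are illustrative assumptions, not drawn from the article or any particular application; it only assumes a standard MPI implementation (e.g., MPICH or Open MPI) is installed.

/* Minimal MPI sketch: each process sums a slice of 1..1,000,000,
 * then the partial sums are combined on rank 0. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I?  */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes?  */

    long long local = 0, total = 0;
    for (long long i = rank + 1; i <= 1000000; i += size)
        local += i;                         /* this process's share */

    /* Add up the partial sums; the result lands on rank 0. */
    MPI_Reduce(&local, &total, 1, MPI_LONG_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %lld (computed by %d processes)\n", total, size);

    MPI_Finalize();
    return 0;
}

On a typical cluster such a program would be compiled with mpicc and launched across nodes with a command along the lines of mpirun -np 16 ./sum, though the exact invocation depends on the MPI implementation and any job scheduler in use.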

The ability to use commodity off-the-shelf parts lowered the barrier to entry significantly and allowed scientists and engineers to buy high performance computing power to fit their budget. Clusters of all sizes started showing up everywhere. The use of standard Linux distributions also made it possible to "roll your own" cluster with little more than some old x86 boxes. In effect, some level of "supercomputing" was available to the masses, as the distinction between high-end and low-end was essentially how many processors you could buy.

Somewhere in the mix, the term "supercomputer" began to fade and the term High Performance Computing, or HPC, fell into favor. (HPTC, or High Performance Technical Computing, is sometimes used.) This transition was largely due to the dissolving barrier between the traditional supercomputer and the cluster of commodity hardware. Typically, the cluster lowered the price-to-performance ratio by a factor of ten and reduced the cost of entry by at least ten times.


What Is A Supercomputer

Some difficulty in defining a "supercomputer" remains today. A simple definition is: supercomputing provides performance in excess of what a single machine can deliver. This definition is a bit weak, however, because adding multiple GPU units to a single server creates a very powerful system. Further, the advent of multi-core has allowed a substantial number of cores to be placed in a single server case. Perhaps a more workable definition is any computing that improves the performance of a single application beyond the fastest single processor or co-processor currently available on the market. That is, if you combine multiple CPUs/GPUs to increase Floating Point Operations per Second (FLOPS), then you have ventured into the world of HPC. The question of whether you have a "true supercomputer" seems to be fading because the performance is scalable.
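
A rough back-of-envelope calculation, using hypothetical numbers purely for illustration, shows why combining processors is the path to more FLOPS: a single core running at 3 GHz that can retire 4 floating point operations per cycle peaks at about 3 x 4 = 12 GFLOPS, so a 16-core node built from such processors has a theoretical peak near 16 x 12 = 192 GFLOPS, and adding a GPU or a second node scales the figure further. Real applications rarely reach theoretical peak, but the arithmetic makes the scaling argument clear: aggregating processors, rather than waiting for a faster single chip, is what moves the FLOPS needle.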

Basing the definition on some threshold of the world's fastest computers, like those on the Top500 List, has been suggested, but this approach is somewhat tenuous because the Top500 list is based on a single benchmark. Other machines, including those that can deliver large amounts of I/O, may not fare as well on the HPL benchmark and yet deliver record-breaking performance in other areas.

The definition of a supercomputer has become somewhat vague, and as a result picking a few fast systems to represent the spectrum of HPC computing is a bit disingenuous. Some other notions presented in the article also need to be cleared up. First, not all high-end HPC machines are water cooled. This may change, but it is certainly not the rule. Nor is there a need for a custom Linux kernel on most systems; standard Linux distributions, and the kernels they include, are more than adequate for most clusters. Some additional software is required, such as MPI libraries and administration tools, but for the most part stock Linux distributions work just fine. Interconnects can vary: today the most popular high-performance solution is InfiniBand, although a large number of production clusters use Gigabit Ethernet. The use of GPU-enhanced nodes is also becoming more common. The user can often pick and choose what works best for their "supercomputer." Of course, at the very high end there are custom components that aid in delivering the fastest possible performance, but they are often not required to get into the HPC game.

What Can I Do With A Supercomputer

As to the question, "What can I do with a supercomputer?" the answer is, "Like many things, it depends on what you want to do." Back when the first PCs arrived on the scene, the same question was asked, and a common answer was given: "not much." Then software was written and new markets developed, including games like Crysis 2, which, by the way, would have required a supercomputer back in the day.

If you have a need for crunching numbers, like those who programmed the first computers, then there is plenty you can do with HPC. A partial list is available, along with many freely available HPC Applications and Programming Tools. Many of the applications are for specialized technical domains and will not be of interest to the average computer user. These applications are, however, of great importance to students, teachers, researchers, scientists, and engineers who may have access to small or moderate HPC resources. And writing your own applications, while harder than most programming tasks, does not take an advanced degree. There are plenty of resources on Cluster Monkey to help you get started.

Closer to home, there are other interesting uses for HPC, many of which bring Artificial Intelligence (AI) into everyday use. In case you missed it, IBM's Watson moved supercomputing from chess playing to a whole other cognitive level. Many supercomputers, like Watson, started as large, costly custom machines that eventually paved the way for small, affordable machines or devices. The "Cloud" may also offer some interesting chances to use supercomputers. Have you talked to Siri lately?

Finally, the barrier to entry for personal HPC is getting lower every day. Consider The Limulus Project, an effort to bring true HPC capability to the desk-side market at a sub-$5000 (US) price point. The project includes both hardware and software and allows individuals to run real HPC applications and do meaningful development. Given the state of HPC, the question everyone should be asking is, "What can we do with 16+ cores, 200 GFLOPS of performance, and several TBytes of storage next to our desks?"

Only you can answer that question. Have at it.

Desk-side Personal Supercomputer