Sledgehammer HPC
- Written by Douglas Eadline
 
HPC without coding in MPI is possible, but only if your problem fits into one of several high-level frameworks.
[Note: The following updated article was originally published in Linux Magazine in June 2009. The background presented in this article has recently become relevant due to the resurgence of things like genetic algorithms and the rapid growth of MapReduce (Hadoop). It does not cover deep learning.]
Not all HPC applications are created in the same way. There are applications like Gromacs, Amber, OpenFOAM, etc. that allow domain specialists to input their problem into an HPC framework. Although there is some work required to "get the problem into the application," these are really application-specific solutions that do not require the end user to write a program. At the other end of the spectrum are user-written applications. The starting points for these problems include a compiler (C/C++ or Fortran), an MPI library, and other programming tools. The work involved can range from small to large because users must concern themselves with the "parallel aspects of the problem." Note: all application software started out at this point at some time in the past.
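To make that second starting point concrete, here is a minimal sketch of a user-written MPI program in C: a compiler, an MPI library, and nothing else. Everything beyond this point, decomposing the problem, moving data, and balancing work across nodes, is the programmer's responsibility. (The file name and compile line are illustrative; any MPI implementation such as Open MPI or MPICH provides an mpicc wrapper.)

    /* Minimal user-written MPI starting point (illustrative).
     * Build: mpicc hello.c -o hello
     * Run:   mpirun -np 4 ./hello
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id     */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

        printf("Process %d of %d reporting in\n", rank, size);

        MPI_Finalize();                         /* shut down cleanly     */
        return 0;
    }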
Answering The Nagging Apache Hadoop/Spark Question
- Written by Douglas Eadline
(or How to Avoid the Trough of Disillusionment)
 
A recent blog post, Why not so Hadoop?, is worth reading if you are interested in big data analytics, Hadoop, Spark, and all that. The article contains the 2015 Gartner Hype Cycle. The 2016 version is worth examining as well. Some points similar to those in the blog post can be made here:
- Big data was at the "Trough of Disillusionment" stage in 2014, but it does not appear in the 2015/16 Hype Cycles.
- The "Internet of Things" (a technology expected to fill the big data pipeline) was at the peak for two years and has now been given "platform status."
The Ignorance is Bliss Approach To Parallel Computing
- Written by Douglas Eadline
From the random thoughts department
[Note: The following updated article was originally published in Linux Magazine in June 2006 and offers some further thoughts on the concept of dynamic execution.]
In a previous article, I talked about programming large numbers of cluster nodes. By large, I mean somewhere around 10,000. To recap quickly, I pointed out that dependence on large numbers of things increases the chance that one of them will fail. I then proposed that it would be cheaper to develop software that can live with failure than to try to engineer hardware redundancy. Finally, I concluded that adapting to failure requires dynamic software. As opposed to statically scheduled programs, dynamic software adapts at run-time. The ultimate goal is to make cluster programming easier: focus more on the problem and less on the minutiae of message passing. (Not that there is anything wrong with message passing or MPI. At some level, messages (memory) need to be transferred between cluster nodes.)
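As a rough illustration (a sketch, not the design from the original article), the following MPI master/worker pattern contrasts dynamic with static scheduling: rank 0 hands out tasks one at a time as workers report back, so the assignment of work happens at run time rather than being fixed up front. The task count, tags, and the "work" itself are placeholders.

    /* Dynamic (run-time) task assignment with MPI: a master/worker sketch.
     * Rank 0 dispatches one task at a time as results arrive, so slow or
     * uneven workers do not stall a statically partitioned block of work. */
    #include <stdio.h>
    #include <mpi.h>

    #define NTASKS   100   /* placeholder task count */
    #define TAG_WORK 1
    #define TAG_DONE 2

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        if (rank == 0) {                     /* master: run-time scheduler */
            int next = 0, active = 0, result;
            MPI_Status st;
            /* prime each worker with one task */
            for (int w = 1; w < size && next < NTASKS; w++, next++, active++)
                MPI_Send(&next, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
            while (active > 0) {
                MPI_Recv(&result, 1, MPI_INT, MPI_ANY_SOURCE, TAG_DONE,
                         MPI_COMM_WORLD, &st);
                active--;
                if (next < NTASKS) {         /* hand out the next task now */
                    MPI_Send(&next, 1, MPI_INT, st.MPI_SOURCE, TAG_WORK,
                             MPI_COMM_WORLD);
                    next++; active++;
                }
            }
            int stop = -1;                   /* tell workers to exit */
            for (int w = 1; w < size; w++)
                MPI_Send(&stop, 1, MPI_INT, w, TAG_WORK, MPI_COMM_WORLD);
        } else {                             /* worker: process what arrives */
            int task;
            while (1) {
                MPI_Recv(&task, 1, MPI_INT, 0, TAG_WORK, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                if (task < 0) break;
                int result = task * task;    /* stand-in for real work */
                MPI_Send(&result, 1, MPI_INT, 0, TAG_DONE, MPI_COMM_WORLD);
            }
        }
        MPI_Finalize();
        return 0;
    }

Because assignment happens at run time, a slow node simply ends up with fewer tasks, and the same pattern can be extended to re-queue work from a failed worker, which is a first step toward the fault-tolerant, dynamic execution discussed above.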
Return of the Free Lunch (sort of)
- Written by Douglas Eadline
 
From the do-you-want-fries-with-that department
As processor frequencies continue to level off and mainstream processors keep sprouting cores, the end of the "frequency free lunch" has finally arrived. That is, in the good old days each new generation of processor would bring with it a faster system clock that would result in a "free" performance bump for many software applications -- no reprogramming needed. Can we ever get back to the good old days?
You (Still) Can't Always Get What You Want
- Written by Douglas Eadline
You can't always get what you want. But that doesn't stop me from asking. Besides, Buddha's got my back. Here's my HPC wish list.
[Note: A recent article by Al Geist, How To Kill A Supercomputer: Dirty Power, Cosmic Rays, and Bad Solder, reminds us that statistics at scale represents a big challenge for HPC. The following updated article was originally published in Linux Magazine in May 2006 and offers some further thoughts on this important issue.]
Twenty-five years ago I wrote a short article in a now-defunct parallel computing magazine (Parallelogram) entitled "How Will You Program 1000 Processors?" Back then, it was a good question that had no easy answer. Today, it's still a good question with no easy answer, except now it seems a bit more urgent as we step into the "multi-core" era. Indeed, when I originally wrote the article, using 1,000 processors was a far-off but real possibility. Today, 1,000 processors are a reality for many practitioners of high-performance computing (HPC). And as dual-cores (and now 18-core processors) hit the server room, effectively doubling processor counts, many more people will be joining the 1,000P club very soon.
So let's get adventurous and ask, "How will you program 10,000 processors?" As I realized twenty-five years ago, such a question may never really have a complete answer. In the history of computers, no one has ever answered such a question to my liking — even when considering ten processors. Of course, there are plenty of methods and ideas like threads, messages, barrier synchronization, and so on, but when I have to think more about the computer than about my problem, something is wrong.