Select News

The news in this category has been selected by us because we thought it would be interesting to hard-core cluster geeks. Of course, you don't have to be a cluster geek to read the news stories.

The garage sale (or should I say tag sale given the New England location) that is SiCortex is disheartening. I worked with SiCortex enough to know that it was "SiCortex" not "SciCortex" as so many were apt to write. I also knew that these were smart people. They were doing the good work. There was no smoke and mirrors, no shining up yet another mass market turd, just bright people delivering on a daring idea. We need more of this not less.

The idea was simple. Instead of doing HPC with clusters employing faster and hotter processors, use many power-efficient processors and a great interconnect. Oh yeah, and use Linux from the ground up. Keep it open, keep it right. And it worked quite well.

I have in the past talked with marketing people who seemed clueless about what they were really selling. Not SiCortex. They lived and breathed their technological value proposition. I could tell when I wrote a white paper for them. Theirs was not a "me too" product, nor was it another 1U server or blade with the latest x86 platform in it.

I suspect that the demise of SiCortex says more about the inability of the venture firms to fund the company than about its ability to sell supercomputers or push the envelope. They had not yet turned a profit, but seemed to be on their way. I wish the employees of SiCortex a good transition, and I thank them for being brave.


A few pieces of news crossed my way recently. First, AMD has released the x86 Open64 Compiler Suite (binary and source). This is a free-as-in-beer and free-as-in-speech compiler suite that is the basis for the PathScale compiler. AMD also provides a collection of libraries and HPC applications that can be built with the compiler (instructions on how to build the packages are provided).

While we are talking about compilers, I also found a nice bullet-point overview of OpenCL (pdf). If you recall, OpenCL is a new language that is designed to be portable across GPU and CPU architectures. It even has a simple FFT example. As I have said in the past, things like CUDA, OpenCL, and BrookGPU are nice, but they don't cover the cluster computing model. Still, it is a step in the right direction.

Finally, here are two papers (pdf) that discuss using the cloud for HPC. They even include benchmarks! Take a look at Benchmarking Amazon EC2 for High-performance Scientific Computing and Can Cloud Computing Reach The TOP500? Don't sell your cluster just yet.

Let's give a warm welcome to HPC Community. You may have noticed the web feeds from HPC Community on the right side of the main page. The simians here at ClusterMonkey are working with HPC Community to help build a bigger/stronger community and nothing builds community like free software! HPC Community is the home to a pile of cool software projects. The two most notable are Kusu and Lava. Of course, they have to be good because in addition to open code they have cool names and logos.

Kusu is the foundation for Platform Cluster Manager (previously known as Platform Open Cluster Stack, OCS 5). It is a standardized approach to easily build, manage, and use Linux clusters, and it is a freely available cluster distribution!

Platform Lava is an open-source, entry-level workload scheduler designed to meet a wide range of workload scheduling needs for clusters of up to 512 nodes.

Check out both projects and more at the HPC community site. And if you are wondering what the name Kusu and the little turtle are about, just ask Why the Turtle?

Recently, fellow Cluster Monkey Jeff Layton and I participated in a cluster planning podcast over at Research Computing and Engineering (RCE-Cast). Jeff and I offer up a few tips that may help you navigate your way through the maze of HPC cluster options and methods. Of course, the word "um" is not to be confused with some kind of cluster feature. Thanks to Brock Palen and Jeff Squyres for putting up with us.

A few tidbits that may help brighten your HPC day

A few links that you may find interesting. First, Dr. Dobb's is running an article on the new Larrabee API. If you have not heard of Larrabee, it is a new architecture from Intel aimed at the GPU market. The design involves many x86 cores (Pentium P54C) and vector processing. All cores will be cache coherent. It is a kind of cross between a multi-core processor and a GP-GPU from NVidia or AMD/ATI. Of course, HPC guys would never use a GPU for crunching numbers.

While we are talking about number-crunching GP-GPUs, NVidia has released CUDA version 2.2 with the CUDA-GDB debugger. That is right: for those rare programmers who might just need a debugger, a CUDA debugger with a GDB interface is available, with all the features you know and love, including breakpoints, watch variables, and state inspection, as well as additional functions for CUDA-specific features. There is also an update of the Visual Profiler for the GPU that supports, among other things, full measurement of memory bandwidth within a kernel. There are some other improvements as well.

Finally, our very own Jeff Layton has created a Nehalem Memory Cheat Sheet. Thanks to Dell for helping Jeff create this and thanks to Jeff for providing good clear information at a product launch.




Creative Commons License
©2005-2019 Copyright Seagrove LLC, Some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License. The Cluster Monkey Logo and Monkey Character are Trademarks of Seagrove LLC.