
Ah, we meet again at the Supercomputing Conference (SC). This year it's in lovely and warm Tampa, Florida. I wanted to start my SC06 blog by talking about some of the pre-conference press releases I have seen that I think are significant. I did bring a camera this year, so I hope to also post some pictures from the floor (or the bar). More importantly, however, I have managed to locate a half-dozen or so Starbucks around my hotel and the conference area. This is in stark contrast to SC05, when I had an extremely difficult time locating a Starbucks anywhere in Seattle :)

Location, location, location...

This year's SuperComputing conference is in lovely Tampa, Florida. The weather has been and should remain perfect: it's in the low 80s, sunny, and right on the water. But so far, those are about the only good things I can say about the location :)

The show floor, which is one of the main reasons for going, is smaller than the show floor at SC05 in Seattle. This has hurt a few of the exhibitors I have spoken with. In fact, I've seen some booths being set up in the hallway outside the exhibit floor (I don't know whether this is intentional or not).

In addition, there aren't enough hotels nearby to hold everyone, so the hotels are scattered all over downtown Tampa, and a number of people are staying near the airport or even farther away. But the SC06 committee has been doing a good job of providing buses to shuttle everyone to and from their hotels.

So needless to say, I'm not a big fan of the arrangements for the show, but Tampa is nice and warm. So for now I'll give the committee an 8 for effort and a 4 for results. We'll see if my scores improve the rest of the week.

Pre-SC06 Press Releases/Announcements

There are usually a ton of press releases at any SC show. But in the last few years, companies have started to put out press releases before the show. It helps keep their press releases from being drowned in the sea of releases at the show. It also makes my life a bit easier because I can review some of them before the show kicks into high gear. So let's take a look at some of what I consider to be the more interesting ones.

Let's go back about a week to when the releases started coming. Panasas was one of the first companies to make some significant announcements (IMHO). In this release they announced Version 3.0 of their storage operating environment, ActiveScale. The new version adds predictive self-management capabilities that scan the media and file system and proactively correct media defects. Furthermore, they improved the performance of ActiveScale by a factor of 3, to over 500 MB/s per client and up to 10 GB/s in aggregate.

Moreover, in the same press release Panasas announced two new products, the ActiveScale 3000 and the ActiveScale 5000. The ActiveScale 5000 is targeted at mixed cluster (batch) and workstation (interactive) environments that want a single storage fabric, and it can scale to hundreds of TBs. The ActiveScale 3000 is targeted at cluster (batch) environments; it can scale to 100 TB in a single rack, and combining multiple racks lets you scale to petabytes.

To me, what is significant is that Panasas is rapidly gaining popularity for high-performance storage for Linux clusters. Part of the reason for that popularity is that Panasas delivers very good performance while still being very easy to deploy, manage, and maintain. Plus, it is very scalable. Be sure to watch Panasas.

People have been saying that one of the barriers to making clusters more pervasive is the lack of good programming tools. Intel announced some new products to help the HPC community. First and foremost, they will launch the Intel Xeon 5300 series of processors, code-named "Clovertown." This is the first commodity quad-core processor. They also announced the Cluster Toolkit 3.0 with the Intel MPI Library, the Intel Math Kernel Library Cluster Edition, and the Intel Trace Analyzer and Collector. Finally, they announced Cluster OpenMP for the Intel compilers. It is a new product that "... extends OpenMP to be applicable to distributed memory clusters, helping OpenMP become a programming method that works well for dual-core and quad-core processors as well as clusters."
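To give a flavor of what that claim means, below is a minimal sketch of an ordinary OpenMP worksharing loop in C. It is my own illustration, not code from Intel's release; the idea behind Cluster OpenMP is that directive-based code in this style could be spread across the nodes of a distributed-memory cluster rather than being limited to the cores of a single box.

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void)
    {
        static double a[N], b[N];
        int i;

        /* Fill the input array. */
        for (i = 0; i < N; i++)
            a[i] = (double)i;

        /* A standard OpenMP worksharing loop: the iterations are divided
           among the available threads.  Per Intel's announcement, Cluster
           OpenMP aims to let this same directive style span cluster nodes. */
        #pragma omp parallel for
        for (i = 0; i < N; i++)
            b[i] = 2.0 * a[i];

        printf("b[%d] = %g, max threads = %d\n",
               N - 1, b[N - 1], omp_get_max_threads());
        return 0;
    }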

Also last week, SiCortex formally announced their new clusters. Their focus has been on reducing the power required for clusters while keeping performance as high as possible and maintaining a balance between compute and communication. They have used a 64-bit MIPS processor as the basis for a new chip that has six 64-bit processor cores, multiple memory controllers, a new high-performance cluster interconnect, and a PCI Express connection to storage and networking. According to the press release, "A complete SiCortex cluster node with DDR-2 memory consumes 15 watts of power, an order of magnitude less than the 250 watts used in a conventional cluster node." The SiCortex systems will use Linux as their OS. The company claims that, "Current Linux application software will operate on SiCortex systems without modification."

SiCortex is introducing two models. The first, the SC5832, is an enterprise-class system with 5,832 cores, 8 TB of memory, and 2.1 terabits per second of I/O capability. The company claims it will deliver up to 5.8 TFLOPS of performance in a single low-power cabinet (that works out to roughly 1 GFLOPS per core). The second model, the SC648, is targeted at departmental users (a smaller number of users) and offers up to 648 GFLOPS of performance with up to 864 GB of memory and 250 gigabits per second of I/O capability. The really interesting part is that all of this fits into less than half a rack (less than 21U) and uses so little power that it can be plugged into a single 110-volt wall socket. Perhaps this will be the holy grail of personal clusters that people, myself included, have been seeking for years.

The last announcement that I thought was significant was from NVIDIA. They announced the first C compiler environment for GPUs (Graphics Processing Units). Called CUDA, the environment allows developers to build applications for GPUs using the traditional C language rather than specialized languages such as Brook, Cg, or Sh. CUDA works on GeForce 8800 and future graphics cards and offers some unique features. For example, CUDA-enabled GPUs provide a Parallel Data Cache that allows up to 128 processor cores running at 1.35 GHz to cooperate with each other while computing (however, I don't know whether these cores have to be in a single node or can be spread across multiple nodes).
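To make that concrete, here is a minimal sketch of what a CUDA program looks like under the C-with-extensions model the announcement describes: a kernel function that runs on the GPU, launched from ordinary C code on the host. The names and sizes are my own illustration, not anything from NVIDIA's materials.

    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    /* Kernel: each GPU thread scales one element of the array. */
    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }

    int main(void)
    {
        const int n = 1 << 20;
        size_t bytes = n * sizeof(float);

        /* Ordinary C on the host side. */
        float *h_data = (float *)malloc(bytes);
        for (int i = 0; i < n; i++)
            h_data[i] = (float)i;

        /* Copy the data to the GPU, run the kernel, copy the result back. */
        float *d_data;
        cudaMalloc((void **)&d_data, bytes);
        cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);

        int threads = 128;
        int blocks = (n + threads - 1) / threads;
        scale<<<blocks, threads>>>(d_data, 2.0f, n);

        cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);
        printf("h_data[10] = %f\n", h_data[10]);

        cudaFree(d_data);
        free(h_data);
        return 0;
    }

The triple-angle-bracket launch syntax is the main extension over plain C; the rest is familiar host code, which is exactly the appeal NVIDIA is pitching.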

This announcement is significant because with CUDA you can now, perhaps, utilize GPUs for computation in addition to using them for playing those interesting, ah, "educational" applications. GPUs have the potential for great levels of performance, but they have required re-thinking algorithms to take advantage of that processing power. CUDA will now allow programmers to use GPUs with the standard C language. However, I'm curious whether you still have to re-think your algorithms to take advantage of the GPUs or whether CUDA helps in that regard. Hmm... Looks like I have a good excuse to get a GeForce 8800 card for Christmas.