Select News

The news in this category has been selected by us because we thought it would be interesting to hard-core cluster geeks. Of course, you don't have to be a cluster geek to read the news stories.

Some help for the multi-core world: KNEM

The Open MPI Team has announced the release of Open MPI version 1.5. This release represents over a year of research, development, and testing. Open MPI uses a dual release cycle that includes a "super stable" (currently 1.4 series) and the recent "feature release" (1.5 series). The Open MPI release methodology is explained as follows:

  • Even minor release numbers are part of "super-stable" release series (e.g., v1.4.0). Releases in super stable series are well-tested, time-tested, and mature. Such releases are recommended for production sites. Changes between subsequent releases in super stable series are expected to be fairly small.
  • Odd minor release numbers are part of "feature" release series (e.g., v1.5.0). Releases in feature series are well-tested, but they are not necessarily time-tested or as mature as super stable releases. Changes between subsequent releases in feature series may be large.

According to the team, the v1.5 series will eventually morph into the next "super stable" series, v1.6, at which time they will start a new "feature" series (v1.7).

One feature of note in the 1.5 series is the inclusion of KNEM, a Linux kernel module enabling high-performance intra-node MPI communication for large messages (i.e., improving large-message performance within a single multi-core node). KNEM is also used by MPICH2 (since version 1.1.1).
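For those curious how KNEM typically comes into play, here is a rough sketch. The install prefix and the `btl_sm_use_knem` parameter name are assumptions on my part; check the Open MPI and KNEM documentation for your exact versions.

```shell
# Sketch only: enabling KNEM for Open MPI's shared-memory transport.
# Paths and parameter names are assumptions -- verify against your docs.

# 1. Load the KNEM kernel module; it exposes the /dev/knem device.
sudo modprobe knem

# 2. Build Open MPI with KNEM support (assumed install prefix).
./configure --with-knem=/opt/knem
make && sudo make install

# 3. Run with KNEM enabled for large intra-node messages.
mpirun --mca btl_sm_use_knem 1 -np 8 ./my_mpi_app
```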

You can find both the 1.4 and 1.5 series, along with the full change log, on the Open MPI website.

Busy, busy, busy. I have been testing GP-GPUs and a 6-core Gulftown processor, and finalizing a new Limulus case. I'll have more benchmarks posted real soon.

I wanted to mention one thing: head over to Joe Landman's blog and read about the utter failure of Corsair SSDs. If you are in the market for SSDs, I would look at other companies. According to Joe, they had chances to make it right, but have not responded.

Expect some cool news out of the NVidia GPU Conference next week. Unfortunately, I won't be attending. I will probably attend the one-day HPC Financial Markets event in New York City. This show used to be called "High Performance on Wall Street." It is basically the same event, with a small but free exhibit. Maybe I'll see you there. The Intel Developer Forum is going on as well, with news about the new Sandy Bridge architecture.

Finally, you may not know, but I am on Twitter. After some hesitation, I think it has value for posting and tracking headlines and thoughts from people with similar interests. I don't "tweet" that much, but when I do I try to make worthwhile posts. My personal life is pretty boring, so I'll stick with HPC.

Is it time to dump your cluster?

An interesting recent announcement from Werner Vogels, CTO at Amazon: "Today, Amazon Web Services took a very important step in unlocking the advantages of cloud computing for a very important application area. Cluster Compute Instances for Amazon EC2 are a new instance type specifically designed for High Performance Computing applications." HPC resources will be available as "Cluster Compute Instances". The current instance type provides:

  • 23 GB of memory
  • 33.5 EC2 Compute Units (2 x Intel Xeon X5570, quad-core “Nehalem” architecture)
  • 1690 GB of instance storage
  • 64-bit platform
  • I/O Performance: Very High (10 Gigabit Ethernet)

There is a default usage limit of eight instances for this instance type (providing 64 cores); you can request more from Amazon. As for pricing, the reported price is $1.60 per instance per hour, or $12.80 per hour for the full eight-instance cluster. Not a bad price if you want to run a few jobs (even overflow work) without investing in new hardware. As I recall, Grid Engine can even be used to submit jobs to your EC2 account.
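Just to sanity-check the math on those reported numbers, here is a quick back-of-the-envelope calculation (all figures come from the announcement above; the script itself is just arithmetic):

```shell
# Back-of-the-envelope cost check for the reported EC2 pricing.
instances=8       # default usage limit
price=1.60        # USD per instance-hour (reported)
cores=$((instances * 8))   # 2 x quad-core Xeon X5570 per instance

cluster=$(awk -v n="$instances" -v p="$price" 'BEGIN { printf "%.2f", n * p }')
per_core=$(awk -v c="$cluster" -v k="$cores" 'BEGIN { printf "%.2f", c / k }')
echo "$cores cores for \$$cluster/hour (\$$per_core per core-hour)"
# -> 64 cores for $12.80/hour ($0.20 per core-hour)
```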

In terms of performance, LINPACK (HPL) results are on par with similar clusters built with 10 GigE. See Amazon's High Performance Computing (HPC) page for more information.

From the start updating department

One of my favorite projects, Open-MX, just announced the final release of version 1.3.0. There are only some very minor changes since the last release candidate, mostly a fix for a latency regression. The 1.2.x branch is officially no longer maintained, and upgrading to 1.3.0 is strongly recommended.

What is Open-MX, you ask?

Open-MX is a high-performance implementation of the Myrinet Express message-passing stack over generic Ethernet networks. It provides application-level and wire-protocol compatibility with the native MXoE (Myrinet Express over Ethernet) stack.

The following middleware are known to work flawlessly on Open-MX using their native MX backend thanks to the ABI and API compatibility: Open MPI, Argonne's MPICH2/Nemesis, Myricom's MPICH-MX and MPICH2-MX, PVFS2, Intel MPI (using the new TMI interface), Platform MPI (formerly known as HP-MPI), and NewMadeleine.

As soon as time allows, I plan on doing some testing on the new Open-MX version. I have used previous versions and they work quite well.
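If you want to kick the tires yourself, a quick smoke test of an Open-MX install might look something like the following. Tool names such as `omx_init` and `omx_info` come from the Open-MX distribution, but treat the exact paths and flags as assumptions for your setup.

```shell
# Sketch only: bringing up Open-MX and running an MPI job over it.
# Paths, tool names, and flags are assumptions -- check the Open-MX docs.

sudo /opt/open-mx/sbin/omx_init start   # load the open-mx kernel driver
/opt/open-mx/bin/omx_info               # list interfaces and known peers

# Thanks to MX ABI/API compatibility, Open MPI's MX support works as-is:
mpirun --mca pml cm --mca mtl mx -np 2 ./my_mpi_app
```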




Creative Commons License
©2005-2019 Seagrove LLC. Some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License. The Cluster Monkey Logo and Monkey Character are Trademarks of Seagrove LLC.