Select News

The news in this category has been selected by us because we thought it would be interesting to hard-core cluster geeks. Of course, you don't have to be a cluster geek to read the news stories.

Busy, busy, busy. I have been testing GP-GPUs and a 6-core Gulftown processor, and finalizing a new Limulus case. I'll have more benchmarks posted real soon.

I wanted to mention one thing: head over to Joe Landman's blog and read about the utter failure of Corsair SSDs. If you are in the market for SSDs, I would be looking at other companies. According to Joe, they had chances to make it right, but have not responded.

Expect some cool news out of the NVIDIA GPU Conference next week. Unfortunately, I won't be attending. I will probably attend the one-day HPC Financial Markets event in New York City. This show used to be called "High Performance on Wall Street." It is basically the same event, with a small but free exhibit. Maybe I'll see you there. The Intel Developer Forum is going on as well, with news about the new Sandy Bridge architecture.

Finally, you may not know, but I am on Twitter. After some hesitation, I have found it valuable for posting and tracking headlines and thoughts from people with similar interests. I don't "tweet" that much, but when I do, I try to make worthwhile posts. My personal life is pretty boring, so I'll stick with HPC.

Is it time to dump your cluster?

An interesting recent announcement from Werner Vogels, CTO at Amazon.com: "Today, Amazon Web Services took a very important step in unlocking the advantages of cloud computing for a very important application area. Cluster Compute Instances for Amazon EC2 are a new instance type specifically designed for High Performance Computing applications." HPC resources will be available as "Cluster Compute Instances". The current instance type provides:

  • 23 GB of memory
  • 33.5 EC2 Compute Units (2 x Intel Xeon X5570, quad-core “Nehalem” architecture)
  • 1690 GB of instance storage
  • 64-bit platform
  • I/O Performance: Very High (10 Gigabit Ethernet)

There is a default usage limit of 8 instances (providing 64 cores) for this instance type; you can request more from Amazon. As for pricing, the reported price is $1.60 per instance per hour, or $12.80 per hour for the default eight-instance (64-core) cluster. Not a bad price if you want to run a few jobs (even overflow work) without investing in new hardware. As I recall, Grid Engine can even be used to submit jobs to your EC2 account.

In terms of performance, LINPACK (HPL) results are on par with similar clusters built with 10 GigE. See Amazon's High Performance Computing (HPC) page for more information.

From the start updating department

One of my favorite projects, Open-MX, just announced the final release of version 1.3.0. There are only some very minor changes since the last release candidate, mostly a fix for a latency regression. The 1.2.x branch is officially no longer maintained, and upgrading to 1.3.0 is strongly recommended.

What is Open-MX, you ask?

Open-MX is a high-performance implementation of the Myrinet Express message-passing stack over generic Ethernet networks. It provides application-level and wire-protocol compatibility with the native MXoE (Myrinet Express over Ethernet) stack.

The following middleware are known to work flawlessly on Open-MX using their native MX backend thanks to the ABI and API compatibility: Open MPI, Argonne's MPICH2/Nemesis, Myricom's MPICH-MX and MPICH2-MX, PVFS2, Intel MPI (using the new TMI interface), Platform MPI (formerly known as HP-MPI), and NewMadeleine.
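Because the compatibility is at the MX API/ABI level, application source code does not change at all. As a quick illustration (mine, not from the Open-MX announcement), a plain MPI program like the sketch below should run over Open-MX once your MPI library is told to use its MX transport; for an MX-enabled Open MPI build the usual approach is to select the MX MTL at launch time, though the exact flags depend on how your MPI was built.

/* hello.c -- plain MPI "hello", nothing Open-MX specific in the source.
 *
 * Build:  mpicc hello.c -o hello
 * Run over an MX-enabled Open MPI build (assumption about your setup):
 *   mpirun --mca pml cm --mca mtl mx -np 4 ./hello
 */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* my rank in the job     */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks  */

    printf("hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}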

As soon as time allows, I plan on doing some testing on the new Open-MX version. I have used previous versions and they work quite well.

QLogic Introduces Automated InfiniBand Fabric Optimization with New HPC Management Tools

Virtual Fabrics, Adaptive Routing and Dispersive Routing Top Industry-Leading List of Features that Ensure Maximum Efficiency and Performance in New InfiniBand Fabric Software Release

ALISO VIEJO, Calif., May 19, 2010 - Leveraging its unique, system-level understanding of communications fabrics, QLogic Corp. (Nasdaq: QLGC) has leapfrogged its competition with InfiniBand® Fabric Suite (IFS) 6.0, a new version of its fabric management software package that enables users to obtain the highest fabric performance, the highest communications efficiency, and the lowest management costs for high performance computing (HPC) clusters of any size.

The exceptional price/performance delivered by HPC clusters has enabled companies to build increasingly larger systems to handle their complex simulation requirements. But larger clusters have unique management challenges: as the cluster scales, communications among applications and nodes become more complex, and the potential for inefficiency, bottlenecks, and failures grows dramatically.

IFS 6.0 incorporates Virtual Fabrics configurations with application-specific Class-of-Service (CoS), Adaptive Routing and Dispersive Routing, performance-enhanced versions of vendor-specific MPI libraries, and support for torus and mesh network topologies. The result is a comprehensive suite of advanced features that network managers can use to maximize throughput, eliminate the effects of path congestion, and automate Quality-of-Service (QoS) on a per-application basis. Unlike other fabric managers on the market, IFS 6.0's Adaptive and Dispersive Routing capabilities actually increase network routing intelligence as the number of nodes and switches scales.

"Effective fabric management has become the most important factor in maximizing performance in an HPC cluster investment, and as clusters scale, issues like congestion mitigation and QoS can make a big difference in whether the fabric performs up to its potential," said Addison Snell, president of Intersect360 Research. "With IFS 6.0, QLogic has addressed all of the major fabric management issues in a product that in many ways goes beyond what others are offering."

IFS offers the following key features:

  • Virtual Fabrics combined with application-specific CoS, which automatically dedicates classes of service within the fabric to ensure the desired level of bandwidth and appropriate priority is applied to each application. In addition, the Virtual Fabrics capability helps eliminate manual provisioning of application services across the fabric, significantly reducing management time and costs.
  • Adaptive Routing continually monitors application messaging patterns and selects the optimum path for each traffic flow, eliminating slowdowns caused by pathway bottlenecks.
  • Dispersive Routing load-balances traffic among multiple pathways and uses QLogic® Performance Scaled Messaging (PSM) to automatically ensure that packets arrive at their destination for rapid processing. Dispersive Routing leverages the entire fabric to ensure maximum communications performance for all jobs, even in the presence of other messaging-intensive applications.
  • Full leverage of vendor-specific message passing interface (MPI) libraries maximizes MPI application performance. All supported MPIs can take advantage of IFS's pipelined data transfer mechanism, which was specifically designed for MPI communication semantics, as well as additional enhancements such as Dispersive Routing.
  • Full support for additional HPC network topologies, including torus and mesh as well as fat tree, with enhanced capabilities for failure handling. Alternative topologies like torus and mesh help users reduce networking costs as clusters scale beyond a few hundred nodes, and IFS 6.0 ensures that these users have full access to advanced traffic management features in these complex networking environments.

This unique combination of features ensures that HPC customers will obtain maximum performance and efficiency from their cluster investments while simplifying management.

"With our long history as a key developer of connectivity solutions for large-scale Fibre Channel, Ethernet and InfiniBand networks, QLogic uniquely appreciates the importance of fabric management in HPC environments," said Jesse Parker, vice president and general manager, Network Solutions Group, QLogic. "By automating congestion management, load balancing, and class-of-service assignments, IFS 6.0 is the first product that delivers the tools network managers need to get the most out of their InfiniBand fabric and compute cluster investments."

"Clients using high-performance computing clusters need more efficient systems and better resource utilization, while reducing operating costs," said Glenn Keels, director of marketing, Scalable Computing and Infrastructure organization, HP. "The combination of HP Cluster Platforms and QLogic InfiniBand Fabric Suite enables our clients to achieve the best results possible for their most demanding high-performance computing applications."

From the in your neighborhood department

The Hardware Locality (hwloc) team, which is affiliated with the nearby Open MPI team, has announced the release of hwloc version 1.0.

hwloc provides command-line tools and a C API to obtain the hierarchical map of key computing elements, such as NUMA memory nodes, shared caches, processor sockets, processor cores, and processor "threads". hwloc also gathers various attributes such as cache and memory information, and is portable across a variety of operating systems and platforms.

The hwloc team considers version 1.0 to be the first production-quality release that is suitable for widespread adoption. Please send your feedback on hwloc experiences to our mailing lists (see the web site, above).
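If you want a quick feel for the C API, here is a minimal sketch (mine, not from the announcement) that loads the topology of the local machine and counts cores and hardware threads. It assumes the hwloc library and headers are installed and links with -lhwloc; the lstopo tool that ships with hwloc renders the same hierarchy from the command line.

/* topo.c -- minimal hwloc example: count cores and hardware threads.
 *
 * Build:  gcc topo.c -o topo -lhwloc
 */
#include <stdio.h>
#include <hwloc.h>

int main(void)
{
    hwloc_topology_t topology;

    /* Allocate a topology object and discover the current machine. */
    hwloc_topology_init(&topology);
    hwloc_topology_load(topology);

    /* Count objects of a given type anywhere in the hierarchy. */
    int cores = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
    int pus   = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_PU);

    printf("%d cores, %d hardware threads\n", cores, pus);

    hwloc_topology_destroy(topology);
    return 0;
}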

Thanks team, keep up the good work, and don't go anywhere.


©2005-2023 Seagrove LLC. Some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International license. The Cluster Monkey Logo and Monkey Character are trademarks of Seagrove LLC.