The news in this category has been selected by us because we thought it would be interesting to hard-core cluster geeks. Of course, you don't have to be a cluster geek to read the news stories.
- Written by Douglas Eadline
- Hits: 4677
From the start updating department
One of my favorite projects, Open-MX, has just announced the final release of version 1.3.0. There are only some very minor changes since the last release candidate, mostly a fix for a latency regression. The 1.2.x branch is officially no longer maintained, and upgrading to 1.3.0 is strongly recommended.
What is Open-MX, you ask? Open-MX is a high-performance implementation of the Myrinet Express message-passing stack over generic Ethernet networks. It provides application-level and wire-protocol compatibility with the native MXoE (Myrinet Express over Ethernet) stack.
The following middleware are known to work flawlessly on Open-MX using their native MX backend, thanks to the ABI and API compatibility: Open MPI, Argonne's MPICH2/Nemesis, Myricom's MPICH-MX and MPICH2-MX, PVFS2, Intel MPI (using the new TMI interface), Platform MPI (formerly known as HP-MPI), and NewMadeleine.

As soon as time allows, I plan on doing some testing on the new Open-MX version. I have used previous versions and they work quite well.
- Written by Administrator
- Hits: 1308
QLogic Introduces Automated InfiniBand Fabric Optimization with New HPC Management Tools

Virtual Fabrics, Adaptive Routing and Dispersive Routing Top Industry-Leading List of Features that Ensure Maximum Efficiency and Performance in New InfiniBand Fabric Software Release

ALISO VIEJO, Calif., May 19, 2010 -- Leveraging its unique, system-level understanding of communications fabrics, QLogic Corp. (Nasdaq: QLGC) has leapfrogged its competition with InfiniBand® Fabric Suite (IFS) 6.0, a new version of its fabric management software package that enables users to obtain the highest fabric performance, the highest communications efficiency, and the lowest management costs for high performance computing (HPC) clusters of any size.

The exceptional price/performance delivered by HPC clusters has enabled companies to build increasingly larger systems to handle their complex simulation requirements. But larger clusters have unique management challenges: as the cluster scales, communications among applications and nodes become more complex, and the potential for inefficiency, bottlenecks, and failures grows dramatically.

IFS 6.0 incorporates Virtual Fabrics configurations with application-specific Class-of-Service (CoS), Adaptive Routing and Dispersive Routing, performance-enhanced versions of vendor-specific MPI libraries, and support for torus and mesh network topologies. The result is a comprehensive suite of advanced features that network managers can use to maximize throughput, eliminate the effects of path congestion, and automate Quality-of-Service (QoS) on a per-application basis. Unlike other fabric managers on the market, IFS 6.0's Adaptive and Dispersive Routing capabilities actually increase network routing intelligence as the number of nodes and switches scales.
"Effective fabric management has become the most important factor in maximizing performance in an HPC cluster investment, and as clusters scale, issues like congestion mitigation and QoS can make a big difference in whether the fabric performs up to its potential," said Addison Snell, president of InterSect360 Research. "With IFS 6.0, QLogic has addressed all of the major fabric management issues in a product that in many ways goes beyond what others are offering."

IFS offers the following key features:

- Virtual Fabrics combined with application-specific CoS, which automatically dedicates classes of service within the fabric to ensure the desired level of bandwidth and appropriate priority is applied to each application. In addition, the Virtual Fabrics capability helps eliminate manual provisioning of application services across the fabric, significantly reducing management time and costs.
- Adaptive Routing continually monitors application messaging patterns and selects the optimum path for each traffic flow, eliminating slowdowns caused by pathway bottlenecks.
- Dispersive Routing load-balances traffic among multiple pathways and uses QLogic® Performance Scaled Messaging (PSM) to automatically ensure that packets arrive at their destination for rapid processing. Dispersive Routing leverages the entire fabric to ensure maximum communications performance for all jobs, even in the presence of other messaging-intensive applications.
- Full leverage of vendor-specific message passing interface (MPI) libraries maximizes MPI application performance. All supported MPIs can take advantage of IFS's pipelined data transfer mechanism, which was specifically designed for MPI communication semantics, as well as additional enhancements such as Dispersive Routing.
- Full support for additional HPC network topologies, including torus and mesh as well as fat tree, with enhanced capabilities for failure handling.
Alternative topologies like torus and mesh help users reduce networking costs as clusters scale beyond a few hundred nodes, and IFS 6.0 ensures that these users have full access to advanced traffic management features in these complex networking environments. This unique combination of features ensures that HPC customers will obtain maximum performance and efficiency from their cluster investments while simplifying management.

"With our long history as a key developer of connectivity solutions for large-scale Fibre Channel, Ethernet and InfiniBand networks, QLogic uniquely appreciates the importance of fabric management in HPC environments," said Jesse Parker, vice president and general manager, Network Solutions Group, QLogic. "By automating congestion management, load balancing, and class-of-service assignments, IFS 6.0 is the first product that delivers the tools network managers need to get the most out of their InfiniBand fabric and compute cluster investments."

"Clients using high-performance computing clusters need more efficient systems and better resource utilization, while reducing operating costs," said Glenn Keels, director of marketing, Scalable Computing and Infrastructure organization, HP. "The combination of HP Cluster Platforms and QLogic InfiniBand Fabric Suite enables our clients to achieve the best results possible for their most demanding high-performance computing applications."
- Written by Douglas Eadline
- Hits: 4592
From the in your neighborhood department
hwloc provides command line tools and a C API to obtain the hierarchical map of key computing elements, such as NUMA memory nodes, shared caches, processor sockets, processor cores, and processor "threads". hwloc also gathers various attributes such as cache and memory information, and is portable across a variety of operating systems and platforms.
The hwloc team considers version 1.0 to be the first production-quality release that is suitable for widespread adoption. Please send your feedback on hwloc experiences to our mailing lists (see the web site, above).
Thanks team, keep up the good work, and don't go anywhere.
- Written by Douglas Eadline
- Hits: 4525
Read about three new processors hitting the market and nothing about basketball
I waited to make a joint news story about the new processors that were due to arrive this month. In this way, you could click from here to some good reviews and insights rather than hunting on the Internet. We are now permanent residents of multi-core-ville. Remember those single core processors? Seems like a while ago. Although, I bet you still use hardware that has some of these "old" processors. Of course the Intel and AMD announcements are pointed toward servers, but the goodness always trickles down. Other than making the Internet faster, I am still trying to figure out what Joe or Jane Sixpack will do with 6-cores in their bargain of the week lap/desktop from the big box store.
- Written by Douglas Eadline
- Hits: 5317
In case you were not paying attention
I usually don't post things vendors send to ClusterMonkey because we are not a big time news site (nor do I care for marketing piffle). I will at times summarize some important news and events with enough links to send you on your way to HPC enlightenment. Each year NVidia sends me a year in review, which is a good summary of Tesla HPC events -- complete with many URLs so readers can explore further. [A note to vendors: Company news (aka press releases) with URLs, good background, and no jargon may get posted here.] The NVidia round-up begins below:
NVIDIA Tesla - 2009 Year in Review

GPU Computing had a groundbreaking year in 2009. In just two and a half years from its launch, the Tesla brand has truly established itself in the HPC community. This wouldn't have happened without the efforts of real GPU Computing pioneers such as Prof. Wen-mei Hwu at the University of Illinois, who taught the very first courses in parallel programming on the GPU, and Prof. Satoshi Matsuoka at the Tokyo Institute of Technology, who put the first Tesla GPU-enabled supercomputer onto the Top 500 (Top 30, in fact), just one and a half years after we launched the brand.
A "tipping point" is defined as a level at which momentum for change becomes unstoppable -- we genuinely believe that we are witnessing the tipping point for GPUs in the high performance computing space, and the SC09 conference in Portland, Ore. in November cemented that belief... but we'll come to that in due course :)