Select News

The news in this category has been selected by us because we thought it would be interesting to hard-core cluster geeks. Of course, you don't have to be a cluster geek to read the news stories.

From the Pass the Messages Please department

The Open MPI team has been working hard on the version 2 release and it is here! While Cluster Monkey usually does not track software releases (maybe we should!), this is a significant upgrade to the venerable Open MPI project. The most important aspect of the new release is that it is not ABI compatible with the v1.10 series. That means applications built against v1.10 will not work with v2.0 of Open MPI; they will need to be recompiled using v2.0.

The Open MPI v2.0 announcement is reproduced below. Thanks and good work Open MPI team!

The Open MPI Team, representing a consortium of research, academic, and industry partners, is pleased to announce the release of Open MPI version 2.0.0.

v2.0.0 is a major new release series containing many new features and bug fixes. As a community, the Open MPI Team is incredibly thankful and appreciative of all the time, effort, and downright hard work contributed by its members and all of its users. Thank you all! We couldn't have done this without you!
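
To make the recompile requirement concrete, here is a minimal MPI "hello world" sketch in Python using mpi4py (used here purely for illustration; the same point applies to any C or Fortran code linked against Open MPI). Because v2.0 breaks ABI compatibility with the v1.10 series, mpi4py and any other application built against the old libraries must be rebuilt against the v2.0 install.

    # Minimal mpi4py "hello world" -- a sketch, not part of the Open MPI release.
    # mpi4py links against libmpi, so it must be rebuilt against Open MPI v2.0.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD              # communicator containing all launched ranks
    rank = comm.Get_rank()             # this process's rank (0 .. size-1)
    size = comm.Get_size()             # total number of MPI processes

    print("Hello from rank %d of %d on %s"
          % (rank, size, MPI.Get_processor_name()))

Once rebuilt, it runs under the v2.0 launcher, e.g. mpirun -np 4 python hello.py.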

From the "who knew" department

Some of the HPC mavens at CSC in Finland (CSC - IT Center for Science Ltd. is a non-profit, state-owned company administered by the Ministry of Education) got together and made a list of why they love Linux for Supercomputing. From the article:

Initially, few experts believed in the competitiveness of the Linux systems among the most powerful computing systems in the world. All doubts were, however, washed away at least by year 2008 when Roadrunner, built by IBM, reached the number one position on the supercomputing Top500 list.

The full article (and list) is here, and I think many readers would agree with the list (and have some more points to add). The authors also point out that the list is not exhaustive; they just picked the top five. Of course, some of us have known this for a while now.

And finally, ftp.funet.fi is operated by CSC and is where Linux was originally unleashed on the world. Nice how it all works out.

From the bad-play-on-words department

For those using Python to calculate asymptotes and other science and mathematical things, Intel® has added its speedy MKL (Math Kernel Library) to the mix. Called the Intel® Distribution for Python* 2017 Beta, the beta release gives Python a big boost by using MKL and other libraries. From the web page: "The Beta product adds new Python packages like scikit-learn, mpi4py, numba, conda, tbb (Python interfaces to Intel Threading Building Blocks) and pyDAAL (Python interfaces to Intel Data Analytics Acceleration Library). The Beta also delivers performance improvements for NumPy/SciPy through linking with performance libraries like Intel MKL, Intel Message Passing Interface (Intel MPI), Intel TBB and Intel DAAL."

Beta users can look forward to the following features.

  • Includes NumPy, SciPy, scikit-learn, numba, Cython, pyDAAL
  • Performance accelerations via Intel® MKL, Intel MPI, Intel® TBB, Intel® DAAL
  • Easy, out-of-the-box access to performance
  • Free to download
  • Supports Python versions 2.7 and 3.5
  • Available on Windows*, Linux, and Mac OS

An Intel blog provides more information. There is also a Python profiling tool (beta) available.
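
As a quick way to see where the speedup comes from, the short sketch below (plain NumPy, nothing Intel-specific is assumed) prints which BLAS/LAPACK libraries a NumPy build is linked against and times a matrix multiply; under the Intel distribution the report should show MKL, which is what accelerates NumPy/SciPy.

    # Sketch: report the BLAS/LAPACK backing this NumPy build and time a matmul.
    # Under the Intel Distribution for Python, show_config() should list MKL.
    import time
    import numpy as np

    np.show_config()                   # prints the BLAS/LAPACK build information

    n = 2000
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)

    start = time.time()                # time.time() works on both Python 2.7 and 3.5
    c = np.dot(a, b)                   # dispatched to the underlying BLAS
    print("%dx%d matrix multiply took %.3f seconds" % (n, n, time.time() - start))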

A recent article has announced a breakthrough in quantum computing. The article, Crucial hurdle overcome in quantum computing, describes how a team at the University of New South Wales (UNSW) in Sydney, Australia has created a working quantum gate in silicon. This advance paves the way for quantum computing to become a reality in the years to come. Background on quantum computing can be found in this Cluster Monkey article: A Smidgen of Quantum Computing.

According to Dr. Menno Veldhorst, a UNSW Research Fellow and the lead author of the Nature paper:

"We've morphed those silicon transistors into quantum bits by ensuring that each has only one electron associated with it. We then store the binary code of 0 or 1 on the 'spin' of the electron, which is associated with the electron's tiny magnetic field."

From the best acronym of the day (BAD) department

The Adept project is bringing some metrics and tools to help optimize energy-efficient use of parallel technologies. According to the web site, "Adept builds on the expertise of software developers from high-performance computing (HPC) to exploit parallelism for performance, and on the expertise of Embedded systems engineers in managing energy usage. Adept is developing a tool that can guide software developers and help them to model and predict the power consumption and performance of parallel software and hardware."

Recently, Adept released a benchmark suite to help understand and measure power usage for HPC and embedded systems. The suite consists of a wide range of benchmarks covering both high-performance embedded and high-performance technical computing. The benchmarks are designed to characterize the efficiency (both in terms of performance and energy) of computer systems, from the hardware and system software stack to the compilers and programming models. More information about the benchmark suite can be found on the EPCC Blog Page.

Hopefully the ClusterMonkey crew will carve out some time to play with these tools and report back on their experiences.

