Select News

The news in this category has been selected by us because we thought it would be interesting to hard-core cluster geeks. Of course, you don't have to be a cluster geek to read the news stories.

From the Open MPI Team -- go team

The Open MPI Team, representing a consortium of research, academic, and industry partners, is pleased to announce the release of Open MPI version 3.0.0.

v3.0.0 is the start of a new release series for Open MPI. Open MPI 3.0.0 enables MPI_THREAD_MULTIPLE by default, so a build option is no longer required to enable thread support. Additionally, the embedded PMIx runtime has been updated to 2.1.0 and the embedded hwloc has been updated to 1.11.7. There have been numerous other bug fixes and performance improvements. Version 3.0.0 can be downloaded from the main Open MPI web site.

From the "who knew" department

Some of the HPC mavens at CSC in Finland (CSC - IT Center for Science Ltd. is a non-profit, state-owned company administered by the Ministry of Education) got together and made a list of why they love Linux for Supercomputing. From the article:

Initially, few experts believed in the competitiveness of the Linux systems among the most powerful computing systems in the world. All doubts were, however, washed away at least by year 2008 when Roadrunner, built by IBM, reached the number one position on the supercomputing Top500 list.

The full article (and list) is here, and I think many readers would agree with the list (and have some points to add). The authors point out that the list is not exhaustive; they just picked their top five. Of course, some of us have known this for a while now.

And finally, the FUNET FTP server, operated by CSC, is where Linux was originally unleashed on the world. Nice how it all works out.

From the Pass the Messages Please department

The Open MPI team has been working hard on the version 2 release and it is here! While Cluster Monkey usually does not track software releases (maybe we should!), this is a significant upgrade to the venerable Open MPI project. The most important aspect of the new release is that it is not ABI compatible with the v1.10 series. That means applications built against the v1.10 series will not work with v2.0 of Open MPI; they will need to be recompiled against v2.0.

The Open MPI v2.0 announcement is reproduced below. Thanks and good work Open MPI team!

The Open MPI Team, representing a consortium of research, academic, and industry partners, is pleased to announce the release of Open MPI version 2.0.0.

v2.0.0 is a major new release series containing many new features and bug fixes. As a community, the Open MPI Team is incredibly thankful and appreciative of all the time, effort, and downright hard work contributed by its members and all of its users. Thank you all! We couldn't have done this without you!

From the language-not-the-movie(s) department

Two important Julia Language updates. First, a great interview over at RCE-Cast (while you are there, listen to the Singularity interview as well). Head over and take a listen.

What is Julia you ask? A good answer comes from the Julia team:

Julia is the open source programming language for data science and numerical computing that is taking many diverse areas such as finance, central banking, insurance, engineering, robotics, artificial intelligence, astrophysics, life sciences and many others by storm. Julia combines the functionality of quantitative environments such as Python and R with the speed of production languages like C++, Fortran and Java to solve big data and analytics problems. Julia delivers dramatic improvements in simplicity, speed, capacity and productivity for data scientists, quants and researchers who need to solve massive computation problems quickly and accurately. The number of Julia users has grown dramatically during the last five years – doubling every 9 months. Julia is taught at MIT, Stanford and dozens of universities worldwide, including MOOCs on Coursera and EdX.

Update: Intel releases ParallelAccelerator v0.2 for Julia 0.5

From the bad-play-on-words department

For those using Python to calculate asymptotes and other science and mathematical things, Intel® has added its speedy MKL (Math Kernel Library) to the mix. Called Intel® Distribution for Python* 2017 Beta, the release gives Python a big boost by using MKL and other libraries. From the web page: "The Beta product adds new Python packages like scikit-learn, mpi4py, numba, conda, tbb (Python interfaces to Intel Threading Building Blocks) and pyDAAL (Python interfaces to Intel Data Analytics Acceleration Library). The Beta also delivers performance improvements for NumPy/SciPy through linking with performance libraries like Intel MKL, Intel Message Passing Interface (Intel MPI), Intel TBB and Intel DAAL."

Beta users can look forward to the following features.

  • Includes NumPy, SciPy, scikit-learn, numba, Cython, pyDAAL
  • Performance accelerations via Intel® MKL, Intel MPI, Intel® TBB, Intel® DAAL
  • Easy, out-of-the-box access to performance
  • Free to download
  • Supports Python versions 2.7 and 3.5
  • Available on Windows*, Linux, and Mac OS

An Intel blog provides more information. There is also a Python profiling tool (beta) available.
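Whether a given NumPy install is actually picking up MKL is easy to check from Python itself. A quick sketch (the library names in the output are build-dependent; "mkl" appears for MKL-linked builds such as Intel's, while stock builds typically show OpenBLAS):

```python
import numpy as np

# Show which BLAS/LAPACK libraries this NumPy build was linked against.
# An MKL-enabled build (e.g., Intel Distribution for Python) mentions "mkl" here.
np.show_config()

# The acceleration is transparent: the same NumPy calls simply dispatch to
# the faster linked library. No source changes are needed.
a = np.arange(9.0).reshape(3, 3)
b = np.eye(3)
c = a @ b        # matrix multiply, routed to the linked BLAS
print(c.shape)   # (3, 3)
```

This is why the Beta can claim "out-of-the-box" performance: the speedup comes from relinking NumPy/SciPy, not from rewriting user code.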




Creative Commons License
©2005-2019 Copyright Seagrove LLC, Some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License. The Cluster Monkey Logo and Monkey Character are Trademarks of Seagrove LLC.