From the Pass the Messages Please department

The Open MPI team has been working hard on the version 2 release and it is here! While Cluster Monkey usually does not track software releases (maybe we should!), this is a significant upgrade to the venerable Open MPI project. The most important aspect of the new release is that it is not ABI compatible with the v1.10 series. That means v1.10 applications will not work with v2.0 of Open MPI; applications will need to be re-compiled using v2.0.

The Open MPI v2.0 announcement is reproduced below. Thanks and good work Open MPI team!

The Open MPI Team, representing a consortium of research, academic, and industry partners, is pleased to announce the release of Open MPI version 2.0.0.

v2.0.0 is a major new release series containing many new features and bug fixes. As a community, the Open MPI Team is incredibly thankful and appreciative of all the time, effort, and downright hard work contributed by its members and all of its users. Thank you all! We couldn't have done this without you!

Increasing the major release number to "2" is indicative of the magnitude of the changes in this release: v2.0.0 is effectively a new generation of Open MPI compared to the v1.10 series (see versions for a description of Open MPI's versioning scheme). Many of the changes are visible to users, but equally importantly, there are many changes "under the hood" that add stability and performance improvements to the inner workings of Open MPI.

Note that this release also retires support for some legacy systems, and is not ABI compatible with the v1.10 series. Users will need to recompile their MPI applications to use Open MPI v2.0.0.

As with any new major series, while the Open MPI community has tested the v2.0.0 release extensively, production users are encouraged to test thoroughly when upgrading from a prior version of Open MPI. After reading the "Changes in behavior compared to prior versions" and "Known issues" sections below, please be sure to report any issues that you find on GitHub or the Open MPI user's mailing list.

Please note: although the v1.10 series is still supported -- and will be for quite some time -- the main focus of Open MPI development is on v2.0.1, v2.1.x, and beyond. The v1.10 series is effectively "frozen" at this point, and will have no new features added (only bug fixes applied as necessary).

Here is a list of the major new features in Open MPI v2.0.0:

  • Open MPI is now MPI-3.1 compliant.
  • Many enhancements to MPI RMA. Open MPI now maps MPI RMA operations onto native RMA operations for those networks which support this capability (a short RMA sketch follows this list).
  • Greatly improved support for MPI_THREAD_MULTIPLE (when configured with --enable-mpi-thread-multiple); a thread-level sketch follows this list.
  • Enhancements to reduce the memory footprint for jobs at scale. A new MCA parameter, "mpi_add_procs_cutoff", is available to set the threshold for using this feature.
  • Completely revamped support for memory registration hooks when using OS-bypass network transports.
  • Significant OMPIO performance improvements and many bug fixes.
  • Add support for PMIx (Process Management Interface for Exascale). Version 1.1.2 of PMIx is included internally in this release.
  • Add support for PLFS file systems in Open MPI I/O.
  • Add support for UCX transport.
  • Simplify build process for Cray XC systems. Add support for using native SLURM.
  • Add a --tune mpirun command line option to simplify setting many environment variables and MCA parameters.
  • Add a new MCA parameter "orte_default_dash_host" to offer an analogue to the existing "orte_default_hostfile" MCA parameter.
  • Add the ability to specify the number of desired slots in the mpirun --host option.
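
For readers who have not used MPI's one-sided (RMA) interface, here is a minimal sketch of the kind of code the RMA item above is talking about; it is an illustration from us, not part of the announcement, and assumes a standard mpicc/mpirun setup. Each rank exposes one integer in a window and rank 0 writes the same value into every peer with MPI_Put:

    /* rma_put.c - minimal MPI RMA sketch (illustration only, not from the
     * announcement). Build with mpicc and launch with mpirun. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, nprocs, *buf;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Each process exposes one int in an RMA window. */
        MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                         MPI_COMM_WORLD, &buf, &win);
        *buf = -1;

        int val = 42;               /* origin buffer; must stay valid and
                                       unmodified until the closing fence */

        MPI_Win_fence(0, win);      /* open the access/exposure epoch */
        if (rank == 0) {
            for (int i = 1; i < nprocs; i++)
                MPI_Put(&val, 1, MPI_INT, i, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);      /* complete the epoch */

        if (rank != 0)
            printf("rank %d received %d\n", rank, *buf);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }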
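
Likewise, the MPI_THREAD_MULTIPLE improvement only helps applications that actually request that level at startup. Here is a minimal sketch (again ours, not from the announcement) that assumes Open MPI was configured with --enable-mpi-thread-multiple and simply checks what thread level the library provides:

    /* thread_level.c - request MPI_THREAD_MULTIPLE and verify what the
     * library actually provides (illustration only). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int provided;

        /* Ask for full multi-threaded support. */
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        if (provided < MPI_THREAD_MULTIPLE) {
            /* The library was not built with --enable-mpi-thread-multiple,
             * so only a lower thread level is available. */
            fprintf(stderr, "MPI_THREAD_MULTIPLE not available (provided=%d)\n",
                    provided);
        }

        MPI_Finalize();
        return 0;
    }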

Changes in behavior compared to prior versions:

  • In environments where mpirun cannot automatically determine the number of slots available (e.g., when using a hostfile that does not specify "slots", or when using --host without specifying a ":N" suffix to hostnames), mpirun now requires the use of "-np N" to specify how many MPI processes to launch.
  • The MPI C++ bindings, which were removed from the MPI standard in v3.0, are no longer built by default and will be removed in some future version of Open MPI. Use the --enable-mpi-cxx-bindings configure option to build the deprecated/removed MPI C++ bindings.
  • ompi_info now shows all components, even if they do not have MCA parameters. The prettyprint output now separates groups with a dashed line.
  • OMPIO is now the default implementation of parallel I/O, with the exception of Lustre parallel filesystems (where ROMIO is still the default). The default selection of OMPIO vs. ROMIO can be controlled via the "--mca io ompio|romio" command line switch to mpirun.
  • Per Open MPI's versioning scheme (see the README), increasing the major version number to 2 indicates that this version is not ABI-compatible with prior versions of Open MPI. You will need to recompile MPI and OpenSHMEM applications to work with this version of Open MPI.
  • Removed checkpoint/restart code due to loss of maintainer. :-(
  • Change the behavior for handling certain signals when using the PSM and PSM2 libraries. Previously, the PSM and PSM2 libraries would trap certain signals in order to generate tracebacks, a mechanism that was found to interfere with Open MPI's own error reporting. If not already set, Open MPI now sets the IPATH_NO_BACKTRACE and HFI_NO_BACKTRACE environment variables to disable PSM/PSM2's handling of these signals.

Removed legacy support:

  • Removed support for OS X Leopard.
  • Removed support for Cray XT systems.
  • Removed VampirTrace.
  • Removed support for Myrinet/MX.
  • Removed legacy collective module: ML.
  • Removed support for Alpha processors.
  • Removed --enable-mpi-profiling configure option.
