Open MPI 1.1 Released

The Open MPI Team, representing a consortium of research, academic, and industry partners, is pleased to announce the release of Open MPI version 1.1. This release contains many new features, performance enhancements, and stability bug fixes. Version 1.1 can be downloaded from the main Open MPI web site or any of its mirrors (mirrors will be updating shortly).

We strongly recommend that all users upgrade to the version 1.1 series, if possible. The 1.0 series will likely have one more bug fix release (v1.0.3), but is generally considered deprecated in favor of the new 1.1 series.

Here is a list of changes in v1.1 as compared to v1.0.x:

  • Various MPI datatype fixes and optimizations.
  • Fixed various problems on the SPARC architecture (e.g., not correctly aligning addresses within structs).
  • Improved various run-time error messages to be clearer about what they mean and where the errors occurred.
  • Various fixes to mpirun's handling of --prefix.
  • Updates and fixes for Cray/Red Storm support.
  • Major improvements to the Fortran 90 MPI bindings:
    • General improvements in compile/linking time and portability between different F90 compilers.
    • Addition of "trivial", "small" (the default), and "medium" Fortran 90 MPI module sizes (v1.0.x's F90 module was equivalent to "medium"). See the README file for more explanation.
    • Fix various MPI F90 interface functions and constant types to match. Thanks to Michael Kluskens for pointing out the problems to us.
  • Allow short messages to use RDMA (vs. send/receive semantics) to a limited number of peers in both the mvapi and openib BTL components. This reduces communication latency over IB channels.
  • Numerous performance improvements throughout the entire code base.
  • Many minor threading fixes.
  • Added a define, OMPI_SKIP_CXX, to prevent mpicxx.h from being included by mpi.h. This allows users to compile C code with a C++ compiler without pulling in the C++ bindings.
  • PERUSE support has been added. To activate it, add --enable-peruse to the configure options. All events described in the PERUSE 2.0 draft are supported, plus one Open MPI extension: PERUSE_COMM_REQ_XFER_CONTINUE, which allows you to see how data is segmented internally when using multiple interfaces or the pipeline engine. However, this version supports only one event of each type attached to a communicator at a time.
  • Added support for running jobs in heterogeneous environments. This currently supports environments with different endianness and different representations of C++ bool and Fortran LOGICAL. Mismatched sizes for other datatypes are not supported.
  • Open MPI now includes an implementation of the MPI-2 One-Sided Communications specification.
  • Open MPI is now configurable in cross-compilation environments. Several Fortran 77 and Fortran 90 tests need to be pre-seeded with results from a config.cache-like file.
  • Added a --debug option to mpirun to generically invoke a parallel debugger.
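
The OMPI_SKIP_CXX define mentioned above can be supplied either on the compiler command line (-DOMPI_SKIP_CXX) or in the source itself. A minimal sketch of the in-source form is below; it assumes an installed MPI, so it is illustrative rather than something you can build without one:

```c
/* Sketch: define OMPI_SKIP_CXX before including mpi.h so that the
 * C++ bindings (mpicxx.h) are not pulled in.  This lets plain C code
 * like the following be compiled by a C++ compiler. */
#define OMPI_SKIP_CXX 1
#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    /* ... ordinary C MPI code, no C++ bindings involved ... */
    MPI_Finalize();
    return 0;
}
```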
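For the new MPI-2 one-sided support, the standard window/epoch pattern now works in Open MPI. The following is a hedged sketch (the values and variable names are illustrative, not from the release notes) in which rank 0 writes an integer directly into rank 1's exposed memory with MPI_Put; it requires an MPI installation and at least two ranks under mpirun:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, value = -1;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Expose one int of local memory to all ranks in the communicator. */
    MPI_Win_create(&value, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);              /* open the access epoch */
    if (rank == 0) {
        int src = 42;                   /* illustrative payload */
        /* Write src into rank 1's window at displacement 0. */
        MPI_Put(&src, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);              /* close the epoch; the Put is complete */

    if (rank == 1)
        printf("rank 1 now holds %d\n", value);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```

Fence synchronization is the simplest of the MPI-2 one-sided modes; the specification also defines post/start/complete/wait and lock/unlock epochs.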

©2005-2012 Seagrove LLC. Some rights reserved. Except where otherwise noted, this site is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 2.5 License. The Cluster Monkey Logo and Monkey Character are Trademarks of Seagrove LLC.