Open MPI

  • Stable release: 5.0.9[1] / 30 October 2025
  • Operating system: Unix, Linux, macOS, FreeBSD[2]
  • Platform: Cross-platform
  • Type: Library
  • License: New BSD License
  • Website: www.open-mpi.org

Open MPI is a Message Passing Interface (MPI) library project combining technologies and resources from several other projects (FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI). It is used by many TOP500 supercomputers, including Roadrunner, which was the world's fastest supercomputer from June 2008 to November 2009,[3] and the K computer, the fastest supercomputer from June 2011 to June 2012.[4][5]

Overview

Open MPI represents the merger of three well-known MPI implementations:

  • FT-MPI from the University of Tennessee
  • LA-MPI from Los Alamos National Laboratory
  • LAM/MPI from Indiana University

with contributions from the PACX-MPI team at the University of Stuttgart. These four institutions comprise the founding members of the Open MPI development team.[7]

The Open MPI developers selected these MPI implementations because each excelled in one or more areas. The project aims to combine the best ideas and technologies of the individual projects into a single open-source MPI implementation.[7] The Open MPI project specifies several top-level goals:

  • to create a free, open-source, peer-reviewed, production-quality complete MPI implementation
  • to provide extremely high, competitive performance (low latency, high bandwidth)
  • to involve the high-performance computing community directly with external development and feedback
  • to provide a stable platform for third-party research and commercial development
  • to help prevent the "forking problem" common to other MPI projects
  • to support a wide variety of high-performance computing platforms and environments

Code modules

The Open MPI code has three major code modules:

  • OMPI – the MPI API code
  • ORTE – the Open Run-Time Environment
  • OPAL – the Open Portable Access Layer
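
The OMPI module implements the MPI API that applications are written against, while the runtime layers handle process startup and portability across hardware. A minimal sketch of a standard MPI program as it would be built and run with Open MPI (the code below is plain MPI, not Open MPI-specific):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        /* Start the MPI runtime; in Open MPI this enters the OMPI layer. */
        MPI_Init(&argc, &argv);

        /* Each process learns its rank and the total number of processes. */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

Such a program is typically compiled with Open MPI's wrapper compiler and launched through its runtime, e.g. mpicc hello.c -o hello followed by mpirun -np 4 ./hello; launching processes across nodes is the role historically played by ORTE.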

Commercial implementations

  • Sun HPC ClusterTools – beginning with version 7, Sun switched to Open MPI
  • bullx MPI – in 2010, Bull announced the release of bullx MPI, based on Open MPI[11]

Consortium

Open MPI development is performed within a consortium of many industrial and academic partners. The consortium also hosts several related software projects, such as the hwloc (Hardware Locality) library, which discovers and models the hardware topology of parallel platforms.
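
hwloc exposes this topology information to programs through a C API. A minimal sketch of its use (assuming hwloc is installed; link with -lhwloc):

    #include <stdio.h>
    #include <hwloc.h>

    int main(void)
    {
        hwloc_topology_t topology;
        int ncores;

        /* Allocate a topology object and probe the current machine. */
        hwloc_topology_init(&topology);
        hwloc_topology_load(topology);

        /* Count the processor cores that were discovered. */
        ncores = hwloc_get_nbobjs_by_type(topology, HWLOC_OBJ_CORE);
        printf("hwloc detected %d core(s)\n", ncores);

        hwloc_topology_destroy(topology);
        return 0;
    }

MPI libraries such as Open MPI use this kind of information to bind processes to cores and to choose communication paths that match the machine's memory hierarchy.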

References

  1. ^ "Release 5.0.9". 30 October 2025. Retrieved 30 October 2025.
  2. ^ "FreshPorts -- net/Openmpi2: High Performance Message Passing Library".
  3. ^ Jeff Squyres. "Open MPI: 10^15 Flops Can't Be Wrong" (PDF). Open MPI Project. Retrieved 2011-09-27.
  4. ^ "Programming on K computer" (PDF). Fujitsu. Retrieved 2012-01-17.
  5. ^ "Open MPI powers 8 petaflops". Cisco Systems. Archived from the original on 2011-06-28. Retrieved 2011-09-27.
  6. ^ Gabriel, Edgar (2004). "Open MPI: Goals, Concept, and Design of a Next Generation MPI Implementation". Recent Advances in Parallel Virtual Machine and Message Passing Interface. Lecture Notes in Computer Science. Vol. 3241. Springer. pp. 97–104. doi:10.1007/978-3-540-30218-6_19.
  7. ^ a b c Juha-Pekka P. Koskinen (2012). Parallel Programming Models and Tools (Thesis). Aalto University. Retrieved 11 March 2026.
  8. ^ Balaji, Pavan (2016). "MPI on Modern Hardware". Proceedings of the International Conference on High Performance Computing. IEEE.
  9. Doerfler, David (2005). "MPI: Past and Present". Proceedings of the IEEE. 93 (2): 339–355.
  10. Hoefler, Torsten (2013). "Performance and Scalability of MPI Implementations". Journal of Parallel and Distributed Computing. 73: 1454–1464.
  11. Aurélie Negro. "Bull launches bullx supercomputer suite". Bull SAS. Archived from the original on 2014-04-21. Retrieved 2013-09-27.