Red Hat Bugzilla – Bug 463308
[LTC 6.0 FEAT] 201403:OpenMPI support for POWER
Last modified: 2010-11-11 09:52:01 EST
Emily J. Ratliff <firstname.lastname@example.org> - 2008-09-16 18:27 EDT
1. Feature Overview:
Feature Id: 
a. Name of Feature: OpenMPI support for POWER
b. Feature Description
Support OpenMPI on POWER-based systems. This will consist mainly of changes in two areas.
1) Largepage issues
Large pages currently cannot be used with OpenMPI, since the default build uses its own allocator.
Create an allocator that is libhugetlbfs-aware.
2) Compiler Support
With current OpenMPI builds, the wrapper compilers (mpicc and friends: mpic++,
mpif*) default to the same compiler that OpenMPI itself was built with. Add an
mpi-selector that will allow clients that have other compilers available to use
those compilers from mpicc.
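As an aside, Open MPI's wrapper compilers already honor the OMPI_CC and OMPI_CXX
environment variables (with analogous variables for the Fortran wrappers), so the
underlying compiler can be swapped per invocation even before an mpi-selector is
in place. A minimal sketch for exercising this (the file name hello.c and the
xlc invocation are just examples):

    /* hello.c -- minimal wrapper-compiler check.
     * Default compiler:                    mpicc hello.c -o hello
     * Override (Open MPI honors OMPI_CC):  OMPI_CC=xlc mpicc hello.c -o hello
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        printf("rank %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

Running the resulting binary under mpirun confirms that the alternate compiler's
output links and runs against the installed Open MPI.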
2. Feature Details:
Sponsor: PPC - P
Arch Specificity: Purely Arch Specific Code
Affects Core Kernel: Yes
Delivery Mechanism: Direct from community
Request Type: Package - Update Version
d. Upstream Acceptance: In Progress
Sponsor Priority: 1
f. Severity: High
IBM Confidential: no
Code Contribution: IBM code
g. Component Version Target: Open MPI release v1.3
3. Business Case
This is needed to support the HPC stack.
4. Primary contact at Red Hat:
5. Primary contacts at Partner:
Project Management Contact:
Michael Hohnbaum, email@example.com, 503-578-5486
Tony Breeds, firstname.lastname@example.org
Anton Blanchard, email@example.com
Pat Gaughen, firstname.lastname@example.org
Changing OpenMPI to use a different allocator based on hugetlbfs sounds like a large change. Has this been discussed upstream?
(In reply to comment #4)
> ------- Comment From email@example.com 2008-10-02 11:27:31 EDT-------
> Changing OpenMPI to use a different allocator based on hugetlbfs sounds like a
> large change. Has this been discussed upstream?
The default allocator will be unchanged; we're talking about adding another allocator (selectable at runtime) that will use hugepages if available. As I understand it, the idea of the change has been discussed upstream (in project teleconferences), and we're looking to have the code reviewable so we can discuss which version(s) of OpenMPI will take the change.
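For illustration only, a libhugetlbfs-aware allocation path with a graceful
fallback could look roughly like the sketch below. It uses get_huge_pages(),
free_huge_pages(), and gethugepagesize() from libhugetlbfs (link with
-lhugetlbfs); the hp_alloc/hp_free names are hypothetical, and this is just the
shape of the approach, not the actual Open MPI component:

    #include <stdlib.h>
    #include <hugetlbfs.h>

    /* Hypothetical allocator: try huge pages first, fall back to malloc().
     * *from_hugepages records which path succeeded so the caller can
     * release the memory with the matching free routine. */
    static void *hp_alloc(size_t len, int *from_hugepages)
    {
        long hpsize = gethugepagesize();   /* <= 0 if hugetlbfs is unusable */
        if (hpsize > 0) {
            /* get_huge_pages() requires a multiple of the huge page size. */
            size_t rounded = (len + (size_t)hpsize - 1) & ~((size_t)hpsize - 1);
            void *p = get_huge_pages(rounded, GHP_DEFAULT);
            if (p != NULL) {
                *from_hugepages = 1;
                return p;
            }
        }
        *from_hugepages = 0;
        return malloc(len);                /* default allocator, unchanged */
    }

    static void hp_free(void *p, int from_hugepages)
    {
        if (from_hugepages)
            free_huge_pages(p);
        else
            free(p);
    }

Because the hugepage path is taken only when gethugepagesize() reports support
and the reservation succeeds, the default behavior is preserved, matching the
"selectable at runtime" design described above.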
Please pick up version 1.3.1 or later.
We have version 1.3.3 at the moment. If that includes the allocator, then we are already good. As far as multiple compilers are concerned, we've made it easy for people to build their own packages with other compilers, but we obviously don't ship those ourselves.
------- Comment From firstname.lastname@example.org 2009-09-24 10:05 EDT-------
Open MPI 1.3.3 will be OK.
------- Comment From email@example.com 2010-05-24 15:44 EDT-------
Brad Benton reports: tested the two scenarios with various versions of Open MPI, including 1.4.1, which is currently what's included in the latest RHEL6 snaps. The integration with the XL compilers is already in place at RiceU, and the libhugetlbfs integration has been tested both by me and by members of the performance team. So, from my perspective, it is VERIFIED.
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.