Bug 463308 - [LTC 6.0 FEAT] 201403:OpenMPI support for POWER
Status: CLOSED CURRENTRELEASE
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: openmpi
Version: 6.0
Hardware: ppc64
OS: All
Priority: high
Severity: high
Target Milestone: alpha
Target Release: 6.0
Assigned To: Doug Ledford
QA Contact: Red Hat Kernel QE team
Keywords: FutureFeature
Depends On:
Blocks: 356741 464193 554559
Reported: 2008-09-22 17:10 EDT by IBM Bug Proxy
Modified: 2010-11-11 09:52 EST
CC List: 6 users

Fixed In Version: openmpi-1.3.3-1
Doc Type: Enhancement
Last Closed: 2010-11-11 09:52:01 EST
Attachments: None
Description IBM Bug Proxy 2008-09-22 17:10:36 EDT
=Comment: #0=================================================
Emily J. Ratliff <emilyr@us.ibm.com> - 2008-09-16 18:27 EDT
1. Feature Overview:
Feature Id:	[201403]
a. Name of Feature:	OpenMPI support for POWER
b. Feature Description
Support OpenMPI on POWER-based systems. This will consist mainly of changes in two areas:
1) Large page issues
Large pages currently cannot be used with OpenMPI because the default build uses its own allocator.
Create an allocator that is libhugetlbfs-aware.
2) Compiler support
Current OpenMPI builds default to the compiler OpenMPI itself was built with when mpicc (and friends mpic++
and mpif*) are invoked. Add an mpi-selector that will allow clients that have other compilers available to use
those compilers from mpicc (see the sketch after this list).
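
As a minimal sketch of the compiler-support item, the program below is the kind of code the mpicc wrapper compiles. It is illustrative only and not part of the feature code; the comments assume Open MPI's usual wrapper behaviour, where the underlying compiler can be overridden per invocation with OMPI_CC/OMPI_CXX/OMPI_FC or chosen system-wide with mpi-selector.

/* hello_mpi.c -- minimal program for exercising the mpicc wrapper.
 * Illustrative only.  Typical usage (assuming Open MPI's wrappers):
 *     mpicc hello_mpi.c -o hello_mpi                (default underlying compiler)
 *     OMPI_CC=xlc mpicc hello_mpi.c -o hello_mpi    (per-invocation override)
 *     mpirun -np 4 ./hello_mpi
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}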

2. Feature Details:
Sponsor:	PPC - P
Architectures:
ppc64

Arch Specificity: Purely Arch Specific Code
Affects Core Kernel: Yes
Delivery Mechanism: Direct from community
Category:	Power
Request Type:	Package - Update Version
d. Upstream Acceptance:	In Progress
Sponsor Priority	1
f. Severity: High
IBM Confidential:	no
Code Contribution:	IBM code
g. Component Version Target:	Open MPI release v1.3

3. Business Case
This is needed to support the HPC stack.

4. Primary contact at Red Hat: 
John Jarvis
jjarvis@redhat.com

5. Primary contacts at Partner:
Project Management Contact:
Michael Hohnbaum, hbaum@us.ibm.com, 503-578-5486

Technical contact(s):
Tony Breeds, tbreeds@au1.ibm.com
Anton Blanchard, antonb@au1.ibm.com

IBM Manager:
Pat Gaughen, gaughen@us.ibm.com
Comment 1 Bill Nottingham 2008-10-02 11:27:31 EDT
Changing OpenMPI to use a different allocator based on hugetlbfs sounds like a large change. Has this been discussed upstream?
Comment 2 IBM Bug Proxy 2008-10-08 10:34:22 EDT
(In reply to comment #4)
> ------- Comment From notting@redhat.com 2008-10-02 11:27:31 EDT-------
> Changing OpenMPI to use a different allocator based on hugetlbfs sounds like a
> large change. Has this been discussed upstream?
>

The default allocator will be unchanged; we're talking about adding another allocator (selectable at runtime) that will use huge pages if available.  As I understand it, the idea of the change has been discussed upstream (in project telecons), and we're looking to have the code reviewable so we can discuss which version(s) of OpenMPI will take the change.
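
To make the mechanism concrete, here is a hedged sketch (not the proposed Open MPI allocator, and not going through libhugetlbfs itself) of how a huge-page-aware allocator can obtain backing memory on Linux and fall back to ordinary pages when no huge pages are configured. The 2 MB page size is an assumption for illustration; POWER systems commonly use 16 MB huge pages.

/* Illustrative sketch only -- not the Open MPI code under discussion.
 * Shows the underlying Linux mechanism a libhugetlbfs-aware allocator
 * builds on: request huge-page-backed anonymous memory, and fall back
 * to ordinary pages when no huge pages are available.
 */
#define _GNU_SOURCE
#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

#define HUGE_PAGE_SIZE (2UL * 1024 * 1024)   /* assumed size; often 16 MB on POWER */

static void *alloc_large_page_buffer(size_t len)
{
    /* Round the request up to a whole number of huge pages. */
    size_t rounded = (len + HUGE_PAGE_SIZE - 1) & ~(HUGE_PAGE_SIZE - 1);

    void *p = mmap(NULL, rounded, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        /* No huge pages configured: fall back to ordinary pages. */
        p = mmap(NULL, rounded, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    }
    return p == MAP_FAILED ? NULL : p;
}

int main(void)
{
    void *buf = alloc_large_page_buffer(8UL * 1024 * 1024);
    printf("buffer at %p\n", buf);
    return 0;
}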
Comment 3 IBM Bug Proxy 2009-03-02 13:30:52 EST
Please pick up version 1.3.1 or later:
http://www.open-mpi.org/software/ompi/v1.3/
Comment 4 Doug Ledford 2009-09-15 09:58:29 EDT
We have version 1.3.3 at the moment.  If that includes the allocator, then we are already good.  As far as multiple compilers are concerned, we've made it easy for people to build their own packages with other compilers, but we obviously don't ship those ourselves.
Comment 5 IBM Bug Proxy 2009-09-24 10:11:21 EDT
------- Comment From mjwolf@us.ibm.com 2009-09-24 10:05 EDT-------
Open MPI 1.3.3 will be OK.
Comment 7 IBM Bug Proxy 2010-05-24 15:50:49 EDT
------- Comment From hohnbaum@us.ibm.com 2010-05-24 15:44 EDT-------
Brad Benton reports: I tested the two scenarios with various versions of Open MPI, including 1.4.1, which is currently what's included in the latest RHEL6 snaps.  The integration with the XL compilers is already in place at RiceU, and the libhugetlbfs integration has been tested both by me and by members of the performance team.  So, from my perspective, it is VERIFIED.
Comment 9 releng-rhel@redhat.com 2010-11-11 09:52:01 EST
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.
