Bug 463308 - [LTC 6.0 FEAT] 201403:OpenMPI support for POWER
Summary: [LTC 6.0 FEAT] 201403:OpenMPI support for POWER
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: openmpi
Version: 6.0
Hardware: ppc64
OS: All
Priority: high
Severity: high
Target Milestone: alpha
Target Release: 6.0
Assignee: Doug Ledford
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks: 356741 464193 554559
 
Reported: 2008-09-22 21:10 UTC by IBM Bug Proxy
Modified: 2010-11-11 14:52 UTC (History)
6 users

Fixed In Version: openmpi-1.3.3-1
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-11-11 14:52:01 UTC
Target Upstream Version:
Embargoed:



Description IBM Bug Proxy 2008-09-22 21:10:36 UTC
=Comment: #0=================================================
Emily J. Ratliff <emilyr.com> - 2008-09-16 18:27 EDT
1. Feature Overview:
Feature Id:	[201403]
a. Name of Feature:	OpenMPI support for POWER
b. Feature Description
Support OpenMPI on POWER-based systems. This will consist mainly of changes in two areas.
1) Large-page support
Large pages currently cannot be used with OpenMPI because the default build uses its own allocator.
Create an allocator that is libhugetlbfs-aware.
2) Compiler support
With current OpenMPI builds, the wrapper compilers (mpicc and friends: mpic++, mpif*) default to the same compiler OpenMPI was built with. Add an mpi-selector that will allow clients that have other compilers available to use those compilers from mpicc.

2. Feature Details:
Sponsor:	PPC - P
Architectures:
ppc64

Arch Specificity: Purely Arch Specific Code
Affects Core Kernel: Yes
Delivery Mechanism: Direct from community
Category:	Power
Request Type:	Package - Update Version
d. Upstream Acceptance:	In Progress
Sponsor Priority	1
f. Severity: High
IBM Confidential:	no
Code Contribution:	IBM code
g. Component Version Target:	Open MPI release v1.3

3. Business Case
This is needed to support the HPC stack

4. Primary contact at Red Hat: 
John Jarvis
jjarvis

5. Primary contacts at Partner:
Project Management Contact:
Michael Hohnbaum, hbaum.com, 503-578-5486

Technical contact(s):
Tony Breeds, tbreeds.com
Anton Blanchard, antonb.com

IBM Manager:
Pat Gaughen, gaughen.com

Comment 1 Bill Nottingham 2008-10-02 15:27:31 UTC
Changing OpenMPI to use a different allocator based on hugetlbfs sounds like a large change. Has this been discussed upstream?

Comment 2 IBM Bug Proxy 2008-10-08 14:34:22 UTC
(In reply to comment #4)
> ------- Comment From notting 2008-10-02 11:27:31 EDT-------
> Changing OpenMPI to use a different allocator based on hugetlbfs sounds like a
> large change. Has this been discussed upstream?
>

The default allocator will be unchanged; we're talking about adding another allocator (selectable at runtime) that will use huge pages if available.  As I understand it, the idea of the change has been discussed upstream (in project telecons), and we're looking to have the code reviewable to discuss which version(s) of OpenMPI will take the change.

Comment 3 IBM Bug Proxy 2009-03-02 18:30:52 UTC
please pick up version 1.3.1 or later
http://www.open-mpi.org/software/ompi/v1.3/

Comment 4 Doug Ledford 2009-09-15 13:58:29 UTC
We have version 1.3.3 at the moment.  If that includes the allocator, then we are already good.  As far as multiple compilers is concerned, we've made it easy for people to build their own packages with other compilers, but we obviously don't ship those ourselves.
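For the compiler side, two mechanisms cover the "build with another compiler" case: the Open MPI wrapper-compiler environment variables (OMPI_CC and friends, documented in the mpicc man page) and the mpi-selector tool shipped alongside RHEL's MPI packages. A usage sketch, where xlc stands in for whatever alternate compiler is installed and the stack name is illustrative:

```shell
# Override the underlying compiler for one mpicc invocation.
OMPI_CC=xlc mpicc hello.c -o hello

# Or switch the default MPI environment with mpi-selector.
mpi-selector --list                      # show installed MPI stacks
mpi-selector --set openmpi-1.3.3 --user  # select one for this user
```

The environment-variable route changes nothing system-wide, which is why shipping every compiler combination as packages is unnecessary.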

Comment 5 IBM Bug Proxy 2009-09-24 14:11:21 UTC
------- Comment From mjwolf.com 2009-09-24 10:05 EDT-------
Open MPI 1.3.3 will be ok

Comment 7 IBM Bug Proxy 2010-05-24 19:50:49 UTC
------- Comment From hohnbaum.com 2010-05-24 15:44 EDT-------
Brad Benton reports: tested the two scenarios with various versions of Open MPI, including 1.4.1, which is currently what's included in the latest RHEL6 snaps.  The integration with the xl compilers is already in place at RiceU, and the libhugetlbfs integration has been tested both by me and by members of the performance team.  So, from my perspective, it is VERIFIED.

Comment 9 releng-rhel@redhat.com 2010-11-11 14:52:01 UTC
Red Hat Enterprise Linux 6.0 is now available and should resolve
the problem described in this bug report. This report is therefore being closed
with a resolution of CURRENTRELEASE. You may reopen this bug report if the
solution does not work for you.

