Bug 1262063 - [RFE] Upgrade a Ceph cluster
Status: CLOSED CURRENTRELEASE
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 11.0 (Ocata)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ga
Target Release: 11.0 (Ocata)
Assigned To: Angus Thomas
QA Contact: Yogev Rabl
Keywords: FutureFeature
Depends On:
Blocks: 1291943
Reported: 2015-09-10 14:26 EDT by Neil Levine
Modified: 2016-10-19 12:55 EDT
CC List: 7 users

See Also:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2016-10-19 12:55:09 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments

  None
Description Neil Levine 2015-09-10 14:26:03 EDT
OSP-Director should be able to upgrade a running Ceph cluster, updating the MONs first and then the OSDs.
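
For illustration only (this is not how director implements it), a minimal sketch of the ordering the RFE asks for: upgrade and restart the monitors first, then the OSD nodes, one node at a time, waiting for the cluster to return to HEALTH_OK in between. The node names, SSH access from the undercloud, and the yum/systemd commands are assumptions made for the sketch; a production workflow would also set the noout flag while OSD daemons restart.

import subprocess
import time

# Hypothetical node names; a real deployment would take these from the
# overcloud inventory.
MON_HOSTS = ["overcloud-controller-0", "overcloud-controller-1",
             "overcloud-controller-2"]
OSD_HOSTS = ["overcloud-cephstorage-0", "overcloud-cephstorage-1"]

def run(host, command):
    # Run a command on a node over SSH; raise if it fails.
    subprocess.run(["ssh", host, command], check=True)

def wait_for_health_ok(timeout=600):
    # Poll 'ceph health' from the first monitor until the cluster reports
    # HEALTH_OK again, so only one node is ever disrupted at a time.
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = subprocess.run(["ssh", MON_HOSTS[0], "ceph health"],
                                capture_output=True, text=True)
        if "HEALTH_OK" in result.stdout:
            return
        time.sleep(10)
    raise TimeoutError("cluster did not return to HEALTH_OK")

# Step 1: upgrade and restart the monitors first, one node at a time.
for mon in MON_HOSTS:
    run(mon, "yum -y update 'ceph*'")
    run(mon, "systemctl restart ceph-mon.target")
    wait_for_health_ok()

# Step 2: then upgrade and restart the OSD nodes, one at a time.
for osd in OSD_HOSTS:
    run(osd, "yum -y update 'ceph*'")
    run(osd, "systemctl restart ceph-osd.target")
    wait_for_health_ok()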
Comment 3 Neil Levine 2015-12-15 17:49:38 EST
Jarda,

Is this now supported for OSP-8? 

N
Comment 8 Giulio Fidente 2016-10-11 08:05:41 EDT
This has been implemented in OSPd 10 with the upgrade of Ceph from Hammer to Jewel
Comment 9 Sean Cohen 2016-10-19 12:55:09 EDT
(In reply to Giulio Fidente from comment #8)
> This has been implemented in OSPd 10 with the upgrade of Ceph from Hammer to
> Jewel

Agreed,
Closing current release.
Sean
