Bug 1396956 - Failure to upgrade 1.3->2.0 with different rgw_region_root_pool
Status: CLOSED ERRATA
Product: Red Hat Ceph Storage
Classification: Red Hat
Component: RGW
Version: 2.0
Hardware: Unspecified   OS: Unspecified
Priority: high   Severity: high
Target Milestone: rc
Target Release: 2.3
Assigned To: Orit Wasserman
QA Contact: vidushi
Docs Contact: Erin Donnelly
Keywords: Reopened
Depends On:
Blocks: 1412948 1437916
Reported: 2016-11-21 04:37 EST by Orit Wasserman
Modified: 2017-07-30 11:47 EDT
CC List: 22 users

See Also:
Fixed In Version: RHEL: ceph-10.2.7-2.el7cp Ubuntu: ceph_10.2.7-3redhat1xenial
Doc Type: Bug Fix
Doc Text:
.ceph-radosgw starts as expected after upgrading from 1.3 to 2 when a non-default value is used for rgw_region_root_pool and rgw_zone_root_pool
Previously, the `ceph-radosgw` service did not start after upgrading the Ceph Object Gateway from 1.3 to 2 when the Gateway used non-default values for the `rgw_region_root_pool` and `rgw_zone_root_pool` parameters. This bug has been fixed, and the `ceph-radosgw` service now starts as expected.
Story Points: ---
Clone Of: 1343189
Environment:
Last Closed: 2017-06-19 09:27:40 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
1396956 (29.13 KB, text/plain), attached 2017-05-23 05:26 EDT by vidushi


External Trackers
Red Hat Knowledge Base (Solution) 2960961: last updated 2017-03-09 08:48 EST
Ceph Project Bug Tracker 17963: last updated 2016-11-21 04:37 EST
Red Hat Product Errata RHBA-2017:1497 (normal, SHIPPED_LIVE): Red Hat Ceph Storage 2.3 bug fix and enhancement update, last updated 2017-06-19 13:24:11 EDT

Comment 2 Orit Wasserman 2016-11-21 04:38:55 EST
upstream fix: https://github.com/ceph/ceph/pull/12076
Comment 3 Orit Wasserman 2016-11-21 08:59:18 EST
If rgw_region_root_pool and rgw_zone_root_pool are set to the same value, we have a simple workaround:
The user needs to add rgw_zonegroup_root_pool (rgw zonegroup root pool) to the configuration with the same value as rgw_region_root_pool.
Comment 4 Orit Wasserman 2016-12-02 09:55:22 EST
For the workaround:
The user should also set rgw_realm_root_pool and rgw_period_root_pool.
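Putting comments 3 and 4 together, a minimal sketch of the workaround in ceph.conf might look like the following. The pool name .us.rgw.root and the client section name are placeholders for whatever non-default value the existing 1.3 deployment already uses; this is an illustration of the workaround, not a verified procedure from this bug.

    [client.rgw.gateway-node1]
    # Existing non-default root pools carried over from the 1.3 configuration.
    rgw_region_root_pool    = .us.rgw.root
    rgw_zone_root_pool      = .us.rgw.root
    # Added for the upgrade to 2.x: point the renamed root-pool options at the same pool.
    rgw_zonegroup_root_pool = .us.rgw.root
    rgw_realm_root_pool     = .us.rgw.root
    rgw_period_root_pool    = .us.rgw.root

With these options in place, restarting the ceph-radosgw service on the gateway node should let it locate the existing region and zone metadata under the new option names.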
Comment 5 Ken Dreyer (Red Hat) 2017-01-03 16:58:37 EST
Orit, would you please check Loic's backport to jewel upstream and approve it
if it properly fixes this issue?
Comment 6 Orit Wasserman 2017-01-04 14:19:13 EST
(In reply to Ken Dreyer (Red Hat) from comment #5)
> Orit, would you please check Loic's backport to jewel upstream and approve it
> if it properly fixes this issue?

The backport looks good, but it still needs an additional config change for a successful upgrade. I added a known issue.
Comment 57 vidushi 2017-05-23 05:26 EDT
Created attachment 1281409 [details]
1396956
Comment 60 John Poelstra 2017-05-24 11:17:42 EDT
Discussed at the program meeting; development believes what QE is seeing is not a bug. Harish believes the credentials to this machine are the "usual" ones and will investigate today.
Comment 65 errata-xmlrpc 2017-06-19 09:27:40 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1497
