Bug 1934589
| Summary: | [cephadm][RGW]: RGW creation fails on the secondary site of a multisite | | |
|---|---|---|---|
| Product: | [Red Hat Storage] Red Hat Ceph Storage | Reporter: | Tejas <tchandra> |
| Component: | Cephadm | Assignee: | Daniel Pivonka <dpivonka> |
| Status: | CLOSED ERRATA | QA Contact: | Tejas <tchandra> |
| Severity: | high | Docs Contact: | Karen Norteman <knortema> |
| Priority: | unspecified | | |
| Version: | 5.0 | CC: | kdreyer, sewagner, vereddy |
| Target Milestone: | --- | | |
| Target Release: | 5.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | ceph-16.1.0-997.el8cp | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-08-30 08:28:49 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Tejas 2021-03-03 14:37:22 UTC
upstream fix PR: https://github.com/ceph/ceph/pull/39877/files

* The ``ceph orch apply rgw`` syntax and behavior have changed. RGW services can now be named arbitrarily (the name is no longer forced to be `realm.zone`). The ``--rgw-realm=...`` and ``--rgw-zone=...`` arguments are now optional: if they are omitted, a vanilla single-cluster RGW is deployed. When a realm and zone are provided, the user is now responsible for setting up the multisite configuration beforehand; cephadm no longer attempts to create missing realms or zones. (Illustrative sketches of the zoneless form, a spec-file equivalent, and verification commands are appended after the errata note below.)

Multisite example setup commands (two clusters with OSDs are needed):

Cluster 1:

```
radosgw-admin realm create --default --rgw-realm=gold
radosgw-admin zonegroup create --rgw-zonegroup=us --master --default --endpoints=http://<ip vm-00>:80
radosgw-admin zone create --rgw-zone=us-east --master --rgw-zonegroup=us --endpoints=http://<ip vm-00>:80 --access-key=1234567 --secret=098765 --default
radosgw-admin period update --rgw-realm=gold --commit
radosgw-admin user create --uid=repuser --display-name="Replication_user" --access-key=1234567 --secret=098765 --system
ceph orch apply rgw rgwserviceid1 gold us-east --placement=vm-00
```

Cluster 2:

```
radosgw-admin realm pull --rgw-realm=gold --url=http://<ip vm-00>:80 --access-key=1234567 --secret=098765 --default
radosgw-admin period pull --url=http://<ip vm-00>:80 --access-key=1234567 --secret=098765
radosgw-admin zone create --rgw-zone=us-west --rgw-zonegroup=us --endpoints=http://<ip vm-03>:80 --access-key=1234567 --secret=098765
radosgw-admin period update --rgw-realm=gold --commit
ceph orch apply rgw rgwserviceid1 gold us-west --placement=vm-03
```

Sage backported PR 39877 to pacific in https://github.com/ceph/ceph/pull/40135. This will be in the next weekly rebase I build downstream (March 22nd).

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294
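For illustration of the zoneless form described above: omitting the realm and zone arguments deploys a plain single-cluster RGW. A minimal sketch; the service name `myrgw` and the host names are hypothetical and not taken from this bug:

```
# With the new syntax, no --rgw-realm/--rgw-zone is required: cephadm deploys
# a vanilla single-cluster RGW service named "myrgw" on the listed hosts.
ceph orch apply rgw myrgw --placement="host1 host2"
```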
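As an alternative to the positional CLI form, cephadm can also consume a service specification file. A sketch of a declarative equivalent to the final Cluster 2 command, assuming the same realm, zone, and placement; the spec field names (`rgw_realm`, `rgw_zone`) follow the upstream cephadm RGW documentation, so verify them against your Ceph version:

```
# Sketch of a declarative equivalent to:
#   ceph orch apply rgw rgwserviceid1 gold us-west --placement=vm-03
cat > rgw-us-west.yaml <<'EOF'
service_type: rgw
service_id: rgwserviceid1
placement:
  hosts:
    - vm-03
spec:
  rgw_realm: gold      # realm must already exist (pulled from cluster 1)
  rgw_zone: us-west    # zone must already exist (created above)
EOF
ceph orch apply -i rgw-us-west.yaml
```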
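Finally, a short sketch of how the multisite setup above can be sanity-checked with standard `radosgw-admin` query commands (exact output varies by version; which zones appear depends on where you run them):

```
# On either cluster, after the period commit: confirm the realm and period.
radosgw-admin realm list
radosgw-admin period get

# On the secondary (us-west) cluster: check replication against the master
# zone; a healthy pair reports metadata and data sync as caught up.
radosgw-admin sync status
```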