Description of problem:
As of now, RGW configuration is blocked if the cluster health is in WARN state.

Version-Release number of selected component (if applicable):
Latest

How reproducible:
Always

Steps to Reproduce:
1. Configure a 5.x cluster
2. Induce a cluster HEALTH_WARN state
3. Try to add an RGW

Actual results:
events:
- 2021-02-19T13:02:54.452416Z service:rgw.india [INFO] "service was created"
- 2021-02-19T13:03:03.666083Z service:rgw.india [ERROR] "Failed to apply: Health not ok, will try again when health ok"

Expected results:
RGW configuration should not be blocked if the cluster health is in WARN state.

Additional info:
Even in RHCS 4.x and prior, RGW could be configured while the cluster was in WARN state.
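The steps above can be sketched with the orchestrator CLI. This is a minimal reproduction sketch, assuming a running cephadm-managed 5.x cluster; the service name "india" matches the events above, and `ceph osd set noout` is used here only as one convenient way to induce a HEALTH_WARN state:

```shell
# Put the cluster into HEALTH_WARN (noout raises an OSDMAP_FLAGS warning)
ceph osd set noout
ceph health
# HEALTH_WARN noout flag(s) set

# Try to add an RGW service while health is WARN
ceph orch apply rgw india

# Watch the service events: the service is created, but cephadm refuses
# to apply it until health returns to OK
ceph orch ls --service_name rgw.india --format yaml

# Clean up the induced warning
ceph osd unset noout
```

With the fix applied, the final `ceph orch ls` step shows the daemons being deployed despite the WARN state instead of the "Failed to apply: Health not ok" event.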
upstream fix PR: https://github.com/ceph/ceph/pull/39877/files

* The ``ceph orch apply rgw`` syntax and behavior have changed. RGW services can now be arbitrarily named (the service name is no longer forced to be ``realm.zone``). The ``--rgw-realm=...`` and ``--rgw-zone=...`` arguments are now optional; if they are omitted, a vanilla single-cluster RGW is deployed. When a realm and zone are provided, the user is responsible for setting up the multisite configuration beforehand: cephadm no longer attempts to create missing realms or zones.
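A sketch of the two deployment modes described in the release note above; the service, realm, zonegroup, and zone names are illustrative:

```shell
# Mode 1: vanilla single-cluster RGW -- arbitrary service name,
# no realm or zone arguments required
ceph orch apply rgw foo

# Mode 2: multisite RGW -- the realm and zone must already exist,
# since cephadm no longer creates them on the user's behalf
radosgw-admin realm create --rgw-realm=myrealm --default
radosgw-admin zonegroup create --rgw-zonegroup=myzg --master --default
radosgw-admin zone create --rgw-zonegroup=myzg --rgw-zone=myzone --master --default
radosgw-admin period update --commit

ceph orch apply rgw foo --rgw-realm=myrealm --rgw-zone=myzone
```

Note that under the old syntax the realm and zone were mandatory and also determined the service name; after this change the service name is chosen freely and the multisite arguments are purely optional.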
Sage backported PR 39877 to pacific in https://github.com/ceph/ceph/pull/40135. This will be in the next weekly rebase I build downstream (March 22nd).
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:3294