Bug 1931386

Summary: [RFE] Allow RGW configuration when cluster is in warning state
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Vasishta <vashastr>
Component: Cephadm
Assignee: Juan Miguel Olmo <jolmomar>
Status: CLOSED ERRATA
QA Contact: Tejas <tchandra>
Severity: high
Docs Contact: Karen Norteman <knortema>
Priority: unspecified
Version: 5.0
CC: dpivonka, kdreyer, sewagner, vereddy
Target Milestone: ---
Keywords: FutureFeature
Target Release: 5.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: ceph-16.1.0-997.el8cp
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2021-08-30 08:28:20 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1820257

Description Vasishta 2021-02-22 10:39:31 UTC
Description of problem:
As of now, RGW configuration is blocked if the cluster health is in a WARN state.

Version-Release number of selected component (if applicable):
Latest

How reproducible:
Always

Steps to Reproduce:
1. Configure a 5.x cluster 
2. Induce cluster health warn state
3. Try to add an RGW
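The steps above can be sketched with the ceph CLI. This is an illustrative walk-through, not the exact commands the reporter ran: the service name `india` is taken from the log below, and setting the `noout` flag is just one easy, reversible way to put an otherwise healthy cluster into HEALTH_WARN.

```shell
# Step 2: induce a HEALTH_WARN state (noout flag triggers a health warning)
ceph osd set noout
ceph health            # should now report HEALTH_WARN

# Step 3: try to add an RGW service while the cluster is in WARN
ceph orch apply rgw india

# Inspect the service and its events to see whether deployment proceeded
ceph orch ls rgw

# Cleanup: clear the flag to return to HEALTH_OK
ceph osd unset noout
```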

Actual results:
events:
- 2021-02-19T13:02:54.452416Z service:rgw.india [INFO] "service was created"
- '2021-02-19T13:03:03.666083Z service:rgw.india [ERROR] "Failed to apply: Health
  not ok, will try again when health ok"'

Expected results:
RGW configuration should not be blocked when the cluster health is in a WARN state.

Additional info:
(even in RHCS 4.x and prior, RGW could be configured when cluster was in warn state)

Comment 1 Daniel Pivonka 2021-03-11 21:41:26 UTC
upstream fix PR: https://github.com/ceph/ceph/pull/39877/files

* The ``ceph orch apply rgw`` syntax and behavior have changed.  RGW
  services can now be arbitrarily named (the name is no longer forced
  to be `realm.zone`).  The ``--rgw-realm=...`` and ``--rgw-zone=...``
  arguments are now optional; if they are omitted, a vanilla
  single-cluster RGW is deployed.  When the realm and zone are
  provided, the user is now responsible for setting up the multisite
  configuration beforehand; cephadm no longer attempts to create
  missing realms or zones.
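Under the changed syntax described above, deployment looks roughly like the following. The service name `foo`, realm `myrealm`, and zone `myzone` are illustrative placeholders, not values from this bug:

```shell
# Simple single-cluster RGW: realm/zone arguments are no longer required,
# and the service name is arbitrary.
ceph orch apply rgw foo

# Multisite RGW: the realm and zone must already exist; cephadm no longer
# creates them on your behalf.
ceph orch apply rgw myrgw --realm=myrealm --zone=myzone
```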

Comment 2 Ken Dreyer (Red Hat) 2021-03-19 18:23:59 UTC
Sage backported PR 39877 to pacific in https://github.com/ceph/ceph/pull/40135. This will be in the next weekly rebase I build downstream (March 22nd).

Comment 8 errata-xmlrpc 2021-08-30 08:28:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat Ceph Storage 5.0 bug fix and enhancement), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:3294