Bug 1761474

Summary: HEALTH_OK is reported with no managers (or OSDs) in the cluster
Product: [Red Hat Storage] Red Hat Ceph Storage
Reporter: Alfredo Deza <adeza>
Component: RADOS
Assignee: Neha Ojha <nojha>
Status: CLOSED ERRATA
QA Contact: Manohar Murthy <mmurthy>
Severity: high
Docs Contact: Aron Gunn <agunn>
Priority: medium
Version: 4.0
CC: agunn, ceph-eng-bugs, ceph-qe-bugs, dzafman, hyelloji, jdurgin, kchai, kdreyer, knortema, nojha, ratamir, tserlin
Target Milestone: rc
Flags: hyelloji: needinfo-
Target Release: 4.1   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: ceph-14.2.8-3.el8, ceph-14.2.8-3.el7
Doc Type: Bug Fix
Doc Text:
.A health warning status is reported when no Ceph Managers or OSDs are in the storage cluster
In previous {storage-product} releases, the storage cluster health status was `HEALTH_OK` even though there were no Ceph Managers or OSDs in the storage cluster. With this release, the health status has changed, and a health warning is reported if a storage cluster is not set up with Ceph Managers, or if all the Ceph Managers go down. Because {storage-product} relies heavily on the Ceph Manager to deliver key features, it is not advisable to run a Ceph storage cluster without Ceph Managers or OSDs.
Story Points: ---
Last Closed: 2020-05-19 17:31:11 UTC
Type: Bug
Bug Blocks: 1816167    

Description Alfredo Deza 2019-10-14 13:12:33 UTC
Description of problem: When a cluster has no OSDs or no managers, health is reported as HEALTH_OK:

  cluster:
    id:     97ce8ce8-811c-46ce-9682-ce535d9859ab
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 11m)
    mgr: no daemons active
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
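
The misleading status also shows up in machine-readable output, which is what monitoring tools typically consume. A minimal check, assuming a Nautilus-era ceph CLI and an admin keyring (the health.status JSON path is an assumption based on the usual "ceph -s" JSON layout):

  # Plain summary; prints HEALTH_OK on affected builds even with no mgr or OSDs:
  ceph health

  # JSON form, as a monitoring script might read it:
  ceph status --format json | python3 -c 'import json,sys; print(json.load(sys.stdin)["health"]["status"])'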


Version-Release number of selected component (if applicable): 14.2.X (any release, including the latest 14.2.4)


How reproducible: all the time


Steps to Reproduce:
1. Deploy Ceph with no managers or OSDs
2. Check the cluster health with "ceph -s" or "ceph health" (see the sketch below)
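
A quick way to reproduce without provisioning hardware is a monitor-only development cluster. This is only a sketch, assuming an upstream 14.2.x source build and that vstart.sh honours the usual MON/MGR/OSD/MDS count variables:

  # From the cmake build directory of a 14.2.x source tree (developer setup only):
  MON=3 MGR=0 OSD=0 MDS=0 ../src/vstart.sh -n -d

  # On affected builds this still reports HEALTH_OK:
  ./bin/ceph -s
  ./bin/ceph health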

Actual results: report is HEALTH_OK


Expected results: report is HEALTH_WARN or HEALTH_ERR
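
For illustration only, assuming the new checks surface as the upstream MGR_DOWN and TOO_FEW_OSDS health checks, the post-fix summary on the same mon-only, OSD-less cluster would look roughly like this (hypothetical output; exact wording may differ):

  $ ceph health detail
  HEALTH_WARN no active mgr; OSD count 0 < osd_pool_default_size 3
  MGR_DOWN no active mgr
  TOO_FEW_OSDS OSD count 0 < osd_pool_default_size 3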




Additional info:

Comment 1 RHEL Program Management 2019-10-14 13:12:39 UTC
Please specify the severity of this bug. Severity is defined here:
https://bugzilla.redhat.com/page.cgi?id=fields.html#bug_severity.

Comment 2 Yaniv Kaul 2019-10-14 13:44:56 UTC
Severity?

Comment 5 Boris Ranto 2019-10-14 16:07:44 UTC
This is more of a core ceph/rados thing, re-targeting.

Comment 6 Yaniv Kaul 2020-01-08 14:42:25 UTC
Has anyone looked at this? I assume it's because the mgr is not available?

Comment 7 Alfredo Deza 2020-01-08 15:08:29 UTC
I don't know if anyone has looked at this and I am not sure why this is happening.

Comment 8 Josh Durgin 2020-01-08 15:40:07 UTC
This was by design when ceph-mgr was created: the idea at the time was to avoid spurious warnings during cluster setup, and at that point ceph-mgr was not necessary for much functionality. At this point, ceph-mgr is doing much more. Currently the health status is only affected if a mgr was ever running; it seems removing this condition, so you get an error after there has been no mgr for some time, would resolve this.
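
As a rough illustration of where that gating is visible from an operator's point of view (the commands are the standard ceph CLI; the mgr map field names and the grace-period option names are assumptions based on upstream Nautilus defaults):

  # Inspect the mgr map; on a cluster where no mgr has ever registered,
  # "available" is expected to be false and "active_name" empty (field
  # names assumed from Nautilus output):
  ceph mgr stat
  ceph mgr dump

  # Grace periods that gate the MGR_DOWN warning once a mgr disappears
  # (option names assumed from upstream; values are in seconds):
  ceph config get mon mon_mgr_inactive_grace
  ceph config get mon mon_mgr_mkfs_grace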

Moving to 4.1 since this is not a blocker for 4.0 (same behavior as 3.x).

Comment 10 Josh Durgin 2020-03-23 20:45:40 UTC
included in 14.2.8 rebase

Comment 21 errata-xmlrpc 2020-05-19 17:31:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:2231