Bug 2125123

Summary: [TRACKER] ceph osd tree shows some OSDs as up when all the OSDs in the cluster are scaled down
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Rachael <rgeorge>
Component: rook
Assignee: Travis Nielsen <tnielsen>
Status: CLOSED NOTABUG
QA Contact: Neha Berry <nberry>
Severity: medium
Priority: unspecified
Version: 4.10
CC: aeyal, dbindra, fbalak, madam, mmuench, nberry, ocs-bugs, odf-bz-bot, omitrani
Target Milestone: ---
Flags: rgeorge: needinfo? (nberry); fbalak: needinfo? (nberry)
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Clone Of: 2072900
Last Closed: 2022-10-17 15:09:50 UTC
Bug Depends On: 2072900    

Comment 2 Travis Nielsen 2022-09-08 17:41:11 UTC
The OSD up/down status is updated by the OSDs performing health checks on each other. If all of the OSDs are down, no peers remain to report failures, so the up/down status will no longer be updated and is expected to be inaccurate.

This has always been the behavior for Ceph, and it is out of Rook's control. What is the actual scenario here? Are you trying to rely on the OSD up/down status? If so, you'll need a different monitoring check for OSDs being down.
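
For illustration only (not part of this bug's discussion): one way to build such an alternative check is to compare the up flags Ceph reports against the state of the Rook OSD pods themselves. A minimal Python sketch follows; it assumes the ceph CLI is reachable (e.g., from the Rook toolbox pod) and that the OSD pods carry Rook's app=rook-ceph-osd label in the openshift-storage namespace; adjust both for your deployment.

#!/usr/bin/env python3
# Sketch: cross-check Ceph's OSD up/down flags against the actual Rook
# OSD pod states, since "ceph osd tree" can report stale "up" flags when
# every OSD is down (no peers remain to report failures).
# Assumptions: ceph CLI reachable; OSD pods labeled app=rook-ceph-osd in
# the openshift-storage namespace (adjust for your deployment).
import json
import subprocess

def ceph_osd_up_flags():
    """Return {osd_id: up?} as currently reported by the Ceph monitors."""
    out = subprocess.check_output(["ceph", "osd", "dump", "--format", "json"])
    return {o["osd"]: bool(o["up"]) for o in json.loads(out)["osds"]}

def running_osd_pods(namespace="openshift-storage"):
    """Count Rook OSD pods whose phase is actually Running."""
    out = subprocess.check_output([
        "oc", "get", "pods", "-n", namespace,
        "-l", "app=rook-ceph-osd", "-o", "json",
    ])
    return sum(1 for p in json.loads(out)["items"]
               if p["status"].get("phase") == "Running")

if __name__ == "__main__":
    flags = ceph_osd_up_flags()
    up, total = sum(flags.values()), len(flags)
    running = running_osd_pods()
    print(f"ceph reports {up}/{total} OSDs up; {running} OSD pods Running")
    if up > running:
        # The all-OSDs-down case from this bug: Ceph still says "up"
        # because no surviving OSD could report the failures.
        print("WARNING: up count exceeds running pods; flags may be stale")

The point of the comparison is that the pod phase comes from Kubernetes, which keeps reporting accurately even when every OSD is down, whereas Ceph's up flags depend on surviving OSDs to report their peers' failures.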

Comment 4 Travis Nielsen 2022-09-26 15:18:28 UTC
Any feedback on the scenario, or shall we close this?

Comment 7 Travis Nielsen 2022-10-17 15:09:50 UTC
Please reopen if there are more details on the requirement.