Bug 2120601

Summary: [GSS] [4.10.z-Clone] ceph cluster unresponsive when 2 nodes of the same zone are down in stretch cluster
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation Reporter: Sunil Kumar Acharya <sheggodu>
Component: rook Assignee: Travis Nielsen <tnielsen>
Status: CLOSED ERRATA QA Contact: Mahesh Shetty <mashetty>
Severity: high Docs Contact:
Priority: unspecified    
Version: 4.10CC: bkunal, bniver, etamir, hnallurv, jfindysz, kramdoss, madam, mashetty, mhackett, mmuench, muagarwa, ocs-bugs, odf-bz-bot, olakra, pdhange, pdhiran, racpatel, sarora, srai, tdesala, tnielsen, vkolli, vumrao
Target Milestone: ---   
Target Release: ODF 4.10.6   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Previously, the Ceph cluster would become unresponsive when two nodes of the same zone were down in a stretch cluster. If the operator restarted in the middle of a mon failover, multiple mons could be started on the same node, reducing mon quorum availability: two mons could end up on the same node instead of being spread across unique nodes. With this update, the operator cancels a mon failover that times out. In addition, if an extra mon is started during an operator restart, the extra mon is removed based on topology, so that mons do not run on the same node or in the same zone and the optimal topology spread is maintained.
Story Points: ---
Clone Of: 2113062
Environment:
Last Closed: 2022-09-21 17:29:37 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 2113062    
Bug Blocks: 2120598    
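The topology-based removal of an extra mon described in the Doc Text can be sketched as follows. This is a simplified illustration of the idea, not the actual Rook operator code; the `monPlacement` type and the `extraMonToRemove` helper are hypothetical names introduced here for the example.

```go
package main

import "fmt"

// monPlacement records where a mon daemon is scheduled.
type monPlacement struct {
	Name string
	Node string
	Zone string
}

// extraMonToRemove picks a mon to evict when more mons are running than
// desired. It prefers removing a mon that shares a node with another mon,
// then one that shares a zone, so the surviving mons keep a unique-node,
// unique-zone spread. It returns "" if the spread is already optimal.
func extraMonToRemove(mons []monPlacement) string {
	nodeCount := map[string]int{}
	zoneCount := map[string]int{}
	for _, m := range mons {
		nodeCount[m.Node]++
		zoneCount[m.Zone]++
	}
	// First pass: a mon doubled up on the same node is the worst offender.
	for _, m := range mons {
		if nodeCount[m.Node] > 1 {
			return m.Name
		}
	}
	// Second pass: fall back to a mon doubled up in the same zone.
	for _, m := range mons {
		if zoneCount[m.Zone] > 1 {
			return m.Name
		}
	}
	return ""
}

func main() {
	mons := []monPlacement{
		{"a", "node1", "zone1"},
		{"b", "node2", "zone2"},
		{"c", "node2", "zone2"}, // doubled up after an operator restart
	}
	fmt.Println(extraMonToRemove(mons)) // one of the mons sharing node2
}
```

Removing a doubled-up mon rather than an arbitrary one is what keeps quorum resilient in a stretch cluster: if two mons share a zone and that zone fails, both votes are lost at once.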

Comment 15 errata-xmlrpc 2022-09-21 17:29:37 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.10.6 Bug Fix Update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:6675