Bug 2187656

Summary: [GSS] After adding 3 new OSDs the old ones are down, unable to read the superblock
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: amansan <amanzane>
Component: ceph
Sub component: RADOS
Assignee: Radoslaw Zarzynski <rzarzyns>
QA Contact: Elad <ebenahar>
Status: CLOSED INSUFFICIENT_DATA
Severity: high
Priority: high
CC: bhubbard, bhull, bkunal, bmcmurra, bniver, hnallurv, khover, mduasope, muagarwa, odf-bz-bot, rzarzyns, sostapov, tnielsen, vumrao
Version: 4.10
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Doc Type: If docs needed, set a value
Story Points: ---
Last Closed: 2023-09-18 13:08:18 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Category: ---

Description amansan 2023-04-18 11:02:47 UTC
Description of problem (please be as detailed as possible and provide log snippets):

After adding 3 new OSDs, the old ones went into CrashLoopBackOff (CLBO), unable to read the superblock.
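
For reference, a minimal way to gather the relevant state from an ODF cluster (a sketch assuming the default openshift-storage namespace and the rook-ceph toolbox pod; the pod names in angle brackets are placeholders, not values from this case):

  # List the OSD pods and confirm which ones are in CrashLoopBackOff
  oc -n openshift-storage get pods -l app=rook-ceph-osd

  # Capture the logs of a failing OSD pod; any superblock read error would typically appear here
  oc -n openshift-storage logs <rook-ceph-osd-pod> --previous

  # From the toolbox pod, check overall cluster health and the OSD tree
  oc -n openshift-storage rsh <rook-ceph-tools-pod>
  ceph status
  ceph osd tree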

Version of all relevant components (if applicable):

ODF 4.10

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?

Yes, the data is on the old OSDs, and they failed before the data could be relocated.

Is there any workaround available to the best of your knowledge?

No


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

3


Actual results:

The OSDs are in CLBO.

Expected results:

The OSDs should be running.