Bug 2189925 - [GSS] Problems recovering the environment in a double ODF node failure
Summary: [GSS] Problems recovering the environment in a double ODF node failure
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: rook
Version: 4.10
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Travis Nielsen
QA Contact: Neha Berry
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-04-26 13:14 UTC by amansan
Modified: 2023-08-09 17:03 UTC
5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-06-15 06:59:19 UTC
Embargoed:



Description amansan 2023-04-26 13:14:29 UTC
Description of problem (please be as detailed as possible and provide log snippets):

The customer is running tests to establish a procedure for recovering the environment after a simultaneous failure of two ODF nodes.

Version of all relevant components (if applicable):

ODF 4.10

Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?

The environment has to go into production, but due to this problem it cannot.

Is there any workaround available to the best of your knowledge?

No. We have to power one of the nodes back on to recover access, but that is not a workaround because in a real failure we would not be able to do that.

Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?

2

Is this issue reproducible?

Yes, in the customer environment.

Steps to Reproduce:
1. Install OCP / ODF 4.10
2. Shut down 2 of the 3 ODF nodes at the same time
3. Replace one of the nodes
4. Try to follow the mon single-quorum recovery procedure
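
For context, the mon single-quorum procedure referenced in step 4 (the Rook/Ceph disaster-recovery flow for restoring quorum from a single surviving mon) roughly follows the sketch below. The mon names (`a` surviving, `b`/`c` lost with the two failed nodes) and the `rook-ceph` namespace are assumptions for illustration; the authoritative steps are in the Rook disaster-recovery documentation.

```shell
# Rough sketch: restore mon quorum from a single surviving mon.
# Assumed: mon "a" survives; mons "b" and "c" were on the two failed nodes.

# 1. Stop the Rook operator so it does not revert manual changes.
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=0

# 2. Rewrite the monmap inside the surviving mon pod to drop the dead mons.
#    (The Rook guide first patches the mon deployment to sleep so the mon
#    store is not held open by a running ceph-mon daemon.)
kubectl -n rook-ceph exec deploy/rook-ceph-mon-a -- bash -c '
  ceph-mon -i a --extract-monmap /tmp/monmap \
    --mon-data /var/lib/ceph/mon/ceph-a &&
  monmaptool /tmp/monmap --rm b --rm c &&
  ceph-mon -i a --inject-monmap /tmp/monmap \
    --mon-data /var/lib/ceph/mon/ceph-a
'

# 3. Trim the mon endpoints configmap so clients only see the good mon.
kubectl -n rook-ceph edit configmap rook-ceph-mon-endpoints  # keep only mon a

# 4. Restart the mon and bring the operator back; it recreates mons b and c.
kubectl -n rook-ceph delete pod -l mon=a
kubectl -n rook-ceph scale deployment rook-ceph-operator --replicas=1
```

This is an ops runbook against a live cluster, not a standalone script; with only one of three nodes still up, the blocking question in this bug is whether the surviving mon's store is reachable at all in step 2.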


Actual results:

The procedure fails every time.


Expected results:

Be able to follow the procedure to recover the environment.

