Bug 2276694

Summary: Expose the upgrade setting for a longer timeout waiting for healthy OSDs before continuing
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Component: ocs-operator
Version: 4.14
Target Release: ODF 4.16.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: urgent
Priority: unspecified
Fixed In Version: 4.16.0-118
Doc Type: No Doc Update
Type: Bug
Reporter: Travis Nielsen <tnielsen>
Assignee: Nikhil Ladha <nladha>
QA Contact: Petr Balogh <pbalogh>
CC: bkunal, muagarwa, nberry, nladha, odf-bz-bot, pbalogh
Clones: 2276824
Bug Blocks: 2276824
Last Closed: 2024-07-17 13:20:50 UTC

Description Travis Nielsen 2024-04-23 18:34:39 UTC
Description of problem (please be as detailed as possible and provide log
snippets):

During upgrades, ODF currently waits up to 10 minutes after upgrading each OSD to verify the OSDs are healthy before continuing. If the PGs are all healthy within those 10 minutes, the upgrade continues without issue. If the PGs are still unhealthy 10 minutes after an OSD is upgraded, Rook continues with the upgrade of the next OSD anyway. If multiple OSDs end up down at the same time, this can temporarily cause data availability issues.

Rook has an option to increase the 10-minute timeout. We need to expose it to give customers flexibility over this sensitivity. The setting is waitTimeoutForHealthyOSDInMinutes.

For more discussion see https://issues.redhat.com/browse/RHSTOR-5734.
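
For illustration, a rough sketch of how the option might be surfaced on the StorageCluster CR so that ocs-operator propagates it to the managed CephCluster. The placement under managedResources.cephCluster, the resource names, and the 30-minute value are assumptions for this example; only waitTimeoutForHealthyOSDInMinutes itself comes from the Rook CephCluster spec (default 10 minutes).

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster          # typical default name in ODF
  namespace: openshift-storage
spec:
  managedResources:
    cephCluster:
      # Assumed placement: wait up to 30 minutes (instead of the default 10)
      # for PGs to become healthy after each OSD upgrade before proceeding
      # to the next OSD. Would map to spec.waitTimeoutForHealthyOSDInMinutes
      # on the managed CephCluster CR.
      waitTimeoutForHealthyOSDInMinutes: 30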


Version of all relevant components (if applicable):

All ODF versions currently have this 10-minute timeout.


Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?

No

Is there any workaround available to the best of your knowledge?

No, except by editing the CephCluster directly and disabling the cluster reconcile (a very heavy-handed approach with other side effects).
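
As a rough sketch of that workaround (assuming the default ODF resource names and an arbitrary 30-minute value): set the Rook option on the CephCluster CR directly, keeping in mind that ocs-operator will revert the field on its next reconcile unless its management of the CephCluster is disabled.

# Edit with e.g. `oc edit cephcluster ocs-storagecluster-cephcluster -n openshift-storage`
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: ocs-storagecluster-cephcluster   # typical default name in ODF
  namespace: openshift-storage
spec:
  # Rook waits this long for healthy PGs after upgrading each OSD; default is 10.
  waitTimeoutForHealthyOSDInMinutes: 30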


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Can this issue be reproduced?

Yes

Can this issue be reproduced from the UI?

NA

If this is a regression, please provide more details to justify this:

NA

Steps to Reproduce:
1. Install ODF
2. Upgrade ODF while PGs are unhealthy
3. Observe an IO pause while multiple OSDs may be down at the same time.


Actual results:

An IO pause during the upgrade when OSDs do not become healthy within the timeout.


Expected results:

Full data availability during upgrades.


Additional info:

Comment 24 Travis Nielsen 2024-06-03 18:06:59 UTC
Thanks Petr!

Comment 28 errata-xmlrpc 2024-07-17 13:20:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Red Hat OpenShift Data Foundation 4.16.0 security, enhancement & bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2024:4591