Bug 2277926 - [4.13.z clone] Expose the upgrade setting for a longer timeout waiting for healthy OSDs before continuing
Summary: [4.13.z clone] Expose the upgrade setting for a longer timeout waiting for healthy OSDs before continuing
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: ODF 4.13.9
Assignee: Nikhil Ladha
QA Contact: Elad
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2024-04-30 06:05 UTC by Nikhil Ladha
Modified: 2024-06-12 11:48 UTC (History)
CC: 7 users

Fixed In Version: 4.13.9-3
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2024-06-12 11:48:37 UTC
Embargoed:




Links
- GitHub red-hat-storage/ocs-operator pull 2590 (open): Bug 2277926: [release-4.13] Add waitTimeoutForHealthyOSDInMinutes field in the storagecluster CR (last updated 2024-05-02 15:11:18 UTC)
- GitHub red-hat-storage/ocs-operator pull 2642 (open): Bug 2277926: [release-4.13] Remove default value of waitTimeoutForHealthyOSDInMinutes (last updated 2024-05-30 15:45:26 UTC)
- Red Hat Product Errata RHBA-2024:3865 (last updated 2024-06-12 11:48:41 UTC)

Description Nikhil Ladha 2024-04-30 06:05:06 UTC
This bug was initially created as a copy of Bug #2276694


Description of problem (please be as detailed as possible and provide log
snippets):

During an upgrade, ODF currently waits up to 10 minutes after upgrading each OSD to verify it is healthy before continuing. If the PGs are all healthy within 10 minutes, the upgrade continues without any issue. If the PGs are still unhealthy 10 minutes after upgrading an OSD, Rook continues with the upgrade of the next OSD anyway. If multiple OSDs end up being down at the same time, this can temporarily cause data availability issues.

Rook has an option to increase the 10-minute timeout. We need to expose it to give customers flexibility over this sensitivity. The setting is waitTimeoutForHealthyOSDInMinutes.

For more discussion see https://issues.redhat.com/browse/RHSTOR-5734.
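
For illustration, below is a minimal sketch of how the exposed setting could be set on the StorageCluster CR once the linked pull request lands. The field name comes from this report; its placement under spec.managedResources.cephCluster and the value shown are assumptions, not confirmed here.

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  managedResources:
    cephCluster:
      # Assumed placement: wait up to 30 minutes (instead of the 10-minute
      # default) for OSDs/PGs to become healthy before upgrading the next OSD.
      waitTimeoutForHealthyOSDInMinutes: 30

The ocs-operator would then be expected to propagate this value to the underlying CephCluster CR it manages.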


Version of all relevant components (if applicable):

All ODF versions currently have this 10-minute timeout.


Does this issue impact your ability to continue to work with the product
(please explain in detail what the user impact is)?

No

Is there any workaround available to the best of your knowledge?

No, except by editing the CephCluster directly and disabling the cluster reconcile (a very heavy-handed approach with other side effects).
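
For context, the field that this workaround edits directly on the Rook CephCluster CR is waitTimeoutForHealthyOSDInMinutes in the cluster spec. A minimal sketch follows; the object name and namespace are the usual ODF defaults and are shown here as assumptions, and the separate step of disabling the ocs-operator reconcile is not shown.

apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: ocs-storagecluster-cephcluster   # typical ODF name, assumed
  namespace: openshift-storage
spec:
  # Rook waits this many minutes for healthy OSDs/PGs before proceeding
  # to upgrade the next OSD; Rook's default is 10.
  waitTimeoutForHealthyOSDInMinutes: 30

Without disabling the reconcile, the ocs-operator would overwrite this edit, which is why the workaround is considered heavy-handed.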


Rate from 1 - 5 the complexity of the scenario you performed that caused this
bug (1 - very simple, 5 - very complex)?


Is this issue reproducible?

Yes

Can this issue be reproduced from the UI?

NA

If this is a regression, please provide more details to justify this:

NA

Steps to Reproduce:
1. Install ODF
2. Upgrade ODF while PGs are unhealthy
3. Observe an IO pause while multiple OSDs are down at the same time.


Actual results:

IO pauses during the upgrade if OSDs are not healthy, as expected with the current 10-minute timeout.


Expected results:

Full data availability during upgrades.


Additional info:

Comment 15 errata-xmlrpc 2024-06-12 11:48:37 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.13.9 Bug Fix Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:3865

