Bug 2115558 - [RFE] the upgrade should set noout, nodeep-scrub and noscrub and unset when upgrade will complete
Summary: [RFE] the upgrade should set noout, nodeep-scrub and noscrub and unset when upgrade will complete
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: rook
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: Subham Rai
QA Contact: Neha Berry
URL:
Whiteboard:
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2022-08-04 22:17 UTC by Vikhyat Umrao
Modified: 2023-08-09 17:03 UTC (History)
4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-02-23 03:41:49 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github rook rook issues 10619 0 None open In Ceph - the upgrade should set noout, nodeep-scrub and noscrub and unset when upgrade will complete 2022-08-04 22:17:01 UTC

Description Vikhyat Umrao 2022-08-04 22:17:01 UTC
Description of problem (please be detailed as possible and provide log
snippets):


[RFE] the upgrade should set noout, nodeep-scrub, and noscrub, and unset them when the upgrade completes.

Version of all relevant components (if applicable):
ODF 4.11

Upstream feature ticket - https://github.com/rook/rook/issues/10619
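
For reference, the flags named in this RFE are managed with the standard Ceph CLI. A minimal sketch of the requested behavior, assuming a reachable cluster and the `ceph` binary on the PATH (not executable in isolation, since it requires a live cluster):

```shell
# Before the upgrade: keep OSDs from being marked out during restarts,
# and pause (deep-)scrubbing so scrub I/O does not compete with the upgrade.
ceph osd set noout
ceph osd set noscrub
ceph osd set nodeep-scrub

# ... perform the upgrade ...

# After the upgrade completes: restore normal behavior.
ceph osd unset noout
ceph osd unset noscrub
ceph osd unset nodeep-scrub

# Verify that no flags remain set.
ceph osd dump | grep flags
```

The RFE asks Rook to issue the equivalent of these commands automatically around its upgrade reconcile, rather than requiring the administrator to run them by hand.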

Comment 3 Travis Nielsen 2022-09-26 15:24:12 UTC
Moving this RFE to 4.13.

Comment 5 Travis Nielsen 2023-01-10 15:08:09 UTC
Vikhyat What is the priority on this? How often is the issue seen? It seems to keep moving down the priority list.

Comment 6 Travis Nielsen 2023-01-24 15:05:58 UTC
Moving out to 4.14

Comment 9 Vikhyat Umrao 2023-02-23 03:41:49 UTC
(In reply to Travis Nielsen from comment #5)
> Vikhyat What is the priority on this? How often is the issue seen? It seems
> to keep moving down the priority list.

Maybe we can close this one. I have seen that cephadm does not use these flags, and large clusters have been doing well recently on the 5.x releases, so I think we should be good. In special cases, we can instruct customers/users to apply these flags manually when we troubleshoot issues.

