Bug 1959171 - [GSS] manually repairing inconsistent objects in OCS
Summary: [GSS] manually repairing inconsistent objects in OCS
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ceph
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: kelwhite
QA Contact: Harish NV Rao
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-05-10 20:06 UTC by kelwhite
Modified: 2023-08-09 16:37 UTC
CC: 19 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-01-31 16:32:38 UTC
Embargoed:




Links
Red Hat Knowledge Base (Solution) 6523031 (last updated 2021-12-15 10:46:14 UTC)

Comment 3 Josh Durgin 2021-05-14 22:28:03 UTC
As discussed with support, fixing these kinds of issues requires running ceph-objectstore-tool against the OSD's disk while the OSD is offline. Sebastien, is there a way to do this in this version of OCS?

Comment 4 Sébastien Han 2021-05-17 08:51:00 UTC
Yes, we need to:

* remove the liveness probe: oc patch deployment rook-ceph-osd-<OSD_ID> --type='json' -p '[{"op":"remove", "path":"/spec/template/spec/containers/0/livenessProbe"}]'
* replace the osd container command with sleep: oc patch deployment rook-ceph-osd-<OSD_ID> -p '{"spec": {"template": {"spec": {"containers": [{"name": "osd", "command": ["sleep", "infinity"], "args": []}]}}}}'
* exec into the container: oc exec -ti deploy/rook-ceph-osd-<OSD_ID> -- bash
* run the "ceph-objectstore-tool" command against the OSD block device
* once maintenance is done, restart the rook-ceph operator; the OSD deployment changes will be reverted
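
For reference, a minimal sketch of the sequence above as shell commands. It assumes the openshift-storage namespace, OSD_ID=0 as a placeholder, the standard Rook OSD data path, and the usual app=rook-ceph-operator pod label; the actual ceph-objectstore-tool invocation depends on the repair being performed (--op list-pgs is shown only as an illustration):

OSD_ID=0                      # placeholder; substitute the affected OSD id
NS=openshift-storage          # assumed ODF/OCS namespace

# Remove the liveness probe so the pod is not restarted during maintenance
oc -n $NS patch deployment rook-ceph-osd-$OSD_ID --type='json' -p '[{"op":"remove", "path":"/spec/template/spec/containers/0/livenessProbe"}]'

# Keep the container running without starting the OSD daemon
oc -n $NS patch deployment rook-ceph-osd-$OSD_ID -p '{"spec": {"template": {"spec": {"containers": [{"name": "osd", "command": ["sleep", "infinity"], "args": []}]}}}}'

# Exec into the container and run ceph-objectstore-tool against the offline OSD
oc -n $NS exec -ti deploy/rook-ceph-osd-$OSD_ID -- bash
# data path assumed; adjust to the OSD's actual data dir / block device, and
# replace list-pgs with the operation required for the repair
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-$OSD_ID --op list-pgs

# When maintenance is done, restart the rook-ceph operator so it reverts the
# OSD deployment changes (operator pod label assumed)
oc -n $NS delete pod -l app=rook-ceph-operator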

Thanks.

Comment 13 Scott Ostapovicz 2021-09-07 14:03:37 UTC
Still waiting for an update.

