Bug 2094272 - [DDF] Add an example here, because it's not clear what is the syntax of "failed-osd-id1". It's the number only of the
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: documentation
Version: 4.9
Hardware: All
OS: All
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ODF 4.9.z Async
Assignee: Agil Antony
QA Contact: Oded
URL:
Whiteboard:
Depends On: 1995530 2094271 2128435
Blocks: 2094273
 
Reported: 2022-06-07 10:25 UTC by Agil Antony
Modified: 2023-08-09 16:43 UTC (History)
CC: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2094271
: 2094273
Environment:
Last Closed: 2023-03-09 12:47:11 UTC
Embargoed:



Comment 7 Oded 2022-11-20 13:44:53 UTC
Doc fixed

https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.9/html-single/replacing_devices/index#replacing-operational-or-failed-storage-devices-on-vmware-infrastructure_rhodf
https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.9/html-single/replacing_nodes/index#replacing-an-operational-aws-node-upi_rhodf

Remove the old OSD from the cluster.

$ oc process -n openshift-storage ocs-osd-removal \
-p FAILED_OSD_IDS=<failed_osd_id> FORCE_OSD_REMOVAL=false | oc create -n openshift-storage -f -
where <failed_osd_id> is the integer in the pod name immediately after the rook-ceph-osd prefix. You can pass comma-separated OSD IDs in the command to remove more than one OSD, for example, FAILED_OSD_IDS=0,1,2.
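The "integer immediately after the rook-ceph-osd prefix" can be extracted with plain shell parameter expansion. A minimal sketch, using a hypothetical pod name (real names come from `oc get pods -n openshift-storage -l app=rook-ceph-osd`):

```shell
# Hypothetical OSD pod name for illustration only.
POD_NAME="rook-ceph-osd-0-6d77d6c7c6-m8xyz"

# Strip the "rook-ceph-osd-" prefix, then keep everything
# up to the next "-": that is the OSD ID.
FAILED_OSD_ID="${POD_NAME#rook-ceph-osd-}"
FAILED_OSD_ID="${FAILED_OSD_ID%%-*}"

echo "$FAILED_OSD_ID"   # prints: 0
```

The resulting value is what you would pass as FAILED_OSD_IDS in the ocs-osd-removal template above.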

