Bug 2099601

Summary: [DDF] - Add steps similar to Step 17 in doc [1].
Product: [Red Hat Storage] Red Hat OpenShift Data Foundation
Reporter: Direct Docs Feedback <ddf-bot>
Component: documentation
Assignee: Kusuma <kbg>
Status: VERIFIED
QA Contact: Sidhant Agrawal <sagrawal>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: 4.10
CC: aaaggarw, agantony, aivaras.laimikis, asriram, kbg, kramdoss, odf-bz-bot, oviner, sagrawal
Target Milestone: ---
Keywords: ZStream
Target Release: ODF 4.10.13
Hardware: All
OS: All
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Clones: 2125210 2125212 2125213 (view as bug list)
Environment:
Last Closed:
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 2125210, 2125213, 2125214, 2125212, 2128436

Description Direct Docs Feedback 2022-06-21 10:12:21 UTC
- Add steps similar to Step 17 in doc [1].


Current: 
$ oc process -n openshift-storage ocs-osd-removal \
-p FAILED_OSD_IDS=failed-osd-id1,failed-osd-id2 | oc create -f -


Required:

Identify the PVC, because afterwards the PV associated with that specific PVC needs to be deleted.

$ osd_id_to_remove=1
$ oc get -n openshift-storage -o yaml deployment rook-ceph-osd-${osd_id_to_remove} | grep ceph.rook.io/pvc
where osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-1.

Example output:

ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc
In this example, the PVC name is ocs-deviceset-localblock-0-data-0-g2mmc.
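The PVC name can also be pulled out of the annotation line programmatically; a minimal sketch, assuming the example output above (the oc commands are shown commented out, since they need a live cluster):

```shell
# Example annotation line as returned by the grep in the step above.
line='    ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc'
# Extract just the PVC name (the value after the colon-space separator).
pvc_name=$(echo "${line}" | awk -F': ' '{print $2}')
echo "${pvc_name}"
# On a live cluster, the bound PV could then be located and later deleted:
# oc get pv | grep "${pvc_name}"
# oc delete pv <pv-name-from-output>
```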

Remove the failed OSD from the cluster.

$ oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${osd_id_to_remove} | oc create -f -
You can remove more than one OSD by adding comma-separated OSD IDs to the command (for example, FAILED_OSD_IDS=0,1,2).
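When several OSDs have failed, the comma-separated FAILED_OSD_IDS value can be built from a shell array; a minimal sketch (the OSD IDs here are illustrative, and the oc commands are commented out since they need a live cluster):

```shell
# Illustrative failed OSD IDs; substitute the real IDs for your cluster.
failed_osds=(0 1 2)
# Join the array elements with commas to form the FAILED_OSD_IDS value.
FAILED_OSD_IDS=$(IFS=,; echo "${failed_osds[*]}")
echo "${FAILED_OSD_IDS}"
# The removal job would then be created with:
# oc process -n openshift-storage ocs-osd-removal \
#   -p FAILED_OSD_IDS="${FAILED_OSD_IDS}" | oc create -f -
```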



[1]https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.8/html-single/replacing_nodes/index#replacing-storage-nodes-on-ibm-power-infrastructure_ibm-power

Reported by: rhn-support-prpandey

https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.10/html/replacing_nodes/openshift_data_foundation_deployed_using_local_storage_devices#annotations:bca9554b-0452-48a6-90d5-4c4ff9c6486d

Comment 2 Aaruni Aggarwal 2022-07-01 07:29:47 UTC
@prpandey, the content that you are requesting be added to the IBM Power guide (https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.8/html-single/replacing_nodes/index#replacing-storage-nodes-on-ibm-power-infrastructure_ibm-power) already exists in Step 19.

Could you please let me know which guide you want me to change?

Comment 4 Aaruni Aggarwal 2022-07-06 11:14:13 UTC
@asriram, as Priya mentioned, the changes need to be made in the Baremetal guide, so this bug will be assigned to someone else.