- Add steps similar to Step 17 in doc [1].

Current:

$ oc process -n openshift-storage ocs-osd-removal \
    -p FAILED_OSD_IDS=failed-osd-id1,failed-osd-id2 | oc create -f -

Required:

1. Identify the PVC, because afterwards we need to delete the PV associated with that specific PVC.

   $ osd_id_to_remove=1
   $ oc get -n openshift-storage -o yaml deployment rook-ceph-osd-${osd_id_to_remove} | grep ceph.rook.io/pvc

   where osd_id_to_remove is the integer in the pod name immediately after the rook-ceph-osd prefix. In this example, the deployment name is rook-ceph-osd-1.

   Example output:

   ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc
   ceph.rook.io/pvc: ocs-deviceset-localblock-0-data-0-g2mmc

   In this example, the PVC name is ocs-deviceset-localblock-0-data-0-g2mmc.

2. Remove the failed OSD from the cluster.

   $ oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=${osd_id_to_remove} | oc create -f -

   You can remove more than one OSD by adding comma-separated OSD IDs to the command (for example, FAILED_OSD_IDS=0,1,2).

[1] https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.8/html-single/replacing_nodes/index#replacing-storage-nodes-on-ibm-power-infrastructure_ibm-power

Reported by: rhn-support-prpandey
https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/4.10/html/replacing_nodes/openshift_data_foundation_deployed_using_local_storage_devices#annotations:bca9554b-0452-48a6-90d5-4c4ff9c6486d
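For context, the follow-on step that motivates identifying the PVC could look something like the sketch below. This is a hedged outline, not the published procedure: the job name ocs-osd-removal-job varies between OCS/ODF releases, the jsonpath lookup is one way of finding the bound PV, and the PVC name reuses the example from above.

# Hedged sketch: confirm the removal job finished, then delete the PV
# that was bound to the identified PVC.
# Assumption: the removal job is named ocs-osd-removal-job (name differs
# across OCS/ODF versions).
$ oc get pod -l job-name=ocs-osd-removal-job -n openshift-storage
# Wait until the job pod shows Completed before proceeding.

# Assumption: pvc_name is the example PVC identified earlier.
$ pvc_name=ocs-deviceset-localblock-0-data-0-g2mmc
# Find the PV whose claimRef points at that PVC.
$ pv_name=$(oc get pv -o jsonpath="{.items[?(@.spec.claimRef.name=='${pvc_name}')].metadata.name}")
$ echo ${pv_name}

# Delete the PV once it is no longer bound (typically shown as Released).
$ oc get pv ${pv_name}
$ oc delete pv ${pv_name}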
@prpandey, the content you are requesting already exists in Step 19 of the IBM Power guide (https://access.redhat.com/documentation/en-us/red_hat_openshift_container_storage/4.8/html-single/replacing_nodes/index#replacing-storage-nodes-on-ibm-power-infrastructure_ibm-power). Could you please let me know which guide you want me to change?
@asriram, as Priya mentioned, the changes need to be made in the Bare Metal guide, so this bug will be assigned to someone else.