I will try to check this bug on Monday.
Currently I don't have a cluster with vSphere LSO 4.6, so I can't test it today.
If someone else has a cluster with the configuration above, let me know.
If not, I will try to deploy a new cluster tomorrow.
I used a vSphere LSO 4.6 cluster to check the bug.
Steps I did to reproduce the bug:
1. Scale down the osd-0 deployment by executing this command:
$ oc scale deployment rook-ceph-osd-0 --replicas=0 -n openshift-storage
2. Check that only 2 OSDs are up instead of 3 (see the verification commands after this list).
3. Go to Compute -> Nodes.
4. Click on the Disks tab.
5. Click on the kebab action of the disk with "Not responding" status -> Start Disk Replacement.
6. Verify that the "rebalancing is in progress" warning is shown, but the replacement can still be performed (screenshot attached).
7. Click on the Replace button and see that the disk can be replaced (screenshot attached).
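For reference, this is roughly how the OSD count and the rebalancing state can be verified from the CLI. The app=rook-ceph-osd and app=rook-ceph-tools labels are the usual ones in the openshift-storage namespace; adjust them if your deployment differs.

# List the OSD pods; after the scale-down only 2 of the 3 should be Running.
$ oc get pods -n openshift-storage -l app=rook-ceph-osd

# Check Ceph health and the recovery/rebalancing state via the toolbox pod.
$ TOOLS_POD=$(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name | head -n 1)
$ oc rsh -n openshift-storage "$TOOLS_POD" ceph status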
Client Version: 4.3.8
Server Version: 4.6.0-0.nightly-2020-10-20-172149
Kubernetes Version: v1.19.0+d59ce34
ocs-operator.v4.6.0-134.ci   OpenShift Container Storage   4.6.0-134.ci   Succeeded

NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-10-20-172149   True        False         20h     Cluster version is 4.6.0-0.nightly-2020-10-20-172149
ceph version 14.2.8-111.el8cp (2e6029d57bc594eceba4751373da6505028c2650) nautilus (stable)
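For context, the version details above can be collected with commands along these lines (the toolbox invocation for the ceph version is my assumption about how it was gathered):

$ oc version
$ oc get csv -n openshift-storage
$ oc get clusterversion
$ oc rsh -n openshift-storage "$TOOLS_POD" ceph version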
Created attachment 1723412
Created attachment 1723413
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.