I tested it with an AWS 4.9 cluster, using the following steps:

1. Switch to the openshift-storage project:
$ oc project openshift-storage
Now using project "openshift-storage" on server "https://api.ikave-aws6-49.qe.rh-ocs.com:6443".

2. Run the ocs-osd-removal job:
$ oc process FAILED_OSD_IDS=1 ocs-osd-removal | oc create -f -
job.batch/ocs-osd-removal-job created

3. Check that the job was created successfully:
$ oc get jobs
NAME                                                COMPLETIONS   DURATION   AGE
ocs-osd-removal-job                                 1/1           6s         8s
rook-ceph-osd-prepare-ocs-deviceset-0-data-0z45qj   1/1           22s        4h15m
rook-ceph-osd-prepare-ocs-deviceset-1-data-0jvch5   1/1           23s        4h15m
rook-ceph-osd-prepare-ocs-deviceset-2-data-0pjkj8   1/1           22s        4h15m

4. Check the ocs-osd-removal job logs:
$ oc logs ocs-osd-removal-job-swfpt
2023-02-09 16:18:12.821764 I | op-flags: failed to set flag "logtostderr". no such flag -logtostderr
2023-02-09 16:18:12.822018 I | rookcmd: starting Rook v4.9.13-2 with arguments '/usr/local/bin/rook ceph osd remove --osd-ids=1 --force-osd-removal false'

From the logs, we can see that the Rook version is v4.9.13-2, without the master tag.

Additional info:

Cluster versions:

OC version:
Client Version: 4.10.24
Server Version: 4.9.0-0.nightly-2023-02-03-045848
Kubernetes Version: v1.22.15+c763d11

OCS version:
ocs-operator.v4.9.13   OpenShift Container Storage   4.9.13   ocs-operator.v4.9.12   Succeeded

Cluster version:
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2023-02-03-045848   True        False         4h54m
Cluster version is 4.9.0-0.nightly-2023-02-03-045848

Rook version:
2023-02-09 16:49:32.636077 I | op-flags: failed to set flag "logtostderr". no such flag -logtostderr
rook: v4.9.13-2
go: go1.16.12

Ceph version:
ceph version 16.2.0-152.el8cp (e456e8b705cb2f4a779689a0d80b122bcb0d67c9) pacific (stable)
I tested it with an AWS 4.9 cluster, using the following steps:

1. Switch to the openshift-storage project:
$ oc project openshift-storage
Now using project "openshift-storage" on server "https://api.ikave-aws3-49.qe.rh-ocs.com:6443".

2. Run the ocs-osd-removal job:
$ oc process FAILED_OSD_IDS=0 ocs-osd-removal | oc create -f -
job.batch/ocs-osd-removal-job created

3. Check that the job was created successfully:
$ oc get jobs
NAME                                                COMPLETIONS   DURATION   AGE
ocs-osd-removal-job                                 1/1           5s         8s
rook-ceph-osd-prepare-ocs-deviceset-0-data-0hjs88   1/1           22s        14m
rook-ceph-osd-prepare-ocs-deviceset-1-data-0hwgsr   1/1           22s        14m
rook-ceph-osd-prepare-ocs-deviceset-2-data-0lsxds   1/1           21s        14m

4. Check the ocs-osd-removal job logs:
$ oc logs ocs-osd-removal-job-rlm84
2023-03-09 15:01:52.392962 I | op-flags: failed to set flag "logtostderr". no such flag -logtostderr
2023-03-09 15:01:52.393191 I | rookcmd: starting Rook v4.9.14-1 with arguments '/usr/local/bin/rook ceph osd remove --osd-ids=0 --force-osd-removal false'

From the logs, we can see that the Rook version is v4.9.14-1, without the master tag.

Additional info:

Cluster versions:

OC version:
Client Version: 4.10.24
Server Version: 4.9.0-0.nightly-2023-02-28-091622
Kubernetes Version: v1.22.15+c763d11

OCS version:
ocs-operator.v4.9.14   OpenShift Container Storage   4.9.14   ocs-operator.v4.9.13   Succeeded

Cluster version:
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2023-02-28-091622   True        False         24m
Cluster version is 4.9.0-0.nightly-2023-02-28-091622

Rook version:
2023-03-09 15:04:00.094276 I | op-flags: failed to set flag "logtostderr". no such flag -logtostderr
rook: v4.9.14-1
go: go1.16.12

Ceph version:
ceph version 16.2.0-152.el8cp (e456e8b705cb2f4a779689a0d80b122bcb0d67c9) pacific (stable)
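The version check in step 4 above can be scripted. The sketch below is a hypothetical helper, not part of the verification job itself; it parses a `rookcmd: starting Rook ...` log line (copied from the output above) and flags any build carrying a `master` tag:

```shell
#!/bin/sh
# Hypothetical helper: extract the Rook version from an ocs-osd-removal job
# log line and confirm it is a release build (no "master" tag).
# In a live cluster this line would come from:  oc logs <ocs-osd-removal pod>
log_line="2023-03-09 15:01:52.393191 I | rookcmd: starting Rook v4.9.14-1 with arguments '/usr/local/bin/rook ceph osd remove --osd-ids=0 --force-osd-removal false'"

# Pull out the "vX.Y.Z-N" token that follows "starting Rook".
version=$(printf '%s\n' "$log_line" | sed -n 's/.*starting Rook \(v[0-9][0-9.]*-[0-9]*\).*/\1/p')

case "$version" in
  *master*|"") echo "FAIL: unexpected version tag '$version'" ;;
  v*)          echo "OK: release build $version" ;;         # prints: OK: release build v4.9.14-1
esac
```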
I think you can move it to Modify, since we can see:
```
2023-03-09 15:01:52.393191 I | rookcmd: starting Rook v4.9.14-1 with arguments '/usr/local/bin/rook ceph osd remove --osd-ids=0 --force-osd-removal false'
```
it is not using the master tag.
Also, no doc text is required, so the flag is already set to `-`. @sheggodu
I think we can move it to Verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.9.14 bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1354