Bug 2135636 - Do not use rook master tag in job template [4.9.z]
Summary: Do not use rook master tag in job template [4.9.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: ODF 4.9.14
Assignee: Subham Rai
QA Contact: Itzhak
URL:
Whiteboard:
Depends On: 2135626 2135736
Blocks: 2135631 2135632
 
Reported: 2022-10-18 06:23 UTC by Subham Rai
Modified: 2023-08-09 17:00 UTC
CC List: 10 users

Fixed In Version: 4.9.12
Doc Type: No Doc Update
Doc Text:
Clone Of: 2135626
Environment:
Last Closed: 2023-03-20 16:32:22 UTC
Embargoed:




Links
- GitHub: red-hat-storage/ocs-operator pull 1859 (open) - Bug 2135636: [release-4.9] Don't use rook master tag in Job (last updated 2022-11-17 13:24:15 UTC)
- Red Hat Product Errata: RHBA-2023:1354 (last updated 2023-03-20 16:32:28 UTC)

Comment 9 Itzhak 2023-02-09 16:51:10 UTC
I tested it on an AWS 4.9 cluster with the following steps:

1. Switch to the openshift-storage project:
$ oc project openshift-storage 
Now using project "openshift-storage" on server "https://api.ikave-aws6-49.qe.rh-ocs.com:6443".

2. Run the ocs-osd-removal job: 
$ oc process FAILED_OSD_IDS=1 ocs-osd-removal | oc create -f -
job.batch/ocs-osd-removal-job created

3. Check that the job was created successfully:
$ oc get jobs
NAME                                                COMPLETIONS   DURATION   AGE
ocs-osd-removal-job                                 1/1           6s         8s
rook-ceph-osd-prepare-ocs-deviceset-0-data-0z45qj   1/1           22s        4h15m
rook-ceph-osd-prepare-ocs-deviceset-1-data-0jvch5   1/1           23s        4h15m
rook-ceph-osd-prepare-ocs-deviceset-2-data-0pjkj8   1/1           22s        4h15m

4. Check the ocs-osd-removal job logs:
$ oc logs ocs-osd-removal-job-swfpt
2023-02-09 16:18:12.821764 I | op-flags: failed to set flag "logtostderr". no such flag -logtostderr
2023-02-09 16:18:12.822018 I | rookcmd: starting Rook v4.9.13-2 with arguments '/usr/local/bin/rook ceph osd remove --osd-ids=1 --force-osd-removal false'


From the logs, we can see that the rook version is v4.9.13-2 without the master tag.
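
As an additional check (not part of the original steps), the image actually used by the job can be read straight from the job spec; the single-container assumption below is inferred, not taken from this report:

$ oc -n openshift-storage get job ocs-osd-removal-job -o jsonpath='{.spec.template.spec.containers[0].image}'

The tag in that image reference should match the pinned Rook build seen in the logs (v4.9.13-2 here) rather than a master tag.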

Additional info:

Cluster versions: 

OC version:
Client Version: 4.10.24
Server Version: 4.9.0-0.nightly-2023-02-03-045848
Kubernetes Version: v1.22.15+c763d11

OCS version:
ocs-operator.v4.9.13   OpenShift Container Storage   4.9.13    ocs-operator.v4.9.12   Succeeded

Cluster version:
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2023-02-03-045848   True        False         4h54m   Cluster version is 4.9.0-0.nightly-2023-02-03-045848

Rook version:
2023-02-09 16:49:32.636077 I | op-flags: failed to set flag "logtostderr". no such flag -logtostderr
rook: v4.9.13-2
go: go1.16.12

Ceph version:
ceph version 16.2.0-152.el8cp (e456e8b705cb2f4a779689a0d80b122bcb0d67c9) pacific (stable)
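
For reference, a sketch of how the Rook and Ceph versions above can be collected, assuming the rook-ceph-operator deployment is present and the rook-ceph-tools toolbox has been enabled (the toolbox is optional and its deployment name is an assumption here):

$ oc -n openshift-storage rsh deploy/rook-ceph-operator rook version
$ oc -n openshift-storage rsh deploy/rook-ceph-tools ceph version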

Comment 16 Itzhak 2023-03-09 15:18:30 UTC
I tested it on an AWS 4.9 cluster with the following steps:

1. Switch to the openshift-storage project:
$ oc project openshift-storage 
Now using project "openshift-storage" on server "https://api.ikave-aws3-49.qe.rh-ocs.com:6443".

2. Run the ocs-osd-removal job: 
$ oc process FAILED_OSD_IDS=0 ocs-osd-removal | oc create -f -
job.batch/ocs-osd-removal-job created

3. Check that the job was created successfully:
$ oc get jobs
NAME                                                COMPLETIONS   DURATION   AGE
ocs-osd-removal-job                                 1/1           5s         8s
rook-ceph-osd-prepare-ocs-deviceset-0-data-0hjs88   1/1           22s        14m
rook-ceph-osd-prepare-ocs-deviceset-1-data-0hwgsr   1/1           22s        14m
rook-ceph-osd-prepare-ocs-deviceset-2-data-0lsxds   1/1           21s        14m

4. Check the ocs-osd-removal job logs:
$ oc logs ocs-osd-removal-job-rlm84
2023-03-09 15:01:52.392962 I | op-flags: failed to set flag "logtostderr". no such flag -logtostderr
2023-03-09 15:01:52.393191 I | rookcmd: starting Rook v4.9.14-1 with arguments '/usr/local/bin/rook ceph osd remove --osd-ids=0 --force-osd-removal false'


From the logs, we can see that the rook version is v4.9.14-1 without the master tag.
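
The fix can also be confirmed without running the job at all, by inspecting the ocs-osd-removal template that `oc process` consumes; this is only a sketch, since the template's exact layout is not shown in this report:

$ oc -n openshift-storage get template ocs-osd-removal -o yaml | grep 'image:'

The image line should carry a pinned tag (v4.9.14-1 in this build) rather than master.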

Additional info:

Cluster versions: 

OC version:
Client Version: 4.10.24
Server Version: 4.9.0-0.nightly-2023-02-28-091622
Kubernetes Version: v1.22.15+c763d11

OCS version:
ocs-operator.v4.9.14   OpenShift Container Storage   4.9.14    ocs-operator.v4.9.13   Succeeded

Cluster version:
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2023-02-28-091622   True        False         24m     Cluster version is 4.9.0-0.nightly-2023-02-28-091622

Rook version:
2023-03-09 15:04:00.094276 I | op-flags: failed to set flag "logtostderr". no such flag -logtostderr
rook: v4.9.14-1
go: go1.16.12

Ceph version:
ceph version 16.2.0-152.el8cp (e456e8b705cb2f4a779689a0d80b122bcb0d67c9) pacific (stable)
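
One operational note outside the verification itself: the removal job always gets the fixed name ocs-osd-removal-job, so re-running it on the same cluster requires deleting the completed job first. A minimal sketch:

$ oc -n openshift-storage delete job ocs-osd-removal-job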

Comment 18 Subham Rai 2023-03-09 15:54:19 UTC
I think you can move it to MODIFIED, since from
```
2023-03-09 15:01:52.393191 I | rookcmd: starting Rook v4.9.14-1 with arguments '/usr/local/bin/rook ceph osd remove --osd-ids=0 --force-osd-removal false'
```
we can see that it is not using the master tag.

Also, no doc text is required, so the flag is already set to `-`. @sheggodu

Comment 20 Itzhak 2023-03-09 16:30:15 UTC
I think we can move it to Verified.

Comment 25 errata-xmlrpc 2023-03-20 16:32:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.9.14 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1354

