Bug 2135632 - Do not use rook master tag in job template [4.10.z]
Summary: Do not use rook master tag in job template [4.10.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: ODF 4.10.10
Assignee: Subham Rai
QA Contact: Itzhak
URL:
Whiteboard:
Depends On: 2135626 2135636 2135736
Blocks: 2135631
 
Reported: 2022-10-18 06:19 UTC by Subham Rai
Modified: 2023-08-09 17:00 UTC
CC List: 6 users

Fixed In Version: 4.10.10-1
Doc Type: No Doc Update
Doc Text:
Clone Of: 2135626
Environment:
Last Closed: 2023-02-20 15:40:44 UTC
Embargoed:




Links:
- GitHub: red-hat-storage/ocs-operator pull 1858 (open) - Bug 2135632: [release-4.10] Don't use rook master tag in Job - last updated 2022-12-17 04:17:34 UTC
- Red Hat Product Errata: RHBA-2023:0827 - last updated 2023-02-20 15:40:49 UTC

Comment 5 Subham Rai 2022-12-28 07:41:21 UTC
Removing needinfo since the PR has been backported and the BZ has been moved to MODIFIED.

Comment 12 Itzhak 2023-02-07 16:06:37 UTC
I tested it on an AWS 4.10 cluster with the following steps:

1. Switch to openshift-storage project:
$ oc project openshift-storage 
Now using project "openshift-storage" on server "https://api.ikave-aws3-410.qe.rh-ocs.com:6443".

2. Run the ocs-osd-removal job: 
$ oc process FAILED_OSD_IDS=1 ocs-osd-removal | oc create -f -
job.batch/ocs-osd-removal-job created
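
The image the template renders can also be inspected directly as a quick sanity check (this assumes the rendered YAML exposes the container image on an "image:" line):
$ oc process FAILED_OSD_IDS=1 ocs-osd-removal -o yaml | grep "image:"
The tag shown should be the ODF-versioned rook-ceph image rather than a master tag.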

3. Check that the job was created successfully:
$ oc get jobs
NAME                                                COMPLETIONS   DURATION   AGE
ocs-osd-removal-job                                 1/1           5s         5s
rook-ceph-osd-prepare-ocs-deviceset-0-data-0ld4lt   1/1           44s        17m
rook-ceph-osd-prepare-ocs-deviceset-1-data-0krksz   1/1           27s        17m
rook-ceph-osd-prepare-ocs-deviceset-2-data-0k588k   1/1           29s        17m

4. Check the ocs-osd-removal job logs:
$ oc logs ocs-osd-removal-job-db9pq 
2023-02-07 14:57:51.450651 I | rookcmd: starting Rook v4.10.10-0.e9e0b595040ada0e85f3601e569b051715270742 with arguments '/usr/local/bin/rook ceph osd remove --osd-ids=1 --force-osd-removal false'
2023-02-07 14:57:51.450719 I | rookcmd: flag values: --force-osd-removal=false, --help=false, --log-level=DEBUG, --operator-image=, --osd-ids=1, --preserve-pvc=false, --service-account=
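
Since the pod name suffix is random, the logs can also be fetched by label selector (assuming the default job-name label that Kubernetes sets on pods created by a Job):
$ oc logs -l job-name=ocs-osd-removal-job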


We can see in the first log line above that the Rook version is v4.10.10-0, without the master tag.
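
The image tag can also be confirmed directly from the job spec (a quick check, assuming the removal container is the first container in the pod template):
$ oc get job ocs-osd-removal-job -o jsonpath='{.spec.template.spec.containers[0].image}'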


Cluster versions: 

OC version:
Client Version: 4.10.24
Server Version: 4.10.0-0.nightly-2023-02-07-072557
Kubernetes Version: v1.23.12+8a6bfe4

OCS version:
ocs-operator.v4.10.10              OpenShift Container Storage   4.10.10   ocs-operator.v4.10.9              Succeeded

Cluster version:
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2023-02-07-072557   True        False         35m     Cluster version is 4.10.0-0.nightly-2023-02-07-072557

Rook version:
rook: v4.10.10-0.e9e0b595040ada0e85f3601e569b051715270742
go: go1.16.12

Ceph version:
ceph version 16.2.7-126.el8cp (fe0af61d104d48cb9d116cde6e593b5fc8c197e4) pacific (stable)

Comment 14 Itzhak 2023-02-07 16:14:08 UTC
According to the two comments above, I am moving the bug to Verified.

Comment 19 errata-xmlrpc 2023-02-20 15:40:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.10.10 Bug Fix Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:0827

