Bug 2135631 - Do not use rook master tag in job template [4.11.z]
Summary: Do not use rook master tag in job template [4.11.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenShift Data Foundation
Classification: Red Hat Storage
Component: ocs-operator
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: ODF 4.11.5
Assignee: Subham Rai
QA Contact: Itzhak
URL:
Whiteboard:
Depends On: 2135626 2135632 2135636 2135736
Blocks:
 
Reported: 2022-10-18 06:17 UTC by Subham Rai
Modified: 2023-08-09 17:00 UTC
CC: 7 users

Fixed In Version: odf-4.11.5-8
Doc Type: No Doc Update
Doc Text:
Clone Of: 2135626
Environment:
Last Closed: 2023-02-14 16:58:10 UTC
Embargoed:




Links:
GitHub red-hat-storage/ocs-operator pull 1857 (open): Bug 2135631: [release-4.11] Don't use rook master tag in Job (last updated 2022-12-13 14:43:28 UTC)
Red Hat Product Errata RHBA-2023:0764 (last updated 2023-02-14 16:58:30 UTC)

Comment 13 Itzhak 2023-02-07 17:19:40 UTC
I tested it on an AWS 4.10 cluster, with the following steps:

1. Switch to openshift-storage project:
$ oc project openshift-storage 
Now using project "openshift-storage" on server "https://api.ikave-aws5-411.qe.rh-ocs.com:6443".

2. Run the ocs-osd-removal job: 
$ oc process FAILED_OSD_IDS=2 ocs-osd-removal | oc create -f -
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "operator" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "operator" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "operator" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "operator" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
job.batch/ocs-osd-removal-job created


3. Check that the job was created successfully:
$ oc get jobs
NAME                                                COMPLETIONS   DURATION   AGE
ocs-osd-removal-job                                 1/1           5s         18s
rook-ceph-osd-prepare-ocs-deviceset-0-data-0mvx4c   1/1           31s        27m
rook-ceph-osd-prepare-ocs-deviceset-1-data-0k5fsg   1/1           28s        27m
rook-ceph-osd-prepare-ocs-deviceset-2-data-078jg7   1/1           29s        27m


4. Check the ocs-osd-removal job logs:
$ oc logs ocs-osd-removal-job-xgvjt 
2023-02-07 16:17:16.088985 I | rookcmd: starting Rook v4.11.5-0.d4bc197c9a967840c92dc0298fbd340b75a21836 with arguments '/usr/local/bin/rook ceph osd remove --osd-ids=2 --force-osd-removal false'
2023-02-07 16:17:16.089038 I | rookcmd: flag values: --force-osd-removal=false, --help=false, --log-level=DEBUG, --operator-image=, --osd-ids=2, --preserve-pvc=false, --service-account=


We can see in the first line that the rook version is v4.11.5-0, without the master tag.
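
As an additional check (a sketch, not something from the verification run above), the image referenced by the removal job and by the template itself can be inspected directly. The resource names and namespace are the ones shown in the steps above; the jsonpath assumes the standard batch/v1 Job layout:

# Image actually used by the removal job's pod template
$ oc get job ocs-osd-removal-job -n openshift-storage -o jsonpath='{.spec.template.spec.containers[0].image}'
# Image referenced in the ocs-osd-removal template shipped by ocs-operator
$ oc get template ocs-osd-removal -n openshift-storage -o yaml | grep -i image

Neither should reference a rook master tag after the fix.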


Cluster versions:

OC version:
Client Version: 4.10.24
Server Version: 4.11.0-0.nightly-2023-02-06-192157
Kubernetes Version: v1.24.6+263df15

OCS version:
ocs-operator.v4.11.5              OpenShift Container Storage   4.11.5    ocs-operator.v4.11.4              Succeeded

Cluster version:
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.11.0-0.nightly-2023-02-06-192157   True        False         63m     Cluster version is 4.11.0-0.nightly-2023-02-06-192157

Rook version:
rook: v4.11.5-0.d4bc197c9a967840c92dc0298fbd340b75a21836
go: go1.17.12

Ceph version:
ceph version 16.2.10-94.el8cp (48ce8ed67474ea50f10c019b9445be7f49749d23) pacific (stable)
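
For completeness (not part of the verification above, and assuming the default job name and namespace from the steps), the completed removal job can be deleted afterwards so the template can be processed again if needed:

$ oc delete job ocs-osd-removal-job -n openshift-storage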

Comment 15 Itzhak 2023-02-07 17:21:30 UTC
According to the two comments above, I am moving the bug to Verified.

Comment 16 Itzhak 2023-02-07 17:25:04 UTC
One correction about comment https://bugzilla.redhat.com/show_bug.cgi?id=2135631#c13:
I tested it on an AWS 4.11 cluster, not a 4.10 one.

Comment 20 errata-xmlrpc 2023-02-14 16:58:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenShift Data Foundation 4.11.5 Bug Fix Update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:0764

