Description of problem:

Currently the openshift-cluster-version namespace sets the `openshift.io/run-level: "1"` annotation:

https://github.com/openshift/cluster-version-operator/blob/d887ce0e59b504b78306a6837b814f490452c10e/install/0000_00_cluster-version-operator_00_namespace.yaml#L11

It was originally added here:

https://github.com/openshift/cluster-version-operator/pull/24

But after testing, it seems it is no longer a requirement, given the improvements to bootstrapping since 4.6. The run-level annotation actually prevents any Security Context Constraint (SCC) from being applied to pods within that namespace. Removing it also brings the namespace in line with:

https://bugzilla.redhat.com/show_bug.cgi?id=1805488

Actual results:

$ oc get ns openshift-cluster-version -o yaml | grep run-level
    openshift.io/run-level: "1"
$ oc get pods -n openshift-cluster-version cluster-version-operator-784958c578-zfqzq -o yaml | grep scc

Expected results:

$ oc get ns openshift-cluster-version -o yaml | grep run-level
$ oc get pods -n openshift-cluster-version cluster-version-operator-78cd94bc48-dv8bx -o yaml | grep scc
    openshift.io/scc: hostaccess

Interesting that it gets admitted with `hostaccess`, so slightly better than `privileged`.
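For reference, a minimal sketch of the relevant part of the namespace manifest (the surrounding labels and fields in 0000_00_cluster-version-operator_00_namespace.yaml are elided here, so treat this as illustrative rather than the exact file contents):

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cluster-version
  annotations:
    openshift.io/run-level: "1"   # removing this line lets SCC admission apply to the CVO pod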
I'll attach to your existing PR, where I've asked some questions [1]. Do you have suggestions for how QE should validate this? Is it just "yup, we still install, and it doesn't take much longer without the label [2]"?

[1]: https://github.com/openshift/cluster-version-operator/pull/623#issuecomment-960440037
[2]: https://github.com/openshift/enhancements/blob/f6da5f4f7011765da2c928473113bfe913b543a9/CONVENTIONS.md#runlevels
+1, I think the validation is "yup, it didn't break anything and it doesn't take a significantly longer time without the label". The only thing I'd add is also a "yup, the operator pod gets an SCC annotation applied", which in my testing looks like:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/scc: hostaccess
  creationTimestamp: "2021-11-04T02:30:31Z"
  generateName: cluster-version-operator-78cd94bc48-
  ...
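If it helps QE, a one-liner to pull just that annotation (assuming the CVO pod carries the k8s-app=cluster-version-operator label; adjust the selector, or use the pod name directly, if not):

$ oc -n openshift-cluster-version get pods -l k8s-app=cluster-version-operator \
    -o jsonpath='{.items[0].metadata.annotations.openshift\.io/scc}'

On a build with the fix this should print "hostaccess"; on current builds it prints nothing.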
Reproduced on 4.10.0-0.nightly-2021-11-29-191648:

# ./oc get ns openshift-cluster-version -o yaml | grep run-level
    openshift.io/run-level: "1"
# ./oc -n openshift-cluster-version get po cluster-version-operator-68bd8df9d7-rss6q -oyaml|grep scc

Verified on 4.10.0-0.nightly-2021-12-03-213835.

Fresh installation:

# ./oc get ns openshift-cluster-version -o yaml | grep run-level
    openshift.io/run-level: ""
# ./oc -n openshift-cluster-version get po cluster-version-operator-7c48855887-ls9ws -oyaml|grep scc
    openshift.io/scc: hostaccess

Upgrade from v4.9 to v4.10. Before upgrade:

# ./oc -n openshift-cluster-version get po cluster-version-operator-7456cb5bff-nl4tl -oyaml|grep scc
# ./oc get ns openshift-cluster-version -o yaml | grep run-level
    openshift.io/run-level: "1"

After upgrade:

# ./oc get ns openshift-cluster-version -o yaml | grep run-level
    openshift.io/run-level: ""
# ./oc -n openshift-cluster-version get po cluster-version-operator-7f8d6dd84c-pfj79 -oyaml|grep scc
    openshift.io/scc: hostaccess
Case added and automated.
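For the record, a rough sketch of what such an automated check might look like (hypothetical script, not the actual QE case; assumes oc is already logged in to the cluster under test and that the CVO pod carries the k8s-app=cluster-version-operator label):

#!/bin/bash
set -euo pipefail

# The run-level annotation should be unset (or empty) after the fix.
runlevel=$(oc get ns openshift-cluster-version \
  -o jsonpath='{.metadata.annotations.openshift\.io/run-level}')
if [ -n "$runlevel" ]; then
  echo "FAIL: openshift.io/run-level is still \"$runlevel\""
  exit 1
fi

# The CVO pod should now be admitted under an SCC.
scc=$(oc -n openshift-cluster-version get pods \
  -l k8s-app=cluster-version-operator \
  -o jsonpath='{.items[0].metadata.annotations.openshift\.io/scc}')
if [ -z "$scc" ]; then
  echo "FAIL: no openshift.io/scc annotation on the CVO pod"
  exit 1
fi

echo "PASS: run-level unset; CVO pod admitted with SCC \"$scc\""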
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0056