Description of problem:
The node-labeller pod runs under the "privileged" SCC, which is too relaxed. All pods in the HCO namespace (openshift-cnv) should run either under the "restricted" SCC or under a purpose-built custom SCC. Since node-labeller does not need everything "privileged" grants, a custom SCC is probably the right fit here.

Version-Release number of selected component (if applicable):
CNV-2.4

How reproducible:
[kbidarka@kbidarka-host auth]$ oc get pod kubevirt-node-labeller-b5bqt -o yaml -n openshift-cnv | grep scc
openshift.io/scc: "privileged"

Steps to Reproduce:
1. Check the SCC assigned to the kubevirt-node-labeller-b5bqt pod:
2. oc get pod kubevirt-node-labeller-b5bqt -o yaml -n openshift-cnv | grep scc

Actual results:
openshift.io/scc: "privileged"
The current "privileged" SCC is too relaxed.

Expected results:
No pod in the HCO namespace (openshift-cnv) should run with the "privileged" SCC; the node-labeller should use an SCC that is not overly relaxed.

Additional info:
Earlier, when this pod's functionality was being merged into virt-handler, this bug did not seem necessary. Now that the kubevirt-node-labeller pod is back, this bug tracks the issue.
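To audit the SCC of every pod in the namespace at once (useful for checking the "restricted or custom SCC" expectation above), a custom-columns query over the openshift.io/scc annotation works. This is just one possible way to run the check:

oc get pods -n openshift-cnv -o custom-columns='NAME:.metadata.name,SCC:.metadata.annotations.openshift\.io/scc'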
Currently unable to verify this bug due to the issue below:
https://bugzilla.redhat.com/show_bug.cgi?id=1847594#c3
Added that bug to "Depends On".
This bug is now fixed. Each node-labeller pod now runs under the custom "kubevirt-node-labeller" SCC instead of "privileged":
kubevirt-node-labeller-fvmlm   openshift.io/scc: kubevirt-node-labeller
kubevirt-node-labeller-gxpxn   openshift.io/scc: kubevirt-node-labeller
kubevirt-node-labeller-jflzg   openshift.io/scc: kubevirt-node-labeller
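For reference, a custom SCC of this kind is typically defined along the following lines. This is a minimal sketch only: the allow/deny values and the service-account binding below are illustrative assumptions, not the actual kubevirt-node-labeller SCC definition shipped with CNV.

oc apply -f - <<'EOF'
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: kubevirt-node-labeller
allowPrivilegedContainer: false   # the key tightening vs. the "privileged" SCC
allowHostDirVolumePlugin: true    # assumption: node-labeller inspects host devices (e.g. /dev/kvm)
allowHostIPC: false
allowHostNetwork: false
allowHostPID: false
allowHostPorts: false
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
volumes:
- configMap
- emptyDir
- hostPath
- secret
users:
- system:serviceaccount:openshift-cnv:kubevirt-node-labeller   # assumed service account name
EOF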
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:3194