Description of problem:
Deployed OCP 4.3 and then CNV 2.2; the privileged SCC is modified by the addition of the kubevirt-{handler,apiserver,controller} service accounts. The default SCCs should be kept untouched to avoid issues with upgrades.

Version-Release number of selected component (if applicable):
OpenShift Virtualization 2.2, but presumably 2.3 as well.

How reproducible:
Deploy OCP 4.3, record the default SCCs, then deploy CNV and observe the changes.

Steps to Reproduce:
1. Deploy OCP 4.3
2. Save the SCC objects (oc get scc -o yaml > pre.yaml)
3. Deploy CNV
4. Save the SCC objects again (oc get scc -o yaml > post.yaml)

Actual results:

privileged SCC before:
```
- allowHostDirVolumePlugin: true
  allowHostIPC: true
  allowHostNetwork: true
  allowHostPID: true
  allowHostPorts: true
  allowPrivilegeEscalation: true
  allowPrivilegedContainer: true
  allowedCapabilities:
  - '*'
  allowedUnsafeSysctls:
  - '*'
  apiVersion: security.openshift.io/v1
  defaultAddCapabilities: null
  fsGroup:
    type: RunAsAny
  groups:
  - system:cluster-admins
  - system:nodes
  - system:masters
  kind: SecurityContextConstraints
  metadata:
    annotations:
      kubernetes.io/description: 'privileged allows access to all privileged and host
        features and the ability to run as any user, any group, any fsGroup, and with
        any SELinux context. WARNING: this is the most relaxed SCC and should be used
        only for cluster administration. Grant with caution.'
    creationTimestamp: "2020-05-04T10:28:52Z"
    generation: 1
    name: privileged
    resourceVersion: "6489"
    selfLink: /apis/security.openshift.io/v1/securitycontextconstraints/privileged
    uid: b000901d-f681-4df8-9d3b-876b1cb4e90d
  priority: null
  readOnlyRootFilesystem: false
  requiredDropCapabilities: null
  runAsUser:
    type: RunAsAny
  seLinuxContext:
    type: RunAsAny
  seccompProfiles:
  - '*'
  supplementalGroups:
    type: RunAsAny
  users:
  - system:admin
  - system:serviceaccount:openshift-infra:build-controller
  volumes:
  - '*'
```

privileged SCC after:
```
- allowHostDirVolumePlugin: true
  allowHostIPC: true
  allowHostNetwork: true
  allowHostPID: true
  allowHostPorts: true
  allowPrivilegeEscalation: true
  allowPrivilegedContainer: true
  allowedCapabilities:
  - '*'
  allowedUnsafeSysctls:
  - '*'
  apiVersion: security.openshift.io/v1
  defaultAddCapabilities: null
  fsGroup:
    type: RunAsAny
  groups:
  - system:cluster-admins
  - system:nodes
  - system:masters
  kind: SecurityContextConstraints
  metadata:
    annotations:
      kubernetes.io/description: 'privileged allows access to all privileged and host
        features and the ability to run as any user, any group, any fsGroup, and with
        any SELinux context. WARNING: this is the most relaxed SCC and should be used
        only for cluster administration. Grant with caution.'
    creationTimestamp: "2020-05-04T10:28:52Z"
    generation: 4
    name: privileged
    resourceVersion: "88968"
    selfLink: /apis/security.openshift.io/v1/securitycontextconstraints/privileged
    uid: b000901d-f681-4df8-9d3b-876b1cb4e90d
  priority: null
  readOnlyRootFilesystem: false
  requiredDropCapabilities: null
  runAsUser:
    type: RunAsAny
  seLinuxContext:
    type: RunAsAny
  seccompProfiles:
  - '*'
  supplementalGroups:
    type: RunAsAny
  users:
  - system:admin
  - system:serviceaccount:openshift-infra:build-controller
  - system:serviceaccount:openshift-cnv:kubevirt-handler
  - system:serviceaccount:openshift-cnv:kubevirt-apiserver
  - system:serviceaccount:openshift-cnv:kubevirt-controller
  volumes:
  - '*'
```

diff:
```
28c28
< generation: 1
---
> generation: 4
30c30
< resourceVersion: "6489"
---
> resourceVersion: "88968"
46a47,49
> - system:serviceaccount:openshift-cnv:kubevirt-handler
> - system:serviceaccount:openshift-cnv:kubevirt-apiserver
> - system:serviceaccount:openshift-cnv:kubevirt-controller
```

Expected results:
The files differ, but only because new SCCs have been added; the default SCCs remain unmodified.

Additional info:
It seems similar to https://bugzilla.redhat.com/show_bug.cgi?id=1823704
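The offending entries can be isolated offline from the two captures; a minimal sketch, with the users lists inlined here instead of being extracted from pre.yaml/post.yaml (the file names are illustrative):

```shell
# Sketch: isolate the users added to the privileged SCC between the two
# captures. The lists below are copied from the dumps above; in practice
# they would be extracted from pre.yaml and post.yaml.
printf '%s\n' \
  'system:admin' \
  'system:serviceaccount:openshift-infra:build-controller' | sort > pre_users.txt
printf '%s\n' \
  'system:admin' \
  'system:serviceaccount:openshift-infra:build-controller' \
  'system:serviceaccount:openshift-cnv:kubevirt-handler' \
  'system:serviceaccount:openshift-cnv:kubevirt-apiserver' \
  'system:serviceaccount:openshift-cnv:kubevirt-controller' | sort > post_users.txt
# comm -13 prints lines unique to the second file, i.e. the entries CNV added.
comm -13 pre_users.txt post_users.txt
```

Run against these captures, this prints exactly the three openshift-cnv kubevirt service accounts.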
Igor, can you look into this?
Igor, what's the status of this?
Submitted upstream PR: https://github.com/kubevirt/kubevirt/pull/3507. Regarding code verification, I've submitted a ticket to QuickLAB (https://redhat.service-now.com/surl.do?n=PNT0835965); I first need to stabilize my cluster, since I had issues after editing the CSV.
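For context, the direction of the fix is to stop patching the stock privileged SCC and instead ship a dedicated SCC owned by the operator. A rough sketch of such a manifest follows; the name and field values are illustrative assumptions, not the exact content of the PR:

```shell
# Illustrative only: a dedicated SCC carrying the kubevirt service account,
# so the stock "privileged" SCC stays pristine. Names and fields here are
# assumptions, not the exact manifest from kubevirt/kubevirt PR 3507.
cat > kubevirt-custom-scc.yaml <<'EOF'
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: kubevirt-handler
allowPrivilegedContainer: true
allowHostDirVolumePlugin: true
allowHostNetwork: true
runAsUser:
  type: RunAsAny
seLinuxContext:
  type: RunAsAny
users:
- system:serviceaccount:openshift-cnv:kubevirt-handler
EOF
# The operator would create this object instead of editing "privileged":
#   oc apply -f kubevirt-custom-scc.yaml
grep -c 'openshift-cnv' kubevirt-custom-scc.yaml
```

The upgrade concern from the description goes away with this approach: cluster upgrades can reconcile the default SCCs freely because nothing external has touched them.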
We also implemented an automated test upstream on the HCO side: https://github.com/kubevirt/hyperconverged-cluster-operator/pull/628
```
cnv-2.4]$ oc get scc privileged -o yaml | grep -A2 ^users
users:
- system:admin
- system:serviceaccount:openshift-infra:build-controller
cnv-2.4]$ oc get clusterversion
NAME      VERSION      AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-rc.4   True        False         108m    Cluster version is 4.5.0-rc.4
```
VERIFIED against virt-launcher image: virt-launcher/images/v2.4.0-49
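The check above can also be expressed as a small scripted assertion; a sketch, with the captured `grep -A2 ^users` output inlined (the file name is illustrative):

```shell
# Sketch: fail if any openshift-cnv service account shows up in the
# privileged SCC's users list. The list below is the output captured on the
# 4.5.0-rc.4 cluster above; live, it would come from:
#   oc get scc privileged -o yaml | grep -A2 ^users
cat > privileged_users.txt <<'EOF'
users:
- system:admin
- system:serviceaccount:openshift-infra:build-controller
EOF
if grep -q 'openshift-cnv' privileged_users.txt; then
  echo 'FAIL: CNV service accounts still present in the privileged SCC'
else
  echo 'PASS: privileged SCC left untouched'
fi
```

Run against this capture, the PASS branch is taken, matching the verification result.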
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:3194