Bug 2223948 - PodSecurity violations messages found in virt-operator
Summary: PodSecurity violations messages found in virt-operator
Keywords:
Status: POST
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Installation
Version: 4.14.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.14.0
Assignee: Simone Tiraboschi
QA Contact: Debarati Basu-Nag
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-07-19 11:30 UTC by Ahmad
Modified: 2023-08-02 11:58 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:
Embargoed:




Links:
  Github openshift cluster-policy-controller pull 123 (open): psalabelsyncer: synchronize also audit and warn labels when enforcing (last updated 2023-07-25 12:49:31 UTC)
  Red Hat Issue Tracker CNV-31152 (last updated 2023-07-19 11:33:34 UTC)

Description Ahmad 2023-07-19 11:30:36 UTC
http://pastebin.test.redhat.com/1105334


Description of problem: In the audit log of a freshly deployed 4.14.0 cluster, I see PodSecurity violation messages against the restricted profile involving virt-operator, virt-launcher and virt-handler.

Version-Release number of selected component (if applicable):
4.14.0

How reproducible:
100% 

Steps to Reproduce:
1. Deploy CNV.
2. Check the audit logs of the 4.14.0 cluster for PodSecurity violations (a sketch of how to do this is shown below).
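
A minimal sketch of step 2, assuming cluster-admin access and the default kube-apiserver audit log location on the control-plane nodes:

# list audit events that carry a PodSecurity violation annotation for openshift-cnv
$ oc adm node-logs --role=master --path=kube-apiserver/audit.log \
    | grep 'pod-security.kubernetes.io/audit-violations' \
    | grep 'openshift-cnv'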


Actual results:
Sample entry (http://pastebin.test.redhat.com/1105334):
=============
User-agent: virt-operator/v0.0.0 (linux/amd64) kubernetes/$Format, Violations:
	{'kind': 'Event', 'apiVersion': 'audit.k8s.io/v1', 'level': 'Metadata', 'auditID': '9cfee1fa-78eb-47ee-a927-c3836cea6d39', 'stage': 'ResponseComplete', 'requestURI': '/apis/apps/v1/namespaces/openshift-cnv/daemonsets/virt-handler', 'verb': 'patch', 'user': {'username': 'system:serviceaccount:openshift-cnv:kubevirt-operator', 'uid': 'f968dcac-34bb-481d-bd7e-b2cda614d888', 'groups': ['system:serviceaccounts', 'system:serviceaccounts:openshift-cnv', 'system:authenticated'], 'extra': {'authentication.kubernetes.io/pod-name': ['virt-operator-6749d94f-9h6bv'], 'authentication.kubernetes.io/pod-uid': ['47a3dd95-c28d-433c-97c9-be3278d9ec3c']}}, 'sourceIPs': ['10.9.96.49'], 'userAgent': 'virt-operator/v0.0.0 (linux/amd64) kubernetes/$Format', 'objectRef': {'resource': 'daemonsets', 'namespace': 'openshift-cnv', 'name': 'virt-handler', 'apiGroup': 'apps', 'apiVersion': 'v1'}, 'responseStatus': {'metadata': {}, 'code': 200}, 'requestReceivedTimestamp': '2023-07-18T13:09:59.697520Z', 'stageTimestamp': '2023-07-18T13:09:59.716472Z', 'annotations': {'authorization.k8s.io/decision': 'allow', 'authorization.k8s.io/reason': 'RBAC: allowed by ClusterRoleBinding "kubevirt-hyperconverged-operator.v4.14.0-68bd6f97f6" of ClusterRole "kubevirt-hyperconverged-operator.v4.14.0-68bd6f97f6" to ServiceAccount "kubevirt-operator/openshift-cnv"', 'pod-security.kubernetes.io/audit-violations': 'would violate PodSecurity "restricted:latest": host namespaces (hostPID=true), privileged (containers "virt-launcher", "virt-handler" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "virt-launcher", "virt-handler" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "virt-launcher", "virt-handler" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "libvirt-runtimes", "virt-share-dir", "virt-lib-dir", "virt-private-dir", "device-plugin", "kubelet-pods-shortened", "kubelet-pods", "node-labeller" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "virt-launcher", "virt-handler" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "virt-launcher", "virt-handler" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")'}}
=============
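
For reference, the audit log itself is JSON lines (the pastebin shows a Python-repr rendering of one event); assuming a local copy named audit.log, a jq filter along these lines pulls out just the violation details:

# sketch only: user agent, object, and PodSecurity violation text per offending event
$ jq -r 'select(.annotations["pod-security.kubernetes.io/audit-violations"] != null)
         | [.userAgent, .objectRef.resource, .objectRef.name,
            .annotations["pod-security.kubernetes.io/audit-violations"]]
         | @tsv' audit.log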


Details attached.

Expected results:
No pod security violation message

Additional info:
http://pastebin.test.redhat.com/1105334
https://kubernetes.io/docs/concepts/security/pod-security-admission/
https://bugzilla.redhat.com/show_bug.cgi?id=2089744

Comment 1 Simone Tiraboschi 2023-07-19 15:20:57 UTC
Technically, this is behaving as expected from our side.

On our namespace we have:
$ oc get namespace openshift-cnv -o yaml
apiVersion: v1
kind: Namespace
metadata:
  annotations:
    openshift.io/node-selector: ""
    openshift.io/sa.scc.mcs: s0:c27,c9
    openshift.io/sa.scc.supplemental-groups: 1000720000/10000
    openshift.io/sa.scc.uid-range: 1000720000/10000
  creationTimestamp: "2023-07-12T11:45:57Z"
  labels:
    kubernetes.io/metadata.name: openshift-cnv
    olm.operatorgroup.uid/1092e918-ae54-4bac-82ce-2bc3d255f802: ""
    olm.operatorgroup.uid/bb0d098d-b2da-4076-a3eb-384313089ba9: ""
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/enforce-version: v1.24
    security.openshift.io/scc.podSecurityLabelSync: "true"
  name: openshift-cnv
  resourceVersion: "67019"
  uid: b8c86726-d7a1-41fc-a361-135c850cd0f8
spec:
  finalizers:
  - kubernetes
status:
  phase: Active


with `security.openshift.io/scc.podSecurityLabelSync: "true"` set to let the OCP psalabelsyncer derive the expected value of `pod-security.kubernetes.io/enforce` from our SCCs. It is indeed setting `pod-security.kubernetes.io/enforce: privileged`, so our pods are correctly admitted.
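
(For illustration only, the label the syncer applied can also be read back directly; this is just a jsonpath shortcut for the YAML above:)

$ oc get namespace openshift-cnv \
    -o jsonpath='{.metadata.labels.pod-security\.kubernetes\.io/enforce}{"\n"}'
privileged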

The point is that the OCP psalabelsyncer is not setting `pod-security.kubernetes.io/warn: privileged` and `pod-security.kubernetes.io/audit: privileged` (see https://github.com/openshift/cluster-policy-controller/blob/0dff401e9819311c8f0de2792e913342c92883a1/pkg/psalabelsyncer/podsecurity_label_sync_controller.go#L257-L266),
and the defaults at the OCP level are now `pod-security.kubernetes.io/warn: restricted` and `pod-security.kubernetes.io/audit: restricted`, so the violations are still audited and warned about even though they are absolutely harmless.
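
Purely as an illustration of what the syncer would need to set in addition to enforce (and not as the proper fix, which belongs in the psalabelsyncer; whether manually applied labels would survive its reconciliation is a separate question), the missing labels would look like:

$ oc label namespace openshift-cnv \
    pod-security.kubernetes.io/warn=privileged \
    pod-security.kubernetes.io/audit=privileged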

Let's try to get this properly fixed on the OCP psalabelsyncer side.

