Bug 2128999 - virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24
Summary: virt-launcher cannot be started on OCP 4.12 due to PodSecurity restricted:v1.24
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 4.12.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.10.6
Assignee: lpivarc
QA Contact: Akriti Gupta
URL:
Whiteboard:
Depends On: 2119128 2128997
Blocks: 2132015
 
Reported: 2022-09-22 09:10 UTC by Antonio Cardace
Modified: 2022-10-25 14:47 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2128997
: 2132015 (view as bug list)
Environment:
Last Closed: 2022-10-25 14:47:02 UTC
Target Upstream Version:
Embargoed:


Links
Github kubevirt/hyperconverged-cluster-operator pull 2105 (Merged): [release-1.6] Enable PSA FG on Kubevirt (last updated 2022-10-11 09:39:48 UTC)
Github kubevirt/kubevirt pull 8529 (Merged): [release-0.49] Integrate with Pod security (last updated 2022-10-11 09:39:48 UTC)
Red Hat Product Errata RHEA-2022:7179 (last updated 2022-10-25 14:47:11 UTC)

Comment 1 Kedar Bidarkar 2022-10-04 10:59:53 UTC
Should we be targeting this bug for 4.10.7 and not 4.10.6?

Just to avoid a regression when someone upgrades CNV from 4.10.6 to 4.11.0:

4.10.6 ---> PSA enabled
4.11.0 ---> no PSA support
4.11.1 ---> PSA enabled

We would have a 4.10.7 release again before the 4.12.0 release.

Comment 2 sgott 2022-10-06 18:30:44 UTC
This shouldn't cause an issue with the upgrade path. For our purposes, "PSA enabled" effectively means we're adding the correct labels to resources. Thus, moving to a cluster version that's not aware of those labels and then back to one that is will not cause any sort of issue.
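For reference, the labels in question are the Pod Security Admission namespace labels shown later in Comment 5. A minimal sketch of how one could inspect them on a namespace that hosts VMs (<vm-namespace> is only an illustrative placeholder):

$ oc get namespace <vm-namespace> --show-labels
# Once a VM has been started in the namespace, the label set is expected
# to include (per Comment 5):
#   pod-security.kubernetes.io/enforce=privileged
#   security.openshift.io/scc.podSecurityLabelSync=false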

Comment 3 Akriti Gupta 2022-10-12 07:25:42 UTC
Verified on v4.10.6-29.

The VM can be started and migrated successfully:

[akrgupta@fedora ~]$ oc get vm
NAME            AGE   STATUS         READY
vm-rhel84-ocs   34s   Provisioning   False
[akrgupta@fedora ~]$ virtctl start vm-rhel84-ocs
VM vm-rhel84-ocs was scheduled to start
[akrgupta@fedora ~]$ oc get vm
NAME            AGE   STATUS    READY
vm-rhel84-ocs   12m   Running   True
[akrgupta@fedora ~]$ oc get vmi
NAME            AGE     PHASE     IP            NODENAME                            READY
vm-rhel84-ocs   6m44s   Running   10.128.2.85   virt-akr-410-z96nw-worker-0-l8mq6   True
[akrgupta@fedora ~]$ oc get pod
NAME                                READY   STATUS    RESTARTS   AGE
virt-launcher-vm-rhel84-ocs-6gv72   1/1     Running   0          6m49s
[akrgupta@fedora ~]$ virtctl migrate vm-rhel84-ocs
VM vm-rhel84-ocs was scheduled to migrate
[akrgupta@fedora ~]$ oc get vmi
NAME            AGE     PHASE     IP            NODENAME                            READY
vm-rhel84-ocs   8m55s   Running   10.131.0.57   virt-akr-410-z96nw-worker-0-hckzd   True
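An additional sanity check, not part of the verification above: one could confirm that no virt-launcher pod creation was rejected by Pod Security admission by scanning warning events in the VM's namespace. A hedged sketch using standard oc commands, with <vm-namespace> as a placeholder:

$ oc get events -n <vm-namespace> --field-selector type=Warning
# Should show no "violates PodSecurity" messages once the namespace
# carries pod-security.kubernetes.io/enforce=privileged.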

Comment 5 Akriti Gupta 2022-10-20 12:21:02 UTC
1) Created a new namespace; it has only the default label:
[akrgupta@fedora auth]$ oc describe ns namsespace-example
Name:         namsespace-example
Labels:       kubernetes.io/metadata.name=namsespace-example

2) Created and started a VM in this namespace; the labels were updated:
[akrgupta@fedora ~]$ oc get vm
NAME            AGE     STATUS    READY
vm-rhel86-ocs   8m16s   Running   True
[akrgupta@fedora ~]$ oc describe ns namsespace-example
Name:         namsespace-example
Labels:       kubernetes.io/metadata.name=namsespace-example
              pod-security.kubernetes.io/enforce=privileged
              security.openshift.io/scc.podSecurityLabelSync=false

3) Removed the VM; the labels remain the same (not reverted):
[akrgupta@fedora ~]$ oc delete vm vm-rhel86-ocs
virtualmachine.kubevirt.io "vm-rhel86-ocs" deleted
[akrgupta@fedora ~]$ oc describe ns namsespace-example
Name:         namsespace-example
Labels:       kubernetes.io/metadata.name=namsespace-example
              pod-security.kubernetes.io/enforce=privileged
              security.openshift.io/scc.podSecurityLabelSync=false
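For comparison only (as the output above shows, the labels are applied automatically once a VM is started, so this manual step is not required), the same labels could be set by hand with oc label; a hedged sketch:

$ oc label namespace namsespace-example \
    pod-security.kubernetes.io/enforce=privileged \
    security.openshift.io/scc.podSecurityLabelSync=false \
    --overwrite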

The PSA feature gate is present:
[akrgupta@fedora ~]$ oc get kv -n openshift-cnv kubevirt-kubevirt-hyperconverged -o json | grep -A 15 "featureGates"
                "featureGates": [
                    "DataVolumes",
                    "SRIOV",
                    "CPUManager",
                    "CPUNodeDiscovery",
                    "Snapshot",
                    "HotplugVolumes",
                    "ExpandDisks",
                    "GPU",
                    "HostDevices",
                    "DownwardMetrics",
                    "NUMA",
                    "LiveMigration",
                    "PSA",
                    "WithHostModelCPU",
                    "HypervStrictCheck",

Comment 10 errata-xmlrpc 2022-10-25 14:47:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Virtualization 4.10.6 Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2022:7179

