Bug 1898999
| Summary: | Windows VMs created from templates should only be scheduled on hyper-v-capable nodes | | |
|---|---|---|---|
| Product: | Container Native Virtualization (CNV) | Reporter: | Omer Yahud <oyahud> |
| Component: | SSP | Assignee: | Kevin Wiesmueller <kwiesmul> |
| Status: | CLOSED ERRATA | QA Contact: | Sarah Bennert <sbennert> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 2.5.0 | CC: | cnv-qe-bugs, fdeutsch, rnetser, sbennert |
| Target Milestone: | --- | Keywords: | TestOnly |
| Target Release: | 4.8.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-07-27 14:21:17 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Omer Yahud 2020-11-18 13:21:10 UTC
I'm quite sure that https://github.com/kubevirt/kubevirt/blob/master/pkg/virt-controller/services/template.go#L173 does what is requested here. KubeVirt translates CPU features into scheduling requirements, so a VM with a hyperv feature enabled should lead to a launcher pod with the relevant hyperv selectors set. Ruth, can you confirm?

This should work when a VM has some Hyper-V feature and HypervStrictCheckEnabled is enabled. By default, HypervStrictCheckEnabled is disabled (https://kubevirt.io/user-guide/#/creation/guest-operating-system-information?id=hyperv-optimizations).
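For illustration, the expected translation looks roughly like the following sketch. The VMI fragment and the label names are assumptions based on the hyperv.node.kubevirt.io selectors shown in the verification at the end of this report, not output from this cluster:

# VMI requesting Hyper-V enlightenments (input; illustrative fragment):
spec:
  domain:
    features:
      hyperv:
        synic:
          enabled: true
        synictimer:
          enabled: true

# With HypervStrictCheckGate enabled, the virt-launcher pod would be
# expected to carry matching node selectors (output; illustrative):
spec:
  nodeSelector:
    hyperv.node.kubevirt.io/synic: "true"
    hyperv.node.kubevirt.io/synictimer: "true"
    kubevirt.io/schedulable: "true"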
I enabled the feature gate:

$ oc get cm -n openshift-cnv kubevirt-config -oyaml
apiVersion: v1
data:
  default-network-interface: masquerade
  feature-gates: DataVolumes,SRIOV,LiveMigration,CPUManager,CPUNodeDiscovery,Sidecar,Snapshot,HypervStrictCheckGate

I created a Windows VM from templates, but nodeSelectors were not added to the VMI:

apiVersion: v1
items:
- apiVersion: kubevirt.io/v1alpha3
  kind: VirtualMachineInstance
  metadata:
    annotations:
      kubevirt.io/latest-observed-api-version: v1alpha3
      kubevirt.io/storage-observed-api-version: v1alpha3
    creationTimestamp: "2020-11-30T13:00:01Z"
    finalizers:
    - foregroundDeleteVirtualMachine
    generation: 15
    labels:
      kubevirt.io/domain: win-16
      kubevirt.io/nodeName: ruth26-sxfvv-worker-0-b5q8v
      kubevirt.io/size: medium
    managedFields:
    - apiVersion: kubevirt.io/v1alpha3
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:annotations:
            .: {}
            f:kubevirt.io/latest-observed-api-version: {}
            f:kubevirt.io/storage-observed-api-version: {}
          f:labels:
            .: {}
            f:kubevirt.io/domain: {}
            f:kubevirt.io/nodeName: {}
            f:kubevirt.io/size: {}
          f:ownerReferences: {}
        f:spec:
          .: {}
          f:domain:
            .: {}
            f:clock:
              .: {}
              f:timer:
                .: {}
                f:hpet:
                  .: {}
                  f:present: {}
                f:hyperv: {}
                f:pit:
                  .: {}
                  f:tickPolicy: {}
                f:rtc:
                  .: {}
                  f:tickPolicy: {}
                f:utc: {}
            f:cpu:
              .: {}
              f:cores: {}
              f:sockets: {}
              f:threads: {}
            f:devices:
              .: {}
              f:disks: {}
              f:inputs: {}
              f:interfaces: {}
            f:features:
              .: {}
              f:acpi: {}
              f:apic: {}
              f:hyperv:
                .: {}
                f:relaxed: {}
                f:spinlocks:
                  .: {}
                  f:spinlocks: {}
                f:vapic: {}
            f:firmware:
              .: {}
              f:uuid: {}
            f:machine:
              .: {}
              f:type: {}
            f:resources:
              .: {}
              f:requests:
                .: {}
                f:memory: {}
          f:evictionStrategy: {}
          f:networks: {}
          f:terminationGracePeriodSeconds: {}
          f:volumes: {}
        f:status:
          .: {}
          f:activePods:
            .: {}
            f:97ad6b4c-f7d7-4697-8fc6-51fe3c156af0: {}
          f:guestOSInfo: {}
          f:nodeName: {}
          f:qosClass: {}
      manager: virt-controller
      operation: Update
      time: "2020-11-30T13:00:25Z"
    - apiVersion: kubevirt.io/v1alpha3
      fieldsType: FieldsV1
      fieldsV1:
        f:status:
          f:conditions: {}
          f:interfaces: {}
          f:migrationMethod: {}
          f:phase: {}
      manager: virt-handler
      operation: Update
      time: "2020-11-30T13:10:54Z"
    name: win-16
    namespace: default
    ownerReferences:
    - apiVersion: kubevirt.io/v1alpha3
      blockOwnerDeletion: true
      controller: true
      kind: VirtualMachine
      name: win-16
      uid: b21a7628-bd37-46c6-bce0-9e7438aaba70
    resourceVersion: "12525576"
    selfLink: /apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances/win-16
    uid: 878539f3-18ce-4d4b-93e9-4b112d860c54
  spec:
    domain:
      clock:
        timer:
          hpet:
            present: false
          hyperv:
            present: true
          pit:
            present: true
            tickPolicy: delay
          rtc:
            present: true
            tickPolicy: catchup
          utc: {}
      cpu:
        cores: 1
        sockets: 1
        threads: 1
      devices:
        disks:
        - disk:
            bus: sata
          name: rootdisk
        inputs:
        - bus: usb
          name: tablet
          type: tablet
        interfaces:
        - masquerade: {}
          model: e1000e
          name: default
      features:
        acpi:
          enabled: true
        apic:
          enabled: true
        hyperv:
          relaxed:
            enabled: true
          spinlocks:
            enabled: true
            spinlocks: 8191
          vapic:
            enabled: true
      firmware:
        uuid: c5406853-6ccb-5524-a362-380a95246d99
      machine:
        type: pc-q35-rhel8.2.0
      resources:
        requests:
          cpu: 100m
          memory: 4Gi
    evictionStrategy: LiveMigrate
    networks:
    - name: default
      pod: {}
    terminationGracePeriodSeconds: 180
    volumes:
    - name: rootdisk
      persistentVolumeClaim:
        claimName: win-16
  status:
    activePods:
      97ad6b4c-f7d7-4697-8fc6-51fe3c156af0: ruth26-sxfvv-worker-0-b5q8v
    conditions:
    - lastProbeTime: null
      lastTransitionTime: null
      status: "True"
      type: LiveMigratable
    - lastProbeTime: null
      lastTransitionTime: "2020-11-30T13:00:22Z"
      status: "True"
      type: Ready
    - lastProbeTime: "2020-11-30T13:01:59Z"
      lastTransitionTime: null
      status: "True"
      type: AgentVersionNotSupported
    guestOSInfo: {}
    interfaces:
    - interfaceName: Ethernet 2
      ipAddress: 10.131.0.156
      ipAddresses:
      - 10.131.0.156
      mac: 02:00:00:62:3b:91
      name: default
    migrationMethod: LiveMigration
    nodeName: ruth26-sxfvv-worker-0-b5q8v
    phase: Running
    qosClass: Burstable
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

Omer - a virt bug?

These are the node's hyperv features:

feature.node.kubernetes.io/kvm-info-cap-hyperv-base: "true"
feature.node.kubernetes.io/kvm-info-cap-hyperv-frequencies: "true"
feature.node.kubernetes.io/kvm-info-cap-hyperv-ipi: "true"
feature.node.kubernetes.io/kvm-info-cap-hyperv-reenlightenment: "true"
feature.node.kubernetes.io/kvm-info-cap-hyperv-reset: "true"
feature.node.kubernetes.io/kvm-info-cap-hyperv-runtime: "true"
feature.node.kubernetes.io/kvm-info-cap-hyperv-synic: "true"
feature.node.kubernetes.io/kvm-info-cap-hyperv-synic2: "true"
feature.node.kubernetes.io/kvm-info-cap-hyperv-synictimer: "true"
feature.node.kubernetes.io/kvm-info-cap-hyperv-time: "true"
feature.node.kubernetes.io/kvm-info-cap-hyperv-tlbflush: "true"
feature.node.kubernetes.io/kvm-info-cap-hyperv-vpindex: "true"

Ruth, the nodeSelector will be on the pod, not on the VMI. Thus, after enabling the feature gate and launching a VM with Hyper-V enlightenments, please check the virt-launcher pod to see if the selectors are set.

Moving this bug to 2.7, as the related feature is also currently targeted to 2.7.
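One quick way to inspect just the selector field of the launcher pod, for example (pod name taken from the dump in the next comment):

$ oc get pod virt-launcher-win-16-48j4c -o jsonpath='{.spec.nodeSelector}'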
Fabian, the selector is not set on the pod. In any case, the Hyper-V features in the VM do not match those on the nodes, so the VMI should not have been running.

$ oc get pod virt-launcher-win-16-48j4c -oyaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.v1.cni.cncf.io/network-status: |-
      [{
          "name": "",
          "interface": "eth0",
          "ips": [
              "10.131.0.156"
          ],
          "default": true,
          "dns": {}
      }]
    k8s.v1.cni.cncf.io/networks-status: |-
      [{
          "name": "",
          "interface": "eth0",
          "ips": [
              "10.131.0.156"
          ],
          "default": true,
          "dns": {}
      }]
    kubevirt.io/domain: win-16
    traffic.sidecar.istio.io/kubevirtInterfaces: k6t-eth0
  creationTimestamp: "2020-11-30T13:00:01Z"
  generateName: virt-launcher-win-16-
  labels:
    kubevirt.io: virt-launcher
    kubevirt.io/created-by: 878539f3-18ce-4d4b-93e9-4b112d860c54
    kubevirt.io/domain: win-16
    kubevirt.io/size: medium
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:kubevirt.io/domain: {}
          f:traffic.sidecar.istio.io/kubevirtInterfaces: {}
        f:generateName: {}
        f:labels:
          .: {}
          f:kubevirt.io: {}
          f:kubevirt.io/created-by: {}
          f:kubevirt.io/domain: {}
          f:kubevirt.io/size: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"878539f3-18ce-4d4b-93e9-4b112d860c54"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:automountServiceAccountToken: {}
        f:containers:
          k:{"name":"compute"}:
            .: {}
            f:command: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:resources:
              .: {}
              f:limits:
                .: {}
                f:devices.kubevirt.io/kvm: {}
                f:devices.kubevirt.io/tun: {}
              f:requests:
                .: {}
                f:cpu: {}
                f:devices.kubevirt.io/kvm: {}
                f:devices.kubevirt.io/tun: {}
                f:memory: {}
            f:securityContext:
              .: {}
              f:capabilities:
                .: {}
                f:add: {}
                f:drop: {}
              f:privileged: {}
              f:runAsUser: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
            f:volumeDevices:
              .: {}
              k:{"devicePath":"/dev/rootdisk"}:
                .: {}
                f:devicePath: {}
                f:name: {}
            f:volumeMounts:
              .: {}
              k:{"mountPath":"/var/run/kubevirt-ephemeral-disks"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/var/run/kubevirt/container-disks"}:
                .: {}
                f:mountPath: {}
                f:mountPropagation: {}
                f:name: {}
              k:{"mountPath":"/var/run/kubevirt/sockets"}:
                .: {}
                f:mountPath: {}
                f:name: {}
              k:{"mountPath":"/var/run/libvirt"}:
                .: {}
                f:mountPath: {}
                f:name: {}
        f:dnsPolicy: {}
        f:enableServiceLinks: {}
        f:hostname: {}
        f:nodeSelector:
          .: {}
          f:kubevirt.io/schedulable: {}
        f:restartPolicy: {}
        f:schedulerName: {}
        f:securityContext:
          .: {}
          f:fsGroup: {}
          f:runAsUser: {}
          f:seLinuxOptions:
            .: {}
            f:type: {}
        f:terminationGracePeriodSeconds: {}
        f:volumes:
          .: {}
          k:{"name":"container-disks"}:
            .: {}
            f:emptyDir: {}
            f:name: {}
          k:{"name":"ephemeral-disks"}:
            .: {}
            f:emptyDir: {}
            f:name: {}
          k:{"name":"libvirt-runtime"}:
            .: {}
            f:emptyDir: {}
            f:name: {}
          k:{"name":"rootdisk"}:
            .: {}
            f:name: {}
            f:persistentVolumeClaim:
              .: {}
              f:claimName: {}
          k:{"name":"sockets"}:
            .: {}
            f:emptyDir: {}
            f:name: {}
          k:{"name":"virt-bin-share-dir"}:
            .: {}
            f:emptyDir: {}
            f:name: {}
    manager: virt-controller
    operation: Update
    time: "2020-11-30T13:00:01Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          f:k8s.v1.cni.cncf.io/network-status: {}
          f:k8s.v1.cni.cncf.io/networks-status: {}
    manager: multus
    operation: Update
    time: "2020-11-30T13:00:20Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
          k:{"type":"ContainersReady"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Initialized"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
        f:containerStatuses: {}
        f:hostIP: {}
        f:phase: {}
        f:podIP: {}
        f:podIPs:
          .: {}
          k:{"ip":"10.131.0.156"}:
            .: {}
            f:ip: {}
        f:startTime: {}
    manager: kubelet
    operation: Update
    time: "2020-11-30T13:00:22Z"
  name: virt-launcher-win-16-48j4c
  namespace: default
  ownerReferences:
  - apiVersion: kubevirt.io/v1alpha3
    blockOwnerDeletion: true
    controller: true
    kind: VirtualMachineInstance
    name: win-16
    uid: 878539f3-18ce-4d4b-93e9-4b112d860c54
  resourceVersion: "12517302"
  selfLink: /api/v1/namespaces/default/pods/virt-launcher-win-16-48j4c
  uid: 97ad6b4c-f7d7-4697-8fc6-51fe3c156af0
spec:
  automountServiceAccountToken: false
  containers:
  - command:
    - /usr/bin/virt-launcher
    - --qemu-timeout
    - 5m
    - --name
    - win-16
    - --uid
    - 878539f3-18ce-4d4b-93e9-4b112d860c54
    - --namespace
    - default
    - --kubevirt-share-dir
    - /var/run/kubevirt
    - --ephemeral-disk-dir
    - /var/run/kubevirt-ephemeral-disks
    - --container-disk-dir
    - /var/run/kubevirt/container-disks
    - --grace-period-seconds
    - "195"
    - --hook-sidecars
    - "0"
    - --less-pvc-space-toleration
    - "10"
    - --ovmf-path
    - /usr/share/OVMF
    image: registry.redhat.io/container-native-virtualization/virt-launcher@sha256:12fd627e191fb5b1c397a576bb0f1a17af12802e2eb1e604016e7f5d6ce58f51
    imagePullPolicy: IfNotPresent
    name: compute
    resources:
      limits:
        devices.kubevirt.io/kvm: "1"
        devices.kubevirt.io/tun: "1"
      requests:
        cpu: 100m
        devices.kubevirt.io/kvm: "1"
        devices.kubevirt.io/tun: "1"
        memory: "4481613825"
    securityContext:
      capabilities:
        add:
        - SYS_NICE
        drop:
        - NET_RAW
      privileged: false
      runAsUser: 0
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeDevices:
    - devicePath: /dev/rootdisk
      name: rootdisk
    volumeMounts:
    - mountPath: /var/run/kubevirt-ephemeral-disks
      name: ephemeral-disks
    - mountPath: /var/run/kubevirt/container-disks
      mountPropagation: HostToContainer
      name: container-disks
    - mountPath: /var/run/libvirt
      name: libvirt-runtime
    - mountPath: /var/run/kubevirt/sockets
      name: sockets
  dnsPolicy: ClusterFirst
  enableServiceLinks: false
  hostname: win-16
  imagePullSecrets:
  - name: default-dockercfg-4ch8b
  nodeName: ruth26-sxfvv-worker-0-b5q8v
  nodeSelector:
    kubevirt.io/schedulable: "true"
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 107
    runAsUser: 0
    seLinuxOptions:
      type: virt_launcher.process
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 210
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  - effect: NoSchedule
    key: node.kubernetes.io/memory-pressure
    operator: Exists
  volumes:
  - emptyDir: {}
    name: sockets
  - name: rootdisk
    persistentVolumeClaim:
      claimName: win-16
  - emptyDir: {}
    name: virt-bin-share-dir
  - emptyDir: {}
    name: libvirt-runtime
  - emptyDir: {}
    name: ephemeral-disks
  - emptyDir: {}
    name: container-disks
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-11-30T13:00:01Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-11-30T13:00:22Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-11-30T13:00:22Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-11-30T13:00:01Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://fefb94db8c454a63683f3bfc4da6c7ebbb6ccb1a64a50f3de5d0d9d861be8a2a
    image: registry.redhat.io/container-native-virtualization/virt-launcher@sha256:12fd627e191fb5b1c397a576bb0f1a17af12802e2eb1e604016e7f5d6ce58f51
    imageID: registry.redhat.io/container-native-virtualization/virt-launcher@sha256:12fd627e191fb5b1c397a576bb0f1a17af12802e2eb1e604016e7f5d6ce58f51
    lastState: {}
    name: compute
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2020-11-30T13:00:21Z"
  hostIP: 192.168.2.139
  phase: Running
  podIP: 10.131.0.156
  podIPs:
  - ip: 10.131.0.156
  qosClass: Burstable
  startTime: "2020-11-30T13:00:01Z"
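For reference, once the fix is in place, nodes that can satisfy a given enlightenment can be listed with a plain label selector (the label name here is assumed from the hyperv.node.kubevirt.io entries in the verification in the next comment):

$ oc get nodes -l hyperv.node.kubevirt.io/synic=true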
Verified: hyperv nodeSelector entries are added to the pod upon Windows VM creation.

nodeSelector:
  hyperv.node.kubevirt.io/frequencies: "true"
  hyperv.node.kubevirt.io/ipi: "true"
  hyperv.node.kubevirt.io/reenlightenment: "true"
  hyperv.node.kubevirt.io/reset: "true"
  hyperv.node.kubevirt.io/runtime: "true"
  hyperv.node.kubevirt.io/synic: "true"
  hyperv.node.kubevirt.io/synictimer: "true"
  hyperv.node.kubevirt.io/tlbflush: "true"
  hyperv.node.kubevirt.io/vpindex: "true"
  kubevirt.io/schedulable: "true"

# OpenShift cluster
4.8.0-fc.2

# HCO Operator
registry.redhat.io/container-native-virtualization/hyperconverged-cluster-operator@sha256:508529059070b9fc4cb701416f7911e6954b15424af7e68ff161e93fda805dd3
hyperconverged-cluster-operator-container-v4.8.0-51

# SSP Operator
registry.redhat.io/container-native-virtualization/kubevirt-ssp-operator@sha256:572b2ed7d34667520874e99402863f3011cd6f13452600451e715a4230c1f9b1
kubevirt-ssp-operator-container-v4.8.0-33

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.8.0 Images), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2920