Bug 1781497 - Node labeller adds cpu-model-Haswell true; creating a VM with this cpu model fails.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: SSP
Version: 2.2.0
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: ---
Target Release: 2.4.0
Assignee: Karel Šimon
QA Contact: Israel Pinto
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-12-10 07:11 UTC by Ruth Netser
Modified: 2020-07-03 04:51 UTC
CC List: 19 users

Fixed In Version: virt-launcher-container-v2.4.0-45
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-06-22 16:29:37 UTC
Target Upstream Version:
bgaydos: needinfo-


Attachments
virt-launcher pod log (7.24 MB, text/plain)
2019-12-10 07:11 UTC, Ruth Netser

Description Ruth Netser 2019-12-10 07:11:02 UTC
Created attachment 1643520 [details]
virt-launcher pod log

Description of problem:
With CNV 2.2, the node labeller adds node-labeller-feature.node.kubernetes.io/cpu-model-Haswell: "true" on a node whose CPU is:
model name : Intel Core Processor (Haswell, no TSX, IBRS)

Creating a VM with CPU model Haswell then fails with:
Message='the CPU is incompatible with host CPU: Host CPU does not provide required features: hle, rtm'



Version-Release number of selected component (if applicable):
$ oc get -n openshift-cnv csv
NAME                                      DISPLAY                                    VERSION   REPLACES                                  PHASE
kubevirt-hyperconverged-operator.v2.2.0   Container-native virtualization Operator   2.2.0     kubevirt-hyperconverged-operator.v2.1.0   Succeeded
local-storage-operator.v4.3.3             Local Storage                              4.3.3                                               Succeeded


How reproducible:
100%

Steps to Reproduce:
1. Use a node whose CPU is Intel Core Processor (Haswell, no TSX, IBRS).
2. Create a VM with this YAML:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  creationTimestamp: null
  labels:
    kubevirt-vm: vm-fedora
  name: vm-fedora-h
spec:
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt-vm: vm-fedora
    spec:
      domain:
        cpu:
          model: Haswell
        devices:
          disks:
          - disk:
              bus: virtio
            name: disk0
          - disk:
              bus: virtio
            name: cloudinitdisk
          rng: {}
        machine:
          type: ""
        resources:
          requests:
            memory: 2Gi
      evictionStrategy: LiveMigrate
      terminationGracePeriodSeconds: 0
      volumes:
      - name: disk0
        containerDisk:
          image: kubevirt/fedora-cloud-container-disk-demo:latest 
      - cloudInitNoCloud:
          userData: |-
            #cloud-config
            password: fedora
            chpasswd: { expire: False }
            bootcmd:
              - dnf install -y dmidecode stress
        name: cloudinitdisk
status: {}

Actual results:
The VMI remains in the Scheduled state:

  Warning  SyncFailed        35s (x25 over 38s)  virt-handler, host-172-16-0-18  server error. command SyncVMI failed: "LibvirtError(Code=91, Domain=31, Message='the CPU is incompatible with host CPU: Host CPU does not provide required features: hle, rtm')"


Expected results:
The node should not be labelled with cpu-model-Haswell="true": the host CPU lacks the TSX features (hle, rtm) that the plain Haswell model requires, so plain Haswell guests cannot run on it.
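For illustration, a minimal sketch of the filtering the labeller is expected to perform: advertise a CPU model only when the host provides every feature that model requires. The feature sets below are a hypothetical subset chosen to match this bug; the real labeller derives them from libvirt's CPU map.

```python
# Hypothetical, trimmed-down model->required-features map (not the real CPU map).
MODEL_FEATURES = {
    "Haswell": {"avx2", "bmi2", "hle", "rtm"},           # plain Haswell needs TSX
    "Haswell-noTSX-IBRS": {"avx2", "bmi2", "spec-ctrl"},
}
# Features of the affected host: no hle/rtm ("no TSX").
host_features = {"avx2", "bmi2", "spec-ctrl", "aes"}

def usable_models(model_features, host):
    """Return only the models whose required features are all present on the host."""
    return sorted(m for m, needed in model_features.items() if needed <= host)

print(usable_models(MODEL_FEATURES, host_features))  # ['Haswell-noTSX-IBRS']
```

With this filter in place, the node would carry cpu-model-Haswell-noTSX-IBRS but not cpu-model-Haswell, and scheduling a plain-Haswell VM onto it would be rejected up front instead of failing in libvirt.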

Additional info:

========== describe VMI ========
$ oc describe vmi vm-fedora-h
Name:         vm-fedora-h
Namespace:    default
Labels:       kubevirt-vm=vm-fedora
              kubevirt.io/nodeName=host-172-16-0-18
Annotations:  kubevirt.io/latest-observed-api-version: v1alpha3
              kubevirt.io/storage-observed-api-version: v1alpha3
API Version:  kubevirt.io/v1alpha3
Kind:         VirtualMachineInstance
Metadata:
  Creation Timestamp:  2019-12-10T06:58:47Z
  Finalizers:
    foregroundDeleteVirtualMachine
  Generate Name:  vm-fedora-h
  Generation:     679
  Owner References:
    API Version:           kubevirt.io/v1alpha3
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  VirtualMachine
    Name:                  vm-fedora-h
    UID:                   5f7ca7e7-4b1d-44b8-9243-670608bf56f2
  Resource Version:        773823
  Self Link:               /apis/kubevirt.io/v1alpha3/namespaces/default/virtualmachineinstances/vm-fedora-h
  UID:                     79716cf6-2a8e-4082-a3c2-12fd593445e8
Spec:
  Domain:
    Cpu:
      Model:  Haswell
    Devices:
      Disks:
        Disk:
          Bus:  virtio
        Name:   disk0
        Disk:
          Bus:  virtio
        Name:   cloudinitdisk
      Interfaces:
        Bridge:
        Name:  default
      Rng:
    Features:
      Acpi:
        Enabled:  true
    Firmware:
      Uuid:  684be450-be87-5337-a23c-d3eb7c1ef621
    Machine:
      Type:  q35
    Resources:
      Requests:
        Cpu:          100m
        Memory:       2Gi
  Eviction Strategy:  LiveMigrate
  Networks:
    Name:  default
    Pod:
  Termination Grace Period Seconds:  0
  Volumes:
    Container Disk:
      Image:              kubevirt/fedora-cloud-container-disk-demo:latest
      Image Pull Policy:  Always
    Name:                 disk0
    Cloud Init No Cloud:
      User Data:  #cloud-config
password: fedora
chpasswd: { expire: False }
bootcmd:
  - dnf install -y dmidecode stress
    Name:  cloudinitdisk
Status:
  Conditions:
    Last Probe Time:       <nil>
    Last Transition Time:  <nil>
    Message:               cannot migrate VMI with a bridge interface connected to a pod network
    Reason:                InterfaceNotLiveMigratable
    Status:                False
    Type:                  LiveMigratable
  Guest OS Info:
  Interfaces:
    Ip Address:      10.129.0.110
    Mac:             52:54:00:21:52:0e
    Name:            default
  Migration Method:  BlockMigration
  Node Name:         host-172-16-0-18
  Phase:             Scheduled
  Qos Class:         Burstable
Events:
  Type     Reason            Age                     From                            Message
  ----     ------            ----                    ----                            -------
  Normal   SuccessfulCreate  2m28s                   disruptionbudget-controller     Created PodDisruptionBudget kubevirt-disruption-budget-mnrxl
  Normal   SuccessfulCreate  2m28s                   virtualmachine-controller       Created virtual machine pod virt-launcher-vm-fedora-h-mzdwk
  Warning  SyncFailed        2m13s (x25 over 2m16s)  virt-handler, host-172-16-0-18  server error. command SyncVMI failed: "LibvirtError(Code=91, Domain=31, Message='the CPU is incompatible with host CPU: Host CPU does not provide required features: hle, rtm')"



==== pod log attached

========= Node cpu ========
$ cat /proc/cpuinfo |grep 'Intel Core'
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
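The "no TSX" in the model name above means the TSX features hle and rtm are absent from the host. A quick check of that, assuming a cpuinfo flags line like the one this guest CPU reports (illustrative subset, not the full flags list):

```python
# Illustrative flags line in /proc/cpuinfo format for a Haswell-without-TSX CPU.
flags_line = (
    "flags : fpu vme de pse tsc msr pae cx8 apic sep cmov sse sse2 ssse3 fma "
    "sse4_1 sse4_2 x2apic movbe popcnt aes avx f16c rdrand fsgsbase bmi1 avx2 "
    "smep bmi2 erms invpcid"
)
flags = set(flags_line.split(":", 1)[1].split())
# The plain Haswell model requires both TSX features; neither is present.
missing = [f for f in ("hle", "rtm") if f not in flags]
print(missing)  # ['hle', 'rtm']
```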


========= Node yaml ========
$ oc get node host-172-16-0-15 -oyaml
apiVersion: v1
kind: Node
metadata:
  annotations:
    csi.volume.kubernetes.io/nodeid: '{"rook-ceph.rbd.csi.ceph.com":"host-172-16-0-15"}'
    kubevirt.io/heartbeat: "2019-12-10T07:04:46Z"
    machineconfiguration.openshift.io/currentConfig: rendered-worker-ff16a0372e6c37545b139db8cc189afe
    machineconfiguration.openshift.io/desiredConfig: rendered-worker-ff16a0372e6c37545b139db8cc189afe
    machineconfiguration.openshift.io/reason: ""
    machineconfiguration.openshift.io/state: Done
    node-labeller-feature.node.kubernetes.io/cpu-feature-aes: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-avx: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-avx2: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-bmi1: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-bmi2: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-erms: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-f16c: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-fma: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-fsgsbase: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-hle: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-invpcid: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-movbe: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-pcid: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-pclmuldq: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-popcnt: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-rdrand: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-rdtscp: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-rtm: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-smep: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-spec-ctrl: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-sse4.2: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-svm: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-tsc-deadline: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-vme: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-x2apic: "true"
    node-labeller-feature.node.kubernetes.io/cpu-feature-xsave: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-Haswell: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-Haswell-noTSX: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-Haswell-noTSX-IBRS: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-IvyBridge: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-IvyBridge-IBRS: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-Nehalem: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-Nehalem-IBRS: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-Opteron_G1: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-Opteron_G2: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-Penryn: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-SandyBridge: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-SandyBridge-IBRS: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-Westmere: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-Westmere-IBRS: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-kvm32: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-kvm64: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-qemu32: "true"
    node-labeller-feature.node.kubernetes.io/cpu-model-qemu64: "true"
    node-labeller-feature.node.kubernetes.io/kvm-info-cap-hyperv-base: "true"
    node-labeller-feature.node.kubernetes.io/kvm-info-cap-hyperv-frequencies: "true"
    node-labeller-feature.node.kubernetes.io/kvm-info-cap-hyperv-ipi: "true"
    node-labeller-feature.node.kubernetes.io/kvm-info-cap-hyperv-reenlightenment: "true"
    node-labeller-feature.node.kubernetes.io/kvm-info-cap-hyperv-reset: "true"
    node-labeller-feature.node.kubernetes.io/kvm-info-cap-hyperv-runtime: "true"
    node-labeller-feature.node.kubernetes.io/kvm-info-cap-hyperv-synic: "true"
    node-labeller-feature.node.kubernetes.io/kvm-info-cap-hyperv-synic2: "true"
    node-labeller-feature.node.kubernetes.io/kvm-info-cap-hyperv-synictimer: "true"
    node-labeller-feature.node.kubernetes.io/kvm-info-cap-hyperv-time: "true"
    node-labeller-feature.node.kubernetes.io/kvm-info-cap-hyperv-tlbflush: "true"
    node-labeller-feature.node.kubernetes.io/kvm-info-cap-hyperv-vpindex: "true"
    volumes.kubernetes.io/controller-managed-attach-detach: "true"
  creationTimestamp: "2019-12-09T10:25:36Z"
  labels:
    beta.kubernetes.io/arch: amd64
    beta.kubernetes.io/os: linux
    cpumanager: "false"
    feature.node.kubernetes.io/cpu-feature-aes: "true"
    feature.node.kubernetes.io/cpu-feature-avx: "true"
    feature.node.kubernetes.io/cpu-feature-avx2: "true"
    feature.node.kubernetes.io/cpu-feature-bmi1: "true"
    feature.node.kubernetes.io/cpu-feature-bmi2: "true"
    feature.node.kubernetes.io/cpu-feature-erms: "true"
    feature.node.kubernetes.io/cpu-feature-f16c: "true"
    feature.node.kubernetes.io/cpu-feature-fma: "true"
    feature.node.kubernetes.io/cpu-feature-fsgsbase: "true"
    feature.node.kubernetes.io/cpu-feature-hle: "true"
    feature.node.kubernetes.io/cpu-feature-invpcid: "true"
    feature.node.kubernetes.io/cpu-feature-movbe: "true"
    feature.node.kubernetes.io/cpu-feature-pcid: "true"
    feature.node.kubernetes.io/cpu-feature-pclmuldq: "true"
    feature.node.kubernetes.io/cpu-feature-popcnt: "true"
    feature.node.kubernetes.io/cpu-feature-rdrand: "true"
    feature.node.kubernetes.io/cpu-feature-rdtscp: "true"
    feature.node.kubernetes.io/cpu-feature-rtm: "true"
    feature.node.kubernetes.io/cpu-feature-smep: "true"
    feature.node.kubernetes.io/cpu-feature-spec-ctrl: "true"
    feature.node.kubernetes.io/cpu-feature-sse4.2: "true"
    feature.node.kubernetes.io/cpu-feature-svm: "true"
    feature.node.kubernetes.io/cpu-feature-tsc-deadline: "true"
    feature.node.kubernetes.io/cpu-feature-vme: "true"
    feature.node.kubernetes.io/cpu-feature-x2apic: "true"
    feature.node.kubernetes.io/cpu-feature-xsave: "true"
    feature.node.kubernetes.io/cpu-model-Haswell: "true"
    feature.node.kubernetes.io/cpu-model-Haswell-noTSX: "true"
    feature.node.kubernetes.io/cpu-model-Haswell-noTSX-IBRS: "true"
    feature.node.kubernetes.io/cpu-model-IvyBridge: "true"
    feature.node.kubernetes.io/cpu-model-IvyBridge-IBRS: "true"
    feature.node.kubernetes.io/cpu-model-Nehalem: "true"
    feature.node.kubernetes.io/cpu-model-Nehalem-IBRS: "true"
    feature.node.kubernetes.io/cpu-model-Opteron_G1: "true"
    feature.node.kubernetes.io/cpu-model-Opteron_G2: "true"
    feature.node.kubernetes.io/cpu-model-Penryn: "true"
    feature.node.kubernetes.io/cpu-model-SandyBridge: "true"
    feature.node.kubernetes.io/cpu-model-SandyBridge-IBRS: "true"
    feature.node.kubernetes.io/cpu-model-Westmere: "true"
    feature.node.kubernetes.io/cpu-model-Westmere-IBRS: "true"
    feature.node.kubernetes.io/cpu-model-kvm32: "true"
    feature.node.kubernetes.io/cpu-model-kvm64: "true"
    feature.node.kubernetes.io/cpu-model-qemu32: "true"
    feature.node.kubernetes.io/cpu-model-qemu64: "true"
    feature.node.kubernetes.io/kvm-info-cap-hyperv-base: "true"
    feature.node.kubernetes.io/kvm-info-cap-hyperv-frequencies: "true"
    feature.node.kubernetes.io/kvm-info-cap-hyperv-ipi: "true"
    feature.node.kubernetes.io/kvm-info-cap-hyperv-reenlightenment: "true"
    feature.node.kubernetes.io/kvm-info-cap-hyperv-reset: "true"
    feature.node.kubernetes.io/kvm-info-cap-hyperv-runtime: "true"
    feature.node.kubernetes.io/kvm-info-cap-hyperv-synic: "true"
    feature.node.kubernetes.io/kvm-info-cap-hyperv-synic2: "true"
    feature.node.kubernetes.io/kvm-info-cap-hyperv-synictimer: "true"
    feature.node.kubernetes.io/kvm-info-cap-hyperv-time: "true"
    feature.node.kubernetes.io/kvm-info-cap-hyperv-tlbflush: "true"
    feature.node.kubernetes.io/kvm-info-cap-hyperv-vpindex: "true"
    kubernetes.io/arch: amd64
    kubernetes.io/hostname: host-172-16-0-15
    kubernetes.io/os: linux
    kubevirt.io/schedulable: "true"
    node-role.kubernetes.io/worker: ""
    node.openshift.io/os_id: rhcos
  name: host-172-16-0-15
  resourceVersion: "777231"
  selfLink: /api/v1/nodes/host-172-16-0-15
  uid: 7ce0a899-7d03-41b4-a3d1-b346bbd045b6
spec: {}
status:
  addresses:
  - address: 172.16.0.15
    type: InternalIP
  - address: host-172-16-0-15
    type: Hostname
  allocatable:
    cpu: 7500m
    devices.kubevirt.io/kvm: "110"
    devices.kubevirt.io/tun: "110"
    devices.kubevirt.io/vhost-net: "110"
    ephemeral-storage: "38146022952"
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 15806156Ki
    ovs-cni.network.kubevirt.io/br0: 1k
    pods: "250"
  capacity:
    cpu: "8"
    devices.kubevirt.io/kvm: "110"
    devices.kubevirt.io/tun: "110"
    devices.kubevirt.io/vhost-net: "110"
    ephemeral-storage: 41391084Ki
    hugepages-1Gi: "0"
    hugepages-2Mi: "0"
    memory: 16420556Ki
    ovs-cni.network.kubevirt.io/br0: 1k
    pods: "250"
  conditions:
  - lastHeartbeatTime: "2019-12-10T07:04:52Z"
    lastTransitionTime: "2019-12-09T10:25:37Z"
    message: kubelet has sufficient memory available
    reason: KubeletHasSufficientMemory
    status: "False"
    type: MemoryPressure
  - lastHeartbeatTime: "2019-12-10T07:04:52Z"
    lastTransitionTime: "2019-12-09T10:25:37Z"
    message: kubelet has no disk pressure
    reason: KubeletHasNoDiskPressure
    status: "False"
    type: DiskPressure
  - lastHeartbeatTime: "2019-12-10T07:04:52Z"
    lastTransitionTime: "2019-12-09T10:25:37Z"
    message: kubelet has sufficient PID available
    reason: KubeletHasSufficientPID
    status: "False"
    type: PIDPressure
  - lastHeartbeatTime: "2019-12-10T07:04:52Z"
    lastTransitionTime: "2019-12-09T10:26:07Z"
    message: kubelet is posting ready status
    reason: KubeletReady
    status: "True"
    type: Ready
  daemonEndpoints:
    kubeletEndpoint:
      Port: 10250
  images:
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ade99a39a91062d3d71abf71f9ec837480e2e7e013475bee9b3162b84f42974d
    - <none>:<none>
    sizeBytes: 1556000181
  - names:
    - quay.io/redhat/cnv-tests-fedora-staging@sha256:ecffdc13c7c6d484c28f737455950b2fdc154fe52a6e3ff7d4bd884a1edfac7d
    - quay.io/redhat/cnv-tests-fedora-staging:31
    sizeBytes: 1174329850
  - names:
    - quay.io/cephcsi/cephcsi@sha256:fda7f716014a7b0f8faa794d2958b1e627967d6f77ca5c008658e80770d1ab6e
    - quay.io/cephcsi/cephcsi:v1.2.1
    sizeBytes: 1011347406
  - names:
    - docker.io/rook/ceph@sha256:b7a8562844b5458026ea8ba50c8930b1130dfa3cb29770f2de094d962eaba8e9
    - docker.io/rook/ceph@sha256:ee0af4131b9b9fdb66b2ddbbff13edbeb72cb3e1491cc61f47b78e541504ae65
    - docker.io/rook/ceph:v1.1.2
    sizeBytes: 1001701073
  - names:
    - docker.io/ceph/ceph@sha256:706e38a9fa64d2f45c47116206478703502b70ad8546498f5a7e42fc0c73c009
    - docker.io/ceph/ceph@sha256:fb86551e52e8909bf938a6d8652e49b61207d8bfb21c63b26ff38fd6736e0322
    - docker.io/ceph/ceph:v14.2.4-20190917
    sizeBytes: 922435365
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f759e60e9feeee12602198cd3206304f66feeadda3dbb6edebe6ac488ca03d
    - <none>:<none>
    sizeBytes: 829304950
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-launcher@sha256:6afaaa2fc4a3cf894c4e9b27e2ba43638260e1640d6e46178600c2aca5ac04f0
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-launcher@sha256:add6b86993638ad009da9021311437d8a52cd40bd2fde61bb8e9b9203238c46e
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-launcher:v2.2.0-10
    sizeBytes: 766228357
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa2271b7cdf177aa0368fdf854027a5c54d03b90f089701190b2533147d4469d
    - <none>:<none>
    sizeBytes: 711784219
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubevirt-ssp-operator@sha256:27dc5c4af9f3536b77190cc6a1e8880893fdca1e45f07fa40828a1ce5d01efcb
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubevirt-ssp-operator@sha256:2a1a34417f10066b236d7e89e364c164a098530024f6f10742704c5e46873746
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubevirt-ssp-operator:v2.2.0-14
    sizeBytes: 653814113
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:272de9d1dfb5e665ac02305f629c3ef22b92c2db0b33d19d14d2ee7b5c2b5fb5
    - <none>:<none>
    sizeBytes: 467733144
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fd355d207a75d6944223c315e3dee803fbacf071912359909f0d443a5e6efec
    - <none>:<none>
    sizeBytes: 459530452
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a439c4a128260accac47c791bed2a318f95bdd17d93b5903ab7f8780ef99baf
    - <none>:<none>
    sizeBytes: 441937100
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7882fa3059abf330a1eaa41cebc2751e4a0e18c573907a1cb69ea55c3a9840dc
    - <none>:<none>
    sizeBytes: 435367910
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-ovs-cni-marker@sha256:7dad8446574601136636aac912b9006f9a7ed75292f1e21362c3324f0c3a37d8
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-ovs-cni-marker@sha256:7e9be76fd283772693f257c01f0a7a38c4dac14ffd368e1021ad664786565f39
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-ovs-cni-marker:v2.2.0-3
    sizeBytes: 420239074
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbffb2af5f428b842d95d7453c17fc38e72ee3e3e7bb9450ffb3fd31b6ad642c
    - <none>:<none>
    sizeBytes: 372617232
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:59473eab2c5aecf456904dca041d63e1a0d880a1e4b3c2a9b1a33f7b776c6d3c
    - <none>:<none>
    sizeBytes: 368829368
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3effc05bdafc82c61f5d71cd67b4132f6b7896a3a839698e8b4db334ca8d4635
    - <none>:<none>
    sizeBytes: 341180110
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c51252d457f4975303dee5edcb55564cd5d9304b98b965288cb0ed94eb0ee8a
    - <none>:<none>
    sizeBytes: 332812312
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:459419e2b363c1ae7692178eba24d0270bb769bc8c323538658879351bc5360e
    - <none>:<none>
    sizeBytes: 315054604
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e8d7a27c283c0b494df00435077ad5b9e4959a6bd39846e60022b56bfc602c24
    - <none>:<none>
    sizeBytes: 311855135
  - names:
    - docker.io/kubevirt/fedora-cloud-container-disk-demo@sha256:1d4f6f6d52974db84d2e1a031b6f634254fd97823c05d13d98d124846b001d0a
    - docker.io/kubevirt/fedora-cloud-container-disk-demo:latest
    sizeBytes: 307508366
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:234cb579f88f53781e547f202152eb273cf569073818fdeb4ba7cd7086071065
    - <none>:<none>
    sizeBytes: 306447854
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e09cf3dfab0431cf2630fe207567b1e7e98b42091ac8d76d90120402211938eb
    - <none>:<none>
    sizeBytes: 301791346
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f7fda3ad53ad3f5ca04121a033c88f0cc963ac2c136ce0d85828a2dfa9ed1186
    - <none>:<none>
    sizeBytes: 300114146
  - names:
    - quay.io/openshift/origin-local-storage-static-provisioner@sha256:873e4138f9c01976cc6c95a9390d47b0ab235e743f00ae2f1fa95835af6f8663
    - quay.io/openshift/origin-local-storage-static-provisioner:latest
    sizeBytes: 296392882
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-importer@sha256:1e89a26fed36e3eaf477cafdf0af7223c9e62e0784aff810e697161699343716
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-importer@sha256:8444b0d482a53354ac6749986d928d2931dcfe508484105036b5b72c6086ac33
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-cdi-importer:v2.2.0-3
    sizeBytes: 288810771
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28eea6df6af9a6c613feeb4bfa20a60f631049cd13b85a3fe7eb55b087b5d333
    - <none>:<none>
    sizeBytes: 285868112
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubernetes-nmstate-handler-rhel8@sha256:625991360c65be470a5282188936546bf3ce30eeca7ffe64c93c2f568dd803af
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubernetes-nmstate-handler-rhel8@sha256:a9932b8540b82861a8c6a566ab9fdad9af91cf1e54f219fbcf6a0cdab4aa946c
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubernetes-nmstate-handler-rhel8:v2.2.0-12
    sizeBytes: 285095810
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:34b4a65bd731ce2810ba2b0e9d2e56596e70b2a735b75b3f7ee3596f0071e1d3
    - <none>:<none>
    sizeBytes: 281283135
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:96f5cf1a7738f82a106cf49ee78b32a7dd16f119807db08905e362160efc3679
    - <none>:<none>
    sizeBytes: 277788312
  - names:
    - quay.io/gnufied/local-diskmaker@sha256:4ce79a657038e1ecd72f4deaddbd1fb30e23b52770afb1e018bf9d398da76546
    - quay.io/gnufied/local-diskmaker:v0.0.14
    sizeBytes: 275022288
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2592294b8965d8a767f3a52dc3a1406e8e814e8fb762df0bef941470d39403cc
    - <none>:<none>
    sizeBytes: 271210379
  - names:
    - docker.io/kubevirt/winrmcli@sha256:36626b21611d4dc93f86221307f5b9e29fa914257f6000e5219719e2cc32cc0d
    - docker.io/kubevirt/winrmcli:latest
    sizeBytes: 269517945
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-operator@sha256:02df2ae5b35e57828f8242c47a46e51f9e7b6e6b773455384137990ea75a861b
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-operator@sha256:053c628d4c66708a722bf3fa8c465f5005b53b894ffc6f9b184a4df161846582
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-operator:v2.2.0-10
    sizeBytes: 259453792
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bda52b0ce2df0c48775c1716b689dfc47c6cd633a0d8bdc82b85d113e3714d6b
    - <none>:<none>
    sizeBytes: 257960046
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9b4dfd8ed97552e9b372247b6ef8a9e5b729bec4ff1a65059a5a139be9b16fd
    - <none>:<none>
    sizeBytes: 256929413
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ce54e4ec444a156fa4d467db67e6c4916efb8cd6ba00d212c905ea1b443ce430
    - <none>:<none>
    sizeBytes: 255881431
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33cb4b5424a43b257cf9ecd02358b6dd9ea8036347c137a32a175475ed2afdd3
    - <none>:<none>
    sizeBytes: 250744382
  - names:
    - quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7786fed414aeffb21d16c1d440ef5b8893b46f6238de9d617375642f33997370
    - <none>:<none>
    sizeBytes: 238782787
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-hyperconverged-cluster-operator@sha256:0b240f7cfc1706668da80c617512ff1e09e463300615a6129401759494f041cf
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-hyperconverged-cluster-operator@sha256:7c954963110e3e202b7d3f90e06ce8a073866cb98cdd4ab3b8c447d5fdf7fa33
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-hyperconverged-cluster-operator:v2.2.0-9
    sizeBytes: 228969104
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-api@sha256:43d9ae5caa172fa241b078c0f28e73a474134bd0eccfe3a47186587c6e908aaf
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-api@sha256:da1435f588d5d2ddc662216e7c3a3ba1c96b82a59e306dead5ad040a75d8cb8f
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-api:v2.2.0-10
    sizeBytes: 219790659
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-controller@sha256:0b69d07e93e23a764c9da125f1210b28b72a5d6dea2e381dbed8790fe938a95e
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-controller@sha256:1bb84411616746249681645b0c4f9cdaa4bbf546a65673162f3bd152dae2c14a
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-controller:v2.2.0-10
    sizeBytes: 217490826
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-hostpath-provisioner-rhel8-operator@sha256:3e1e40723458fcf63935251fd3fa525fa80da21971e59ad64bc0dc163abc1aff
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-hostpath-provisioner-rhel8-operator@sha256:63ef69c7965604bd0538618a01bb2cbe4898ab9d92cbd41736bc9f1c99fb65b0
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-hostpath-provisioner-rhel8-operator:v2.2.0-7
    sizeBytes: 198609275
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-handler@sha256:5149922c6f07dec896b8f4d98e30e70e1012e27d843f3865b9cf543de8564201
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-handler@sha256:b3102dc9e24bda520e13b3d535bdaca64cc535693054749d7cf569bd34def24e
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-virt-handler:v2.2.0-10
    sizeBytes: 194483545
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubemacpool@sha256:0341b4f53e2cca5b21608d689623aec909f9f3e6f190ac02080f9e4a465471f6
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubemacpool@sha256:8ae977b633866ec33ac365282e0da4eb2f80176155ca7f7a5c982627710e9917
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubemacpool:v2.2.0-3
    sizeBytes: 152457385
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-hostpath-provisioner@sha256:1306c3b3d381a1568fa5f0af0ce9b8103b37c8f3221a457a0f19a1567e1a1f42
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-hostpath-provisioner@sha256:3f305b8c23c77acae19ca270e44a324119afa6a5e4b811da744cab63b0c1dc75
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-hostpath-provisioner:v2.2.0
    sizeBytes: 150959904
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubevirt-template-validator@sha256:38a04a6328e941c70118f265e4cc34c01fe52c9351c2efd2fc3bf045460d8ee4
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubevirt-template-validator@sha256:f540d3a92d84f70951350b1287ad8be72f751a65e1cd2d4f5f595fd904d143d2
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubevirt-template-validator:v2.2.0-4
    sizeBytes: 150352024
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-bridge-marker@sha256:3aa6e0fcde5b98d806d422752282af01cc2bf40b7a7dad15b271a9d355395dea
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-bridge-marker@sha256:b783ed3d9e73aaba5fa1daa7fdce02ba2e261ffd6b111f11f75e833919a59e37
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-bridge-marker:v2.2.0-2
    sizeBytes: 142788298
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubevirt-cpu-node-labeller@sha256:604be5572b6ecffbb1a1d03b811190aaa372a86b35b7a7e9ee547330f68efb96
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubevirt-cpu-node-labeller@sha256:ae9c7081255bd08f81d2198bcf07dc908991a911d28af925891844ba3b8f3750
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-kubevirt-cpu-node-labeller:v2.2.0-2
    sizeBytes: 140275123
  - names:
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-cnv-containernetworking-plugins@sha256:5cd3d66f82ff56c7859378c1927014c7ebe6d00ab5b185b58581859db2d7751f
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-cnv-containernetworking-plugins@sha256:c5efa4190104277f4916e8c40bb8624d244f5f00cc1466c3162606a8ad09c658
    - registry-proxy.engineering.redhat.com/rh-osbs/container-native-virtualization-cnv-containernetworking-plugins:v2.2.0-2
    sizeBytes: 115212031
  nodeInfo:
    architecture: amd64
    bootID: b2a5ec39-d797-46e8-8690-c8e345e287ee
    containerRuntimeVersion: cri-o://1.16.1-1.dev.rhaos4.3.git356cd12.el8
    kernelVersion: 4.18.0-147.0.3.el8_1.x86_64
    kubeProxyVersion: v1.16.2
    kubeletVersion: v1.16.2
    machineID: 61fa799d543144f39f168d7505459bee
    operatingSystem: linux
    osImage: Red Hat Enterprise Linux CoreOS 43.81.201912040340.0 (Ootpa)
    systemUUID: 61fa799d-5431-44f3-9f16-8d7505459bee


======== libvirt output =======
[root@vm-fedora-h /]# virsh domcapabilities --machine q35 --arch x86_64 --virttype kvm
<domainCapabilities>
  <path>/usr/libexec/qemu-kvm</path>
  <domain>kvm</domain>
  <machine>pc-q35-rhel8.1.0</machine>
  <arch>x86_64</arch>
  <vcpu max='384'/>
  <iothreads supported='yes'/>
  <os supported='yes'>
    <enum name='firmware'>
      <value>efi</value>
    </enum>
    <loader supported='yes'>
      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
      <enum name='type'>
        <value>rom</value>
        <value>pflash</value>
      </enum>
      <enum name='readonly'>
        <value>yes</value>
        <value>no</value>
      </enum>
      <enum name='secure'>
        <value>yes</value>
        <value>no</value>
      </enum>
    </loader>
  </os>
  <cpu>
    <mode name='host-passthrough' supported='yes'/>
    <mode name='host-model' supported='yes'>
      <model fallback='forbid'>Haswell-noTSX-IBRS</model>
      <vendor>Intel</vendor>
      <feature policy='require' name='vme'/>
      <feature policy='require' name='ss'/>
      <feature policy='require' name='f16c'/>
      <feature policy='require' name='rdrand'/>
      <feature policy='require' name='hypervisor'/>
      <feature policy='require' name='arat'/>
      <feature policy='require' name='tsc_adjust'/>
      <feature policy='require' name='umip'/>
      <feature policy='require' name='md-clear'/>
      <feature policy='require' name='stibp'/>
      <feature policy='require' name='arch-capabilities'/>
      <feature policy='require' name='ssbd'/>
      <feature policy='require' name='xsaveopt'/>
      <feature policy='require' name='pdpe1gb'/>
      <feature policy='require' name='abm'/>
      <feature policy='require' name='skip-l1dfl-vmentry'/>
    </mode>
    <mode name='custom' supported='yes'>
      <model usable='yes'>qemu64</model>
      <model usable='yes'>qemu32</model>
      <model usable='no'>phenom</model>
      <model usable='yes'>pentium3</model>
      <model usable='yes'>pentium2</model>
      <model usable='yes'>pentium</model>
      <model usable='yes'>n270</model>
      <model usable='yes'>kvm64</model>
      <model usable='yes'>kvm32</model>
      <model usable='no'>cpu64-rhel6</model>
      <model usable='yes'>coreduo</model>
      <model usable='yes'>core2duo</model>
      <model usable='no'>athlon</model>
      <model usable='yes'>Westmere-IBRS</model>
      <model usable='yes'>Westmere</model>
      <model usable='no'>Skylake-Server-IBRS</model>
      <model usable='no'>Skylake-Server</model>
      <model usable='no'>Skylake-Client-IBRS</model>
      <model usable='no'>Skylake-Client</model>
      <model usable='yes'>SandyBridge-IBRS</model>
      <model usable='yes'>SandyBridge</model>
      <model usable='yes'>Penryn</model>
      <model usable='no'>Opteron_G5</model>
      <model usable='no'>Opteron_G4</model>
      <model usable='no'>Opteron_G3</model>
      <model usable='yes'>Opteron_G2</model>
      <model usable='yes'>Opteron_G1</model>
      <model usable='yes'>Nehalem-IBRS</model>
      <model usable='yes'>Nehalem</model>
      <model usable='yes'>IvyBridge-IBRS</model>
      <model usable='yes'>IvyBridge</model>
      <model usable='no'>Icelake-Server</model>
      <model usable='no'>Icelake-Client</model>
      <model usable='yes'>Haswell-noTSX-IBRS</model>
      <model usable='yes'>Haswell-noTSX</model>
      <model usable='no'>Haswell-IBRS</model>
      <model usable='yes'>Haswell</model>
      <model usable='no'>EPYC-IBPB</model>
      <model usable='no'>EPYC</model>
      <model usable='yes'>Conroe</model>
      <model usable='no'>Cascadelake-Server</model>
      <model usable='no'>Broadwell-noTSX-IBRS</model>
      <model usable='no'>Broadwell-noTSX</model>
      <model usable='no'>Broadwell-IBRS</model>
      <model usable='no'>Broadwell</model>
      <model usable='yes'>486</model>
    </mode>
  </cpu>
  <devices>
    <disk supported='yes'>
      <enum name='diskDevice'>
        <value>disk</value>
        <value>cdrom</value>
        <value>floppy</value>
        <value>lun</value>
      </enum>
      <enum name='bus'>
        <value>fdc</value>
        <value>scsi</value>
        <value>virtio</value>
        <value>usb</value>
        <value>sata</value>
      </enum>
      <enum name='model'>
        <value>virtio</value>
        <value>virtio-transitional</value>
        <value>virtio-non-transitional</value>
      </enum>
    </disk>
    <graphics supported='yes'>
      <enum name='type'>
        <value>sdl</value>
        <value>vnc</value>
        <value>spice</value>
      </enum>
    </graphics>
    <video supported='yes'>
      <enum name='modelType'>
        <value>vga</value>
        <value>cirrus</value>
        <value>qxl</value>
        <value>virtio</value>
      </enum>
    </video>
    <hostdev supported='yes'>
      <enum name='mode'>
        <value>subsystem</value>
      </enum>
      <enum name='startupPolicy'>
        <value>default</value>
        <value>mandatory</value>
        <value>requisite</value>
        <value>optional</value>
      </enum>
      <enum name='subsysType'>
        <value>usb</value>
        <value>pci</value>
        <value>scsi</value>
      </enum>
      <enum name='capsType'/>
      <enum name='pciBackend'/>
    </hostdev>
  </devices>
  <features>
    <gic supported='no'/>
    <vmcoreinfo supported='yes'/>
    <genid supported='yes'/>
    <sev supported='no'/>
  </features>
</domainCapabilities>

Comment 2 Karel Šimon 2019-12-10 09:20:50 UTC
The CPU plugin (which is used by the node labeller to get all supported CPUs) parses XML from libvirt. It adds only the CPU models that libvirt reports as usable. This issue is most likely caused by libvirt wrongly detecting which CPU models are supported.
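The parsing step described above can be sketched as follows. This is a minimal illustration, not the actual node-labeller code: it extracts the custom-mode models with usable='yes' from `virsh domcapabilities` output (a trimmed-down version of the XML attached to this bug).

```python
import xml.etree.ElementTree as ET

# Trimmed domcapabilities output, reduced to the Haswell entries from
# Ruth's system in this bug report.
DOMCAPS = """
<domainCapabilities>
  <cpu>
    <mode name='custom' supported='yes'>
      <model usable='yes'>Haswell-noTSX-IBRS</model>
      <model usable='yes'>Haswell-noTSX</model>
      <model usable='no'>Haswell-IBRS</model>
      <model usable='yes'>Haswell</model>
    </mode>
  </cpu>
</domainCapabilities>
"""

def usable_models(domcaps_xml: str) -> list[str]:
    """Return custom-mode CPU models that libvirt reports as usable."""
    root = ET.fromstring(domcaps_xml)
    return [
        m.text
        for m in root.findall(".//cpu/mode[@name='custom']/model")
        if m.get("usable") == "yes"
    ]

# A feature.node.kubernetes.io/cpu-model-<MODEL>=true label would be
# emitted for each returned model, so a wrong usable='yes' from libvirt
# propagates directly into a wrong node label.
print(usable_models(DOMCAPS))
```

This illustrates why the labeller itself is behaving correctly here: it faithfully mirrors whatever libvirt claims, including the bogus `usable='yes'` on Haswell.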

Comment 3 sgott 2019-12-10 22:19:54 UTC
Karel, assigning this to you for follow up.

Comment 7 sgott 2019-12-12 14:00:04 UTC
Jarda, I'm unable to reproduce this bug on my system. Is it possible that libvirt adapts what it reports as supported CPU models based on the system it is running on? On my system the Haswell models appear like so:

      <model usable='yes'>Haswell-noTSX-IBRS</model>
      <model usable='yes'>Haswell-noTSX</model>
      <model usable='yes'>Haswell-IBRS</model>
      <model usable='yes'>Haswell</model>

But Ruth is seeing this:

      <model usable='yes'>Haswell-noTSX-IBRS</model>
      <model usable='yes'>Haswell-noTSX</model>
      <model usable='no'>Haswell-IBRS</model>
      <model usable='no'>Haswell</model>

The above is using libvirt 5.6.0. When checking compatibility on libvirt 5.0.0, her system matched the output I'm seeing. It's not clear to me why this would change between libvirt versions, or why it would differ between machines running the same version.

Comment 8 Jaroslav Suchanek 2019-12-12 16:24:21 UTC
Jiri, can you please advise? Thanks.

Comment 9 Jiri Denemark 2019-12-13 09:14:51 UTC
I don't see what version of qemu-kvm is used here, but most likely this is a duplicate of bug 1779078.

Comment 15 Fabian Deutsch 2019-12-20 14:57:29 UTC
In comment #7 it was mentioned that the bug is that Haswell functionality is falsely advertised.

A workaround for users running into this problem is to manually fix the incorrect label:

$ oc label node $THE_NODE_NAME feature.node.kubernetes.io/cpu-model-Haswell=false

Furthermore the user can just select a different CPU model for the time being.


Ruth, can you please try the following:

1. Use the oc label line above to fix the labels on the incorrect nodes
2. Choose a different CPU model for the VM
3. Launch the VM (with the new CPU model)

I assume choosing a different CPU model will work around the problem, thus moving this out.


Audrey, can we get a release note for this bug?


Please move it back to 2.2 if the workaround does not work.

Comment 16 Fabian Deutsch 2019-12-20 15:02:18 UTC
Please note that the cluster in general does not support Haswell (it is falsely advertised). Thus keeping a VM that requests Haswell while fixing the node labels will leave the VM unscheduled, which is correct: a user just has to choose a CPU model that is supported by a node in the cluster. cust0 should not be affected, IIRC, as they will probably be using host-model or passthrough for performance reasons (David, please keep me honest here).

Comment 17 Audrey Spaulding 2019-12-20 16:00:40 UTC
@Fabian, you mean a release note for 2.2, correct?

Comment 18 Ruth Netser 2019-12-22 11:01:08 UTC
@Fabian

Tested with the workaround.
Labelled all the nodes (all having the same HW) with feature.node.kubernetes.io/cpu-model-Haswell=false.

1. The command should be 

$ oc label node $THE_NODE_NAME feature.node.kubernetes.io/cpu-model-Haswell=false --overwrite

otherwise the following error is displayed:
error: 'feature.node.kubernetes.io/cpu-model-Haswell' already has a value (true), and --overwrite is false
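Since all the nodes here have the same hardware, the corrected workaround can be scripted across them. This is a sketch only; the node names are hypothetical, and the function prints the `oc` commands rather than running them:

```shell
# Sketch of the workaround from comments 15 and 18 (node names are
# hypothetical). --overwrite is required because the node labeller has
# already set the label to "true"; without it, `oc label` refuses to
# change the existing value.
label_haswell_false() {
  # Print the oc commands; pipe the output to `sh` to actually apply them.
  for node in "$@"; do
    printf 'oc label node %s feature.node.kubernetes.io/cpu-model-Haswell=false --overwrite\n' "$node"
  done
}

label_haswell_false worker-0 worker-1 worker-2
```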


2. If I create a VM with cpu model: Haswell, the VMI is in PHASE Scheduling and the pod is in STATUS Pending (as expected, since no nodes meet the cpu model requirement).

Events:
  Type     Reason            Age        From               Message
  ----     ------            ----       ----               -------
  Warning  FailedScheduling  <unknown>  default-scheduler  0/5 nodes are available: 2 Insufficient devices.kubevirt.io/kvm, 2 Insufficient devices.kubevirt.io/tun, 2 Insufficient devices.kubevirt.io/vhost-net, 5 node(s) didn't match node selector.


3. If I create a VM with cpu model Haswell-noTSX, the VMI is running.


HOWEVER, when I reboot a node that has been labelled with cpu-model-Haswell=false, the value is changed back to true after the reboot.
Is there a way to make this change persistent?

Comment 23 Audrey Spaulding 2020-01-06 14:26:29 UTC
I've added Bob Gaydos as the doc contact. Bob is coordinating the CNV 2.2 release notes.

Comment 31 Bob Gaydos 2020-01-27 17:20:35 UTC
Here is the updated version after talking to Fabian.

* When attempting to create and launch a virtual machine using a Haswell CPU,
the launch of the virtual machine can fail. This is a bug due to incorrectly 
labeled nodes and a change in behavior from previous versions of
container-native virtualization, where virtual machines could be successfully
launched on Haswell hosts.
+
As a workaround, select a different CPU model, if possible.

I did not include Ruth's last comment, as it was dropped during the conversation with Fabian, but I can add it if needed.

Thanks,

Bob

Comment 36 Ruth Netser 2020-06-22 16:29:37 UTC
Does not reproduce on CNV-2.4.0 (operatorVersion: v0.30.0)

Node CPU model:

$ cat /proc/cpuinfo |grep "model name"
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)
model name	: Intel Core Processor (Haswell, no TSX, IBRS)


Node labels:
                    feature.node.kubernetes.io/cpu-feature-xsave=true
                    feature.node.kubernetes.io/cpu-model-Haswell-noTSX=true
                    feature.node.kubernetes.io/cpu-model-Haswell-noTSX-IBRS=true
                    feature.node.kubernetes.io/cpu-model-IvyBridge=true
 

Node annotations: 

                    node-labeller-feature.node.kubernetes.io/cpu-feature-xsave=true
                    node-labeller-feature.node.kubernetes.io/cpu-model-Haswell-noTSX=true
                    node-labeller-feature.node.kubernetes.io/cpu-model-Haswell-noTSX-IBRS=true
                    node-labeller-feature.node.kubernetes.io/cpu-model-IvyBridge=true


VMI with cpu model Haswell-noTSX-IBRS or Haswell-noTSX is running; VMI with Haswell remains in Scheduling phase.
Closing.