Description of problem:
While testing the CDI upload feature according to the doc https://github.com/mhenriks/containerized-data-importer/blob/fb6d5487c71dea8f2f651f956b13518bc2ab5715/doc/upload.md, the last step, creating the VMI, failed.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Create the upload PVC, upload token, and route
2. Create the VMI
3. The VMI fails

[cnv-qe-jenkins@cnv-executor-shiywang ~]$ oc get pods
NAME                               READY     STATUS      RESTARTS   AGE
cdi-api-548bf94f55-grzlh           1/1       Running     0          5d
cdi-deployment-7576d6f444-5sgzp    1/1       Running     0          5d
cdi-upload-upload-demo             0/1       Completed   0          43m
cdi-uploadproxy-85d8779849-nfj67   1/1       Running     0          5d

[cnv-qe-jenkins@cnv-executor-shiywang ~]$ oc get route -o yaml
apiVersion: v1
items:
- apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"route.openshift.io/v1","kind":"Route","metadata":{"annotations":{},"name":"cdi-uploadproxy","namespace":"kube-system"},"spec":{"tls":{"termination":"passthrough"},"to":{"kind":"Service","name":"cdi-uploadproxy"}}}
      openshift.io/host.generated: "true"
    creationTimestamp: 2018-10-23T07:39:05Z
    name: cdi-uploadproxy
    namespace: kube-system
    resourceVersion: "1135703"
    selfLink: /apis/route.openshift.io/v1/namespaces/kube-system/routes/cdi-uploadproxy
    uid: b82844d2-d696-11e8-b546-fa163e265af7
  spec:
    host: cdi-uploadproxy-kube-system.cloudapps.example.com
    tls:
      termination: passthrough
    to:
      kind: Service
      name: cdi-uploadproxy
      weight: 100
    wildcardPolicy: None
  status:
    ingress:
    - conditions:
      - lastTransitionTime: 2018-10-23T07:39:05Z
        status: "True"
        type: Admitted
      host: cdi-uploadproxy-kube-system.cloudapps.example.com
      routerName: router
      wildcardPolicy: None
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

[cnv-qe-jenkins@cnv-executor-shiywang ~]$ oc get vmi -o yaml
apiVersion: v1
items:
- apiVersion: kubevirt.io/v1alpha2
  kind: VirtualMachineInstance
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"kubevirt.io/v1alpha2","kind":"VirtualMachineInstance","metadata":{"annotations":{},"creationTimestamp":null,"name":"vm-upload-test","namespace":"kube-system"},"spec":{"domain":{"devices":{"disks":[{"disk":{"bus":"virtio"},"name":"pvcdisk","volumeName":"pvcvolume"}]},"machine":{"type":""},"resources":{"requests":{"memory":"64M"}}},"terminationGracePeriodSeconds":0,"volumes":[{"name":"pvcvolume","persistentVolumeClaim":{"claimName":"upload-demo"}}]},"status":{}}
    creationTimestamp: 2018-10-23T07:47:57Z
    finalizers:
    - foregroundDeleteVirtualMachine
    generation: 1
    labels:
      kubevirt.io/nodeName: cnv-executor-shiywang-node1.example.com
    name: vm-upload-test
    namespace: kube-system
    resourceVersion: "1138050"
    selfLink: /apis/kubevirt.io/v1alpha2/namespaces/kube-system/virtualmachineinstances/vm-upload-test
    uid: f52ecd7d-d697-11e8-b546-fa163e265af7
  spec:
    domain:
      devices:
        disks:
        - disk:
            bus: virtio
          name: pvcdisk
          volumeName: pvcvolume
        interfaces:
        - bridge: {}
          name: default
      features:
        acpi:
          enabled: true
      firmware:
        uuid: 7712f7c0-1932-4a18-8304-a058ccd99ac5
      machine:
        type: q35
      resources:
        requests:
          memory: 64M
    networks:
    - name: default
      pod: {}
    terminationGracePeriodSeconds: 0
    volumes:
    - name: pvcvolume
      persistentVolumeClaim:
        claimName: upload-demo
  status:
    interfaces:
    - ipAddress: 10.130.0.15
    nodeName: cnv-executor-shiywang-node1.example.com
    phase: Failed
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

[cnv-qe-jenkins@cnv-executor-shiywang ~]$ oc describe pod virt-launcher-vm-upload-test-mwc5q
Name:               virt-launcher-vm-upload-test-mwc5q
Namespace:          kube-system
Priority:           0
PriorityClassName:  <none>
Node:               cnv-executor-shiywang-node1.example.com/172.16.0.13
Start Time:         Tue, 23 Oct 2018 07:48:04 +0000
Labels:             kubevirt.io=virt-launcher
                    kubevirt.io/created-by=f52ecd7d-d697-11e8-b546-fa163e265af7
Annotations:        kubevirt.io/domain=vm-upload-test
                    kubevirt.io/owned-by=virt-handler
Status:             Failed
IP:                 10.130.0.15
Containers:
  compute:
    Container ID:  cri-o://c2e366774fd9dc7931bf6eacd362e03354bfd762953c1816a81cf7e684758533
    Image:         brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv13-tech-preview/virt-launcher:v1.3.0
    Image ID:      brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv13-tech-preview/virt-launcher@sha256:0db3b6c41f4a9bbbc22fb28b2c4b3158e55c0eea6204246903b1b023a058e780
    Port:          <none>
    Host Port:     <none>
    Command:       /usr/bin/virt-launcher --qemu-timeout 5m --name vm-upload-test --uid f52ecd7d-d697-11e8-b546-fa163e265af7 --namespace kube-system --kubevirt-share-dir /var/run/kubevirt --readiness-file /tmp/healthy --grace-period-seconds 15 --hook-sidecars 0
    State:          Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Tue, 23 Oct 2018 07:48:18 +0000
      Finished:     Tue, 23 Oct 2018 07:53:28 +0000
    Ready:          False
    Restart Count:  0
    Limits:
      devices.kubevirt.io/kvm:  1
      devices.kubevirt.io/tun:  1
    Requests:
      devices.kubevirt.io/kvm:  1
      devices.kubevirt.io/tun:  1
      memory:                   161679432
    Readiness:    exec [cat /tmp/healthy] delay=2s timeout=5s period=2s #success=1 #failure=5
    Environment:  <none>
    Mounts:
      /var/run/kubevirt from virt-share-dir (rw)
      /var/run/kubevirt-private/vmi-disks/pvcvolume from pvcvolume (rw)
      /var/run/libvirt from libvirt-runtime (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-dq6tr (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  pvcvolume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  upload-demo
    ReadOnly:   false
  virt-share-dir:
    Type:          HostPath (bare host directory volume)
    Path:          /var/run/kubevirt
    HostPathType:
  libvirt-runtime:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-dq6tr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-dq6tr
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  kubevirt.io/schedulable=true
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
Events:
  Type     Reason     Age                From                                              Message
  ----     ------     ----               ----                                              -------
  Normal   Scheduled  31m                default-scheduler                                 Successfully assigned kube-system/virt-launcher-vm-upload-test-mwc5q to cnv-executor-shiywang-node1.example.com
  Normal   Pulling    31m                kubelet, cnv-executor-shiywang-node1.example.com  pulling image "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv13-tech-preview/virt-launcher:v1.3.0"
  Normal   Pulled     30m                kubelet, cnv-executor-shiywang-node1.example.com  Successfully pulled image "brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv13-tech-preview/virt-launcher:v1.3.0"
  Normal   Created    30m                kubelet, cnv-executor-shiywang-node1.example.com  Created container
  Normal   Started    30m                kubelet, cnv-executor-shiywang-node1.example.com  Started container
  Warning  Unhealthy  30m (x5 over 30m)  kubelet, cnv-executor-shiywang-node1.example.com  Readiness probe failed: cat: /tmp/healthy: No such file or directory

[cnv-qe-jenkins@cnv-executor-shiywang ~]$ oc get vmi -o yaml
apiVersion: v1
items:
- apiVersion: kubevirt.io/v1alpha2
  kind: VirtualMachineInstance
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"kubevirt.io/v1alpha2","kind":"VirtualMachineInstance","metadata":{"annotations":{},"creationTimestamp":null,"name":"vm-upload-test","namespace":"kube-system"},"spec":{"domain":{"devices":{"disks":[{"disk":{"bus":"virtio"},"name":"pvcdisk","volumeName":"pvcvolume"}]},"machine":{"type":""},"resources":{"requests":{"memory":"64M"}}},"terminationGracePeriodSeconds":0,"volumes":[{"name":"pvcvolume","persistentVolumeClaim":{"claimName":"upload-demo"}}]},"status":{}} creationTimestamp: 2018-10-23T07:47:57Z finalizers: - foregroundDeleteVirtualMachine generation: 1 labels: kubevirt.io/nodeName: cnv-executor-shiywang-node1.example.com name: vm-upload-test namespace: kube-system resourceVersion: "1137178" selfLink: /apis/kubevirt.io/v1alpha2/namespaces/kube-system/virtualmachineinstances/vm-upload-test uid: f52ecd7d-d697-11e8-b546-fa163e265af7 spec: domain: devices: disks: - disk: bus: virtio name: pvcdisk volumeName: pvcvolume interfaces: - bridge: {} name: default features: acpi: enabled: true firmware: uuid: 7712f7c0-1932-4a18-8304-a058ccd99ac5 machine: type: q35 resources: requests: memory: 64M networks: - name: default pod: {} terminationGracePeriodSeconds: 0 volumes: - name: pvcvolume persistentVolumeClaim: claimName: upload-demo status: conditions: - lastProbeTime: null lastTransitionTime: 2018-10-23T07:48:30Z message: 'server error. command Launcher.Sync failed: In-kernel virtio-net device emulation ''/dev/vhost-net'' not present' reason: Synchronizing with the Domain failed. status: "False" type: Synchronized interfaces: - ipAddress: 10.130.0.15 nodeName: cnv-executor-shiywang-node1.example.com phase: Scheduled kind: List metadata: resourceVersion: "" selfLink: "" Actual results: [cnv-qe-jenkins@cnv-executor-shiywang ~]$ oc describe vmi Name: vm-upload-test Namespace: kube-system Labels: kubevirt.io/nodeName=cnv-executor-shiywang-node1.example.com Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"kubevirt.io/v1alpha2","kind":"VirtualMachineInstance","metadata":{"annotations":{},"creationTimestamp":null,"name":"vm-upload-test","nam... API Version: kubevirt.io/v1alpha2 Kind: VirtualMachineInstance Metadata: Creation Timestamp: 2018-10-23T07:47:57Z Finalizers: foregroundDeleteVirtualMachine Generation: 1 Resource Version: 1138050 Self Link: /apis/kubevirt.io/v1alpha2/namespaces/kube-system/virtualmachineinstances/vm-upload-test UID: f52ecd7d-d697-11e8-b546-fa163e265af7 Spec: Domain: Devices: Disks: Disk: Bus: virtio Name: pvcdisk Volume Name: pvcvolume Interfaces: Bridge: Name: default Features: Acpi: Enabled: true Firmware: Uuid: 7712f7c0-1932-4a18-8304-a058ccd99ac5 Machine: Type: q35 Resources: Requests: Memory: 64M Networks: Name: default Pod: Termination Grace Period Seconds: 0 Volumes: Name: pvcvolume Persistent Volume Claim: Claim Name: upload-demo Status: Interfaces: Ip Address: 10.130.0.15 Node Name: cnv-executor-shiywang-node1.example.com Phase: Failed Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulCreate 32m virtualmachine-controller Created virtual machine pod virt-launcher-vm-upload-test-mwc5q Normal SuccessfulHandOver 32m virtualmachine-controller Pod ownership transferred to the node virt-launcher-vm-upload-test-mwc5q Warning SyncFailed 29m (x16 over 31m) virt-handler, cnv-executor-shiywang-node1.example.com server error. 
command Launcher.Sync failed: In-kernel virtio-net device emulation '/dev/vhost-net' not present Warning Stopped 26m virt-handler, cnv-executor-shiywang-node1.example.com The VirtualMachineInstance crashed. Expected results: VMI create, running success Additional info: image: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv13-tech-preview/virt-api:v1.3.0 CNV 1.3 CDI 1.2
Forgot to add two more steps, but I did do them:

[cnv-qe-jenkins@cnv-executor-shiywang ~]$ echo $TOKEN
eyJhbGciOiJQUzUxMiIsImtpZCI6IiJ9.eyJwdmNOYW1lIjoidXBsb2FkLWRlbW8iLCJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImNyZWF0aW9uVGltZXN0YW1wIjoiMjAxOC0xMC0yM1QwNzo0Mjo0NS44NjM1NzM5MzdaIn0.HcRSxbiuKTh9hDGx3VT78CDyy0RRCG0yhYSGk2brTs7UZt_56Q1zUEHgFciMhZbteW7vrfRjgeosOxgatY4qGfRXXFsxPC5awR5olz3aRv2GLyyVSJxc2hd9GzHyaSeY3-MB3dbLDmpBOxAEzT320xpTxZzUFeCT-h9RfknKWkuWRUmZFtwmNBrGBKKUx_lweKPcs8jbMhYJJKJuGODpvTY-R3qXIHZzcP2OfL5zOTQq03VzWW3IU5zIyYMheFEfMNnDY4c2TGKbY9vaMXytrJMF0fCxl6wKUISmikMAEABk3qxWwbNdlr9QQRJhOuZ_TxokoYS8xSoPfcbSbyNxMw

[cnv-qe-jenkins@cnv-executor-shiywang ~]$ curl -v --insecure -H "Authorization: Bearer $TOKEN" --data-binary @cirros-0.4.0-x86_64-disk.img https://cdi-uploadproxy-kube-system.cloudapps.example.com/v1alpha1/upload
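For reference, a minimal sketch of the upload PVC and token request from step 1, following the upload.md doc linked above; the upload annotation and the UploadTokenRequest kind are taken from that doc, and the names match this transcript:

```
# Upload PVC: the cdi.kubevirt.io/storage.upload.target annotation tells CDI
# to spawn an upload server pod (the cdi-upload-upload-demo pod seen above).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: upload-demo
  namespace: kube-system
  annotations:
    cdi.kubevirt.io/storage.upload.target: ""
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
# UploadTokenRequest: posting this to cdi-api returns the bearer token
# ($TOKEN) used by the curl upload above.
apiVersion: upload.cdi.kubevirt.io/v1alpha1
kind: UploadTokenRequest
metadata:
  name: upload-demo-token
  namespace: kube-system
spec:
  pvcName: upload-demo
```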
Sorry, the CDI version is actually 1.3:
image: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv13-tech-preview/virt-cdi-controller:1.3
All our kubevirt-ansible e2e VMI-creation tests fail due to this bug.
I see that there is no interface model specified on the VMI, which means it should default to "virtio-net". But I see that virt-controller did not request "devices.kubevirt.io/vhost-net". I would need the following additional information:

* oc describe node "cnv-executor-shiywang-node1.example.com"
* oc get configmap -n kube-system kubevirt-config -o yaml
* the content of the ".version" file in the virt-controller pods

It looks to me like virt-controller is not the right version.
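For context, the interface model is the optional `model` field on the VMI's interface definition; a minimal sketch of the relevant spec fragment, with the default written out explicitly for illustration:

```
spec:
  domain:
    devices:
      interfaces:
      - name: default
        bridge: {}
        # No model was set on the reported VMI, so it defaults to virtio-net,
        # which is why virt-controller is expected to request the
        # devices.kubevirt.io/vhost-net resource for the launcher pod.
        model: virtio
```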
[cloud-user@cnv-executor-shiywang-master1 ~]$ oc get nodes
NAME                                        STATUS    ROLES          AGE       VERSION
cnv-executor-shiywang-master1.example.com   Ready     infra,master   7d        v1.11.0+d4cacc0
cnv-executor-shiywang-node1.example.com     Ready     compute        7d        v1.11.0+d4cacc0
cnv-executor-shiywang-node2.example.com     Ready     compute        7d        v1.11.0+d4cacc0

[cloud-user@cnv-executor-shiywang-master1 ~]$ oc describe node cnv-executor-shiywang-node1.example.com
Name:               cnv-executor-shiywang-node1.example.com
Roles:              compute
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    glusterfs=storage-host
                    kubernetes.io/hostname=cnv-executor-shiywang-node1.example.com
                    kubevirt.io/schedulable=true
                    node-role.kubernetes.io/compute=true
Annotations:        kubevirt.io/heartbeat=2018-10-25T09:03:30Z
                    node.openshift.io/md5sum=9059959584cecf5be762ba42d3c5e451
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Thu, 18 Oct 2018 02:22:52 -0400
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Thu, 25 Oct 2018 05:04:37 -0400   Thu, 18 Oct 2018 02:22:53 -0400   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Thu, 25 Oct 2018 05:04:37 -0400   Thu, 18 Oct 2018 02:22:53 -0400   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Thu, 25 Oct 2018 05:04:37 -0400   Thu, 18 Oct 2018 02:22:53 -0400   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Thu, 25 Oct 2018 05:04:37 -0400   Thu, 18 Oct 2018 02:22:53 -0400   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Thu, 25 Oct 2018 05:04:37 -0400   Thu, 18 Oct 2018 02:24:02 -0400   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.16.0.13
  Hostname:    cnv-executor-shiywang-node1.example.com
Capacity:
  cpu:                      4
  devices.kubevirt.io/kvm:  110
  devices.kubevirt.io/tun:  110
  hugepages-1Gi:            0
  hugepages-2Mi:            0
  memory:                   8009164Ki
  pods:                     250
Allocatable:
  cpu:                      4
  devices.kubevirt.io/kvm:  110
  devices.kubevirt.io/tun:  110
  hugepages-1Gi:            0
  hugepages-2Mi:            0
  memory:                   7906764Ki
  pods:                     250
System Info:
  Machine ID:                 947e4beedcce47e0912020a6d391972d
  System UUID:                947E4BEE-DCCE-47E0-9120-20A6D391972D
  Boot ID:                    c686c6d2-46bb-4945-a27f-418b483f0471
  Kernel Version:             3.10.0-944.el7.x86_64
  OS Image:                   Red Hat Enterprise Linux Server 7.6 (Maipo)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  cri-o://1.11.5
  Kubelet Version:            v1.11.0+d4cacc0
  Kube-Proxy Version:         v1.11.0+d4cacc0
Non-terminated Pods:          (14 in total)
  Namespace                 Name                              CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                 ----                              ------------  ----------  ---------------  -------------
  default                   local-volume-provisioner-nsb48    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  glusterfs                 glusterfs-storage-vc9m9           100m (2%)     0 (0%)      100Mi (1%)       0 (0%)
  kube-system               cdi-api-548bf94f55-grzlh          0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system               kube-multus-ds-amd64-kwp95        100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)
  kube-system               ovs-cni-plugin-amd64-ctg29        100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)
  kube-system               ovs-vsctl-amd64-d6w67             100m (2%)     100m (2%)   50Mi (0%)        50Mi (0%)
  kube-system               virt-api-79cb7d7bf4-wt4dh         0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system               virt-controller-5dd47b64f9-gbwhh  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system               virt-handler-cks9g                0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openshift-infra           hawkular-cassandra-1-5n9ww        0 (0%)        0 (0%)      1G (12%)         2G (24%)
  openshift-metrics-server  metrics-server-f45d99cf8-znld5    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openshift-node            sync-md555                        0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openshift-sdn             ovs-wgq22                         100m (2%)     200m (5%)   300Mi (3%)       400Mi (5%)
  openshift-sdn             sdn-nzwrk                         100m (2%)     0 (0%)      200Mi (2%)       0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource                  Requests          Limits
  --------                  --------          ------
  cpu                       600m (15%)        500m (12%)
  memory                    1786432000 (22%)  2516325Ki (31%)
  devices.kubevirt.io/kvm   0                 0
  devices.kubevirt.io/tun   0                 0
Events:                     <none>

[cloud-user@cnv-executor-shiywang-master1 ~]$ oc get configmap -n kube-system kubevirt-config -o yaml
apiVersion: v1
data:
  feature-gates: DataVolumes
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"feature-gates":"DataVolumes"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"kubevirt-config","namespace":"kube-system"}}
  creationTimestamp: 2018-10-18T06:36:30Z
  name: kubevirt-config
  namespace: kube-system
  resourceVersion: "4274"
  selfLink: /api/v1/namespaces/kube-system/configmaps/kubevirt-config
  uid: 25aa2de4-d2a0-11e8-b546-fa163e265af7

As for the ".version" file, I didn't find any:

[cloud-user@cnv-executor-shiywang-master1 ~]$ oc rsh virt-controller-5dd47b64f9-gbwhh
sh-4.2$ find / -name "*version"
/proc/sys/kernel/bootloader_version /proc/sys/kernel/version /proc/sys/net/ipv4/conf/all/force_igmp_version /proc/sys/net/ipv4/conf/default/force_igmp_version /proc/sys/net/ipv4/conf/eth0/force_igmp_version /proc/sys/net/ipv4/conf/lo/force_igmp_version /proc/sys/net/ipv6/conf/all/force_mld_version /proc/sys/net/ipv6/conf/default/force_mld_version /proc/sys/net/ipv6/conf/eth0/force_mld_version /proc/sys/net/ipv6/conf/lo/force_mld_version
find: '/proc/tty/driver': Permission denied
/proc/version
find: '/run/secrets/rhsm': Permission denied
/sys/devices/pci0000:00/0000:00:01.2/usb1/1-1/version /sys/devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/input/input4/id/version /sys/devices/pci0000:00/0000:00:01.2/usb1/version /sys/devices/virtual/dmi/id/product_version /sys/devices/virtual/dmi/id/chassis_version /sys/devices/virtual/dmi/id/bios_version /sys/devices/platform/i8042/serio0/input/input1/id/version /sys/devices/platform/i8042/serio1/input/input2/id/version /sys/devices/platform/i8042/serio1/input/input3/id/version /sys/devices/platform/pcspkr/input/input5/id/version /sys/devices/LNXSYSTM:00/LNXPWRBN:00/input/input0/id/version /sys/class/drm/version /sys/kernel/boot_params/version /sys/module/drm/srcversion /sys/module/drm/rhelversion /sys/module/kvm/srcversion /sys/module/kvm/rhelversion /sys/module/llc/srcversion /sys/module/llc/rhelversion /sys/module/lrw/srcversion /sys/module/lrw/rhelversion /sys/module/stp/srcversion /sys/module/stp/rhelversion /sys/module/tpm/version /sys/module/ttm/srcversion /sys/module/ttm/rhelversion /sys/module/uio/srcversion /sys/module/uio/rhelversion /sys/module/vmd/version /sys/module/xfs/srcversion /sys/module/xfs/rhelversion /sys/module/ghash_clmulni_intel/srcversion /sys/module/ghash_clmulni_intel/rhelversion /sys/module/tcp_cubic/version /sys/module/acpi/parameters/acpica_version /sys/module/ext4/srcversion /sys/module/ext4/rhelversion /sys/module/fuse/srcversion /sys/module/fuse/rhelversion /sys/module/jbd2/srcversion /sys/module/jbd2/rhelversion /sys/module/veth/srcversion /sys/module/veth/rhelversion /sys/module/syscopyarea/srcversion /sys/module/syscopyarea/rhelversion /sys/module/scsi_dh_alua/version /sys/module/scsi_dh_rdac/version /sys/module/devlink/srcversion /sys/module/devlink/rhelversion /sys/module/serio_raw/srcversion /sys/module/serio_raw/rhelversion
/sys/module/drm_panel_orientation_quirks/srcversion /sys/module/drm_panel_orientation_quirks/rhelversion /sys/module/dm_multipath/srcversion /sys/module/dm_multipath/rhelversion /sys/module/crct10dif_common/srcversion /sys/module/crct10dif_common/rhelversion /sys/module/nfnetlink/srcversion /sys/module/nfnetlink/rhelversion /sys/module/iosf_mbi/srcversion /sys/module/iosf_mbi/rhelversion /sys/module/dm_bio_prison/srcversion /sys/module/dm_bio_prison/rhelversion /sys/module/xt_statistic/srcversion /sys/module/xt_statistic/rhelversion /sys/module/pata_acpi/srcversion /sys/module/pata_acpi/rhelversion /sys/module/pata_acpi/version /sys/module/ppdev/srcversion /sys/module/ppdev/rhelversion /sys/module/xt_mark/srcversion /sys/module/xt_mark/rhelversion /sys/module/vxlan/srcversion /sys/module/vxlan/rhelversion /sys/module/vxlan/version /sys/module/udp_tunnel/srcversion /sys/module/udp_tunnel/rhelversion /sys/module/sb_edac/srcversion /sys/module/sb_edac/rhelversion /sys/module/efivars/version /sys/module/i2c_piix4/srcversion /sys/module/i2c_piix4/rhelversion /sys/module/crct10dif_pclmul/srcversion /sys/module/crct10dif_pclmul/rhelversion /sys/module/sysimgblt/srcversion /sys/module/sysimgblt/rhelversion /sys/module/xt_comment/srcversion /sys/module/xt_comment/rhelversion /sys/module/scsi_transport_iscsi/srcversion /sys/module/scsi_transport_iscsi/rhelversion /sys/module/scsi_transport_iscsi/version /sys/module/aesni_intel/srcversion /sys/module/aesni_intel/rhelversion /sys/module/xt_recent/srcversion /sys/module/xt_recent/rhelversion /sys/module/br_netfilter/srcversion /sys/module/br_netfilter/rhelversion /sys/module/mbcache/srcversion /sys/module/mbcache/rhelversion /sys/module/ip_tables/srcversion /sys/module/ip_tables/rhelversion /sys/module/dm_region_hash/srcversion /sys/module/dm_region_hash/rhelversion /sys/module/ablk_helper/srcversion /sys/module/ablk_helper/rhelversion /sys/module/drm_kms_helper/srcversion /sys/module/drm_kms_helper/rhelversion /sys/module/ipt_MASQUERADE/srcversion /sys/module/ipt_MASQUERADE/rhelversion /sys/module/fb_sys_fops/srcversion /sys/module/fb_sys_fops/rhelversion /sys/module/virtio_blk/srcversion /sys/module/virtio_blk/rhelversion /sys/module/virtio_net/srcversion /sys/module/virtio_net/rhelversion /sys/module/virtio_pci/srcversion /sys/module/virtio_pci/rhelversion /sys/module/virtio_pci/version /sys/module/crc32c_intel/srcversion /sys/module/crc32c_intel/rhelversion /sys/module/virtio_balloon/srcversion /sys/module/virtio_balloon/rhelversion /sys/module/crct10dif_generic/srcversion /sys/module/crct10dif_generic/rhelversion /sys/module/bridge/srcversion /sys/module/bridge/rhelversion /sys/module/bridge/version /sys/module/cirrus/srcversion /sys/module/cirrus/rhelversion /sys/module/cryptd/srcversion /sys/module/cryptd/rhelversion /sys/module/dm_log/srcversion /sys/module/dm_log/rhelversion /sys/module/dm_mod/srcversion /sys/module/dm_mod/rhelversion /sys/module/dm_persistent_data/srcversion /sys/module/dm_persistent_data/rhelversion /sys/module/configfs/version /sys/module/floppy/srcversion /sys/module/floppy/rhelversion /sys/module/dm_mirror/srcversion /sys/module/dm_mirror/rhelversion /sys/module/ata_generic/srcversion /sys/module/ata_generic/rhelversion /sys/module/ata_generic/version /sys/module/tpm_tis/version /sys/module/joydev/srcversion /sys/module/joydev/rhelversion /sys/module/openvswitch/srcversion /sys/module/openvswitch/rhelversion /sys/module/libata/srcversion /sys/module/libata/rhelversion /sys/module/libata/version 
/sys/module/dm_bufio/srcversion /sys/module/dm_bufio/rhelversion /sys/module/nf_nat/srcversion /sys/module/nf_nat/rhelversion /sys/module/nf_defrag_ipv4/srcversion /sys/module/nf_defrag_ipv4/rhelversion /sys/module/nf_defrag_ipv6/srcversion /sys/module/nf_defrag_ipv6/rhelversion /sys/module/pcspkr/srcversion /sys/module/pcspkr/rhelversion /sys/module/target_core_mod/srcversion /sys/module/target_core_mod/rhelversion /sys/module/nf_reject_ipv4/srcversion /sys/module/nf_reject_ipv4/rhelversion /sys/module/sunrpc/srcversion /sys/module/sunrpc/rhelversion /sys/module/nf_nat_ipv4/srcversion /sys/module/nf_nat_ipv4/rhelversion /sys/module/nf_nat_ipv6/srcversion /sys/module/nf_nat_ipv6/rhelversion /sys/module/gf128mul/srcversion /sys/module/gf128mul/rhelversion /sys/module/xt_conntrack/srcversion /sys/module/xt_conntrack/rhelversion /sys/module/virtio/srcversion /sys/module/virtio/rhelversion /sys/module/crc_t10dif/srcversion /sys/module/crc_t10dif/rhelversion /sys/module/xt_nat/srcversion /sys/module/xt_nat/rhelversion /sys/module/nf_nat_masquerade_ipv4/srcversion /sys/module/nf_nat_masquerade_ipv4/rhelversion /sys/module/xz_dec/version /sys/module/parport/srcversion /sys/module/parport/rhelversion /sys/module/virtio_console/srcversion /sys/module/virtio_console/rhelversion /sys/module/dm_thin_pool/srcversion /sys/module/dm_thin_pool/rhelversion /sys/module/dm_snapshot/srcversion /sys/module/dm_snapshot/rhelversion /sys/module/kvm_intel/srcversion /sys/module/kvm_intel/rhelversion /sys/module/ip6_udp_tunnel/srcversion /sys/module/ip6_udp_tunnel/rhelversion /sys/module/nf_conntrack_netlink/srcversion /sys/module/nf_conntrack_netlink/rhelversion /sys/module/parport_pc/srcversion /sys/module/parport_pc/rhelversion /sys/module/iptable_filter/srcversion /sys/module/iptable_filter/rhelversion /sys/module/ata_piix/srcversion /sys/module/ata_piix/rhelversion /sys/module/ata_piix/version /sys/module/overlay/srcversion /sys/module/overlay/rhelversion /sys/module/target_core_user/srcversion /sys/module/target_core_user/rhelversion /sys/module/tpm_tis_core/version /sys/module/glue_helper/srcversion /sys/module/glue_helper/rhelversion /sys/module/sysfillrect/srcversion /sys/module/sysfillrect/rhelversion /sys/module/ipt_REJECT/srcversion /sys/module/ipt_REJECT/rhelversion /sys/module/nf_conntrack/srcversion /sys/module/nf_conntrack/rhelversion /sys/module/irqbypass/srcversion /sys/module/irqbypass/rhelversion /sys/module/virtio_ring/srcversion /sys/module/virtio_ring/rhelversion /sys/module/xt_addrtype/srcversion /sys/module/xt_addrtype/rhelversion /sys/module/iptable_nat/srcversion /sys/module/iptable_nat/rhelversion /sys/module/libcrc32c/srcversion /sys/module/libcrc32c/rhelversion /sys/module/crc32_pclmul/srcversion /sys/module/crc32_pclmul/rhelversion /sys/module/nf_conntrack_ipv4/srcversion /sys/module/nf_conntrack_ipv4/rhelversion /sys/module/nf_conntrack_ipv6/srcversion /sys/module/nf_conntrack_ipv6/rhelversion find: '/var/lib/yum/history/2018-08-09/1': Permission denied find: '/var/lib/yum/history/2018-08-09/2': Permission denied /var/lib/yum/rpmdb-indexes/version find: '/var/lib/machines': Permission denied find: '/var/cache/ldconfig': Permission denied /usr/share/mime/version sh-4.2$
sh-4.2$ find / -name ".version" -print
find: '/proc/tty/driver': Permission denied
find: '/run/secrets/rhsm': Permission denied
find: '/var/lib/yum/history/2018-08-09/1': Permission denied
find: '/var/lib/yum/history/2018-08-09/2': Permission denied
find: '/var/lib/machines': Permission denied
find: '/var/cache/ldconfig': Permission denied
sh-4.2$
So I did a

```
virtctl version
Server Version: version.Info{GitVersion:"v1.3.0-1-2-g52ab5bc", GitCommit:"93e8226fa15325a0eaea2b693de82824a8b98270", GitTreeState:"clean", BuildDate:"2018-10-15T13:52:07Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"linux/amd64"}
```

on that cluster (I have access to it from the other bug report). Commit "93e8226fa15325a0eaea2b693de82824a8b98270" is twenty days old. Since v1.3.0 builds always reference the latest build (thanks Mark for the hint), it seems that this very old pre-build pulls in the virt-launcher image from the latest build, which depends on newer virt-controller and virt-handler builds. Bottom line: virt-launcher is too new.

Upstream we work with the strict rule that we never change the content of a tag (except for "latest", of course). If downstream wants to handle that differently (which it currently does), then it somehow needs to re-verify on every virt-launcher release that it still works with all older releases of virt-api, virt-controller, and virt-handler. I would not advise that. The issue will go away once you re-deploy kubevirt fully with the latest images.
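One way to spot such a mixed deployment is to compare the image digests each component pod is actually running; a sketch using the pod names from this cluster (the jsonpath query is standard oc/kubectl):

```
# With moving tags, the same v1.3.0 tag can resolve to different digests over
# time; mismatched digests across components indicate a mixed build.
for p in virt-api-79cb7d7bf4-wt4dh virt-controller-5dd47b64f9-gbwhh virt-handler-cks9g; do
  echo "$p:"
  oc get pod -n kube-system "$p" \
    -o jsonpath='{range .status.containerStatuses[*]}{.imageID}{"\n"}{end}'
done
```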
Adding Marc. FYI @Marc, overriding tagged content is causing the issue here. Related to our recent offline discussion about moving tags.
@Roman Mohr I tested in a new env built just today and the problem disappeared. @Nelly, the env I used is built from our jenkins job cnv-1.3-ocp-3.11. Do we have any plan to avoid this, or do we already have one? So is this not a bug, just something we operated wrong?
Here is a general recipe which would work: if you have a job which creates the containers and manifests for you, ensure that you create unique tags (e.g. add the jenkins job number: virt-handler:v1.3.0-1234) and use these tags in the manifests. The manifest location can stay the same between builds. This way you have a non-moving starting point (the location of the manifests), but you can guarantee a consistent experience because of the exact build tags. Otherwise even simple things like adding a node to the cluster or starting a new VMI may pull in containers from a completely different jenkins build.
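For illustration, a sketch of what such a pinned, per-build manifest could look like; the `-1234` build suffix follows the example above, and the DaemonSet shape is simplified:

```
# Hypothetical generated manifest: the tag embeds the jenkins build number,
# so every node that pulls this image gets exactly the same build.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: virt-handler
  namespace: kube-system
spec:
  selector:
    matchLabels:
      kubevirt.io: virt-handler
  template:
    metadata:
      labels:
        kubevirt.io: virt-handler
    spec:
      containers:
      - name: virt-handler
        image: brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/cnv13-tech-preview/virt-handler:v1.3.0-1234
```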
Closing this, as it was a deployment issue in the QE env.