Created attachment 1654764 [details]
openshift-cnv_asb-vmi-nfs-rhel.xml

Test on packages:

Guest: RHEL-6.10-x86_64:
kernel: 2.6.32-754.el6.x86_64

cnv host: rhel7.7.z
kernel: 3.10.0-1062.9.1.el7.x86_64
libvirt-daemon-driver-qemu-4.5.0-23.el7_7.5.x86_64
qemu-kvm-rhev-2.12.0-33.el7_7.8.x86_64

virt-launcher pod:
libvirt-daemon-driver-qemu-5.6.0-6.module+el8.1.0+4244+9aa4e6bb.x86_64
qemu-kvm-core-4.1.0-14.module+el8.1.0+4548+ed1300f4.x86_64

Test steps:
1. Create pv, pvc based on nfs storage in cnv2.2
2. Create a q35 rhel6.10 vmi with yaml
# oc create -f asb-vmi-nfs-rhel.yaml
3. Check the guest xml in the virt-launcher pod: openshift-cnv_asb-vmi-nfs-rhel.xml

sh-4.4# virsh list --all
 Id   Name                             State
------------------------------------------------
 1    openshift-cnv_asb-vmi-nfs-rhel   running

# virsh dumpxml openshift-cnv_asb-vmi-nfs-rhel | grep interface -A 10
    <interface type='bridge'>
      <mac address='0a:58:0a:82:00:43'/>
      <source bridge='k6t-eth0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <mtu size='1450'/>
      <alias name='ua-default'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>

4. Check in the guest: `ifconfig -a` lists no interface, while lspci does show an Ethernet controller
# ifconfig -a
# lspci | grep Ethernet
01:00.0 Ethernet controller: Red Hat, Inc. Virtio network device (rev 01)

Actual results:
In step 4: the device is plugged into a pcie controller and the guest fails to find it

Expected results:
In step 4: guest network works well

Additional info:
Guest xml: openshift-cnv_asb-vmi-nfs-rhel.xml
Guest yaml file: asb-vmi-nfs-rhel.yaml
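For readers without access to the attachment, here is a minimal sketch of the kind of VMI definition these steps use. This is hypothetical: the resource name, memory size, and PVC name are assumptions, not the contents of asb-vmi-nfs-rhel.yaml.

# Hypothetical minimal VMI resembling the attached yaml (not the actual attachment)
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: asb-vmi-nfs-rhel
spec:
  domain:
    machine:
      type: q35                   # q35 machine type, as in the report
    devices:
      interfaces:
        - name: default
          bridge: {}              # bridge binding, matching the dumped <interface type='bridge'>
          # no model set: KubeVirt defaults to virtio
      disks:
        - name: rootdisk
          disk:
            bus: virtio
    resources:
      requests:
        memory: 1Gi               # assumed value
  networks:
    - name: default
      pod: {}
  volumes:
    - name: rootdisk
      persistentVolumeClaim:
        claimName: rhel6-nfs-pvc  # PVC from step 1; name assumed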
Created attachment 1654765 [details] asb-vmi-nfs-rhel.yaml
The pod network does not really work on el7 hosts (see bug 1741626).

Can you share the complete `oc get vmi -o yaml`? Can you try a secondary (multus) interface with your el6 guest?

If kubevirt has to support https://libvirt.org/formatdomain.html#elementsVirtioTransitional, we should categorize this as a feature (we would have to change our API to support it).
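For reference, a secondary multus interface would look roughly like this in the VMI spec. This is a sketch; the NetworkAttachmentDefinition name "br1-network" is a placeholder, not from this bug.

# Sketch of adding a secondary (multus) interface; "br1-network" is hypothetical
spec:
  domain:
    devices:
      interfaces:
        - name: default
          masquerade: {}            # primary pod network
        - name: secondary
          bridge: {}                # secondary interface attached via multus
  networks:
    - name: default
      pod: {}
    - name: secondary
      multus:
        networkName: br1-network    # references an existing NetworkAttachmentDefinition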
Please do not clear the needinfo flag until you provide the needed info.
*** Bug 1788923 has been marked as a duplicate of this bug. ***
As a workaround, set the VM's interfaces->model to e1000 and set networkInterfaceMultiqueue to false.
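A sketch of how that workaround looks in the VMI spec (field names per the KubeVirt API; the interface name and binding are illustrative):

# Workaround sketch: emulated e1000 NIC with multiqueue disabled
spec:
  domain:
    devices:
      networkInterfaceMultiqueue: false   # multiqueue only works with virtio NICs
      interfaces:
        - name: default
          masquerade: {}
          model: e1000                    # avoid virtio, which the rhel6 guest fails to find here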
Since we have a workaround, there is still not enough info, and I don't believe this is a blocker, I'm moving it to 2.4.
Hi Nelly and Dan,

Sorry for the late reply. I have tried on cnv2.3 with a rhel6.10 guest: the rtl8139 interface works well, while the virtio interface has the same issue as in cnv2.2.

I updated the info for `oc get vmi -o yaml` in files:
- for the rtl8139 vmi: rhel6_10-rtl8139.yaml
- for the virtio vmi: rhel6_10-virtio.yaml

If you need more information, please feel free to let me know, thank you!

Additional information:
------------------------------------------------------------
Guest: RHEL-6.10-x86_64:
kernel: 2.6.32-754.el6.x86_64

cnv host: rhel7.8
kernel: 3.10.0-1127.el7.x86_64
libvirt-4.5.0-33.el7.x86_64
qemu-img-rhev-2.12.0-44.el7_8.1.x86_64

virt-launcher pod:
libvirt-daemon-kvm-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
qemu-kvm-core-4.1.0-23.module+el8.1.1+5748+5fcc84a8.1.x86_64

Test steps:
1. Create a rhel6.10 vmi with an rtl8139 interface: asb-vmi-nfs-rhel.yaml
   Log in to the vmi:
   - ifconfig lists eth0 with an ip address
   - lspci | grep Ethernet
     02:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8100/8101L/8139 PCI Fast Ethernet Adapter (rev 20)
   - host ping vm and vm ping host both work well
   - guest xml:
    <interface type='bridge'>
      <mac address='0a:58:0a:81:02:0c'/>
      <source bridge='k6t-eth0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <mtu size='1450'/>
      <alias name='ua-default'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </interface>

2. Create a rhel6.10 vmi with a virtio interface: asb-vmi-nfs-rhel-virtio.yaml
   Log in to the vmi:
   - ifconfig shows no interface
   - lspci | grep Ethernet
     01:00.0 Ethernet controller: Red Hat, Inc. Virtio network device (rev 01)
   - guest xml:
    <interface type='bridge'>
      <mac address='0a:58:0a:81:02:0e'/>
      <source bridge='k6t-eth0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <mtu size='1450'/>
      <alias name='ua-default'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
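For reference, the two runs should differ only in the interface model field. A sketch of the relevant VMI fragment (the attachment contents are assumed, not copied):

# rhel6_10-rtl8139.yaml (assumed shape) - working run
interfaces:
  - name: default
    bridge: {}
    model: rtl8139

# rhel6_10-virtio.yaml (assumed shape) - failing run
interfaces:
  - name: default
    bridge: {}
    model: virtio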
Created attachment 1675707 [details] rhel6_10-rtl8139.yaml
Created attachment 1675708 [details] rhel6_10-virtio.yaml
Created attachment 1675709 [details] asb-vmi-nfs-rhel-virtio.yaml
This enhancement did not make it to U/S before the feature freeze. Since it is not a blocker and has a workaround, I'm postponing it to 2.5.
Idea: our current rhel6 cluster template has no interface model defined, which KubeVirt interprets as virtio: https://kubevirt.io/api-reference/master/definitions.html#_v1_interface

I think we should use an explicit value that rhel6 actually supports (until we add proper support for virtio-transitional).
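For example, the template could pin an explicit model instead of relying on the implicit virtio default. A sketch of what that might look like in the interface definition (e1000e is the model the verified template below ends up using):

# Sketch: set an explicit model the rhel6 guest supports instead of
# leaving it empty (an empty model defaults to virtio in KubeVirt)
interfaces:
  - name: default
    masquerade: {}
    model: e1000e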
Posted https://github.com/kubevirt/kubevirt/pull/4730 in kubevirt/kubevirt, which should allow rhel6 guests to boot properly. Shall we proceed with a new bug, or shall I take it over?
(In reply to Roman Mohr from comment #16)
> Posted https://github.com/kubevirt/kubevirt/pull/4730 in kubevirt/kubevirt,
> which should allow rhel6 guests to boot properly. Shall we proceed
> with a new bug, or shall I take it over?

Created https://bugzilla.redhat.com/show_bug.cgi?id=1911662 to track this.
Verify with:
virt-operator-container-v2.6.0-106
virt-launcher-container-v2.6.0-106

Create VM with CNV common template to get useVirtioTransitional: true [1]

Tested:
1. Connect via console and VNC
2. Connect with SSH

All PASS

[1]
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
    name.os.template.kubevirt.io/rhel6.10: Red Hat Enterprise Linux 6.0 or higher
    vm.kubevirt.io/flavor: small
    vm.kubevirt.io/os: rhel6
    vm.kubevirt.io/validations: |
      [
        {
          "name": "minimal-required-memory",
          "path": "jsonpath::.spec.domain.resources.requests.memory",
          "rule": "integer",
          "message": "This VM requires more memory.",
          "min": 536870912
        }
      ]
    vm.kubevirt.io/workload: server
  selfLink: >-
    /apis/kubevirt.io/v1alpha3/namespaces/rhel6/virtualmachines/rhel6-canadian-mole
  resourceVersion: '2893630'
  name: rhel6-canadian-mole
  uid: 878b31e4-9e64-4951-b402-11840b9a5915
  creationTimestamp: '2021-01-31T12:21:23Z'
  generation: 1
  managedFields:
    - apiVersion: kubevirt.io/v1alpha3
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:name.os.template.kubevirt.io/rhel6.10': {}
            'f:vm.kubevirt.io/flavor': {}
            'f:vm.kubevirt.io/os': {}
            'f:vm.kubevirt.io/validations': {}
            'f:vm.kubevirt.io/workload': {}
          'f:labels':
            'f:vm.kubevirt.io/template.version': {}
            'f:vm.kubevirt.io/template.namespace': {}
            'f:app': {}
            .: {}
            'f:os.template.kubevirt.io/rhel6.10': {}
            'f:vm.kubevirt.io/template.revision': {}
            'f:workload.template.kubevirt.io/server': {}
            'f:flavor.template.kubevirt.io/small': {}
            'f:vm.kubevirt.io/template': {}
        'f:spec':
          .: {}
          'f:dataVolumeTemplates': {}
          'f:running': {}
          'f:template':
            .: {}
            'f:metadata':
              .: {}
              'f:labels':
                .: {}
                'f:flavor.template.kubevirt.io/small': {}
                'f:kubevirt.io/domain': {}
                'f:kubevirt.io/size': {}
                'f:os.template.kubevirt.io/rhel6.10': {}
                'f:vm.kubevirt.io/name': {}
                'f:workload.template.kubevirt.io/server': {}
            'f:spec':
              .: {}
              'f:domain':
                .: {}
                'f:cpu':
                  .: {}
                  'f:cores': {}
                  'f:sockets': {}
                  'f:threads': {}
                'f:devices':
                  .: {}
                  'f:disks': {}
                  'f:interfaces': {}
                  'f:rng': {}
                  'f:useVirtioTransitional': {}
                'f:machine':
                  .: {}
                  'f:type': {}
                'f:resources':
                  .: {}
                  'f:requests':
                    .: {}
                    'f:memory': {}
              'f:evictionStrategy': {}
              'f:hostname': {}
              'f:networks': {}
              'f:terminationGracePeriodSeconds': {}
              'f:volumes': {}
      manager: Mozilla
      operation: Update
      time: '2021-01-31T12:21:23Z'
    - apiVersion: kubevirt.io/v1alpha3
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            'f:kubevirt.io/latest-observed-api-version': {}
            'f:kubevirt.io/storage-observed-api-version': {}
        'f:status':
          .: {}
          'f:conditions': {}
          'f:created': {}
          'f:ready': {}
          'f:volumeSnapshotStatuses': {}
      manager: virt-controller
      operation: Update
      time: '2021-01-31T12:22:49Z'
  namespace: rhel6
  labels:
    app: rhel6-canadian-mole
    flavor.template.kubevirt.io/small: 'true'
    os.template.kubevirt.io/rhel6.10: 'true'
    vm.kubevirt.io/template: rhel6-server-small
    vm.kubevirt.io/template.namespace: openshift
    vm.kubevirt.io/template.revision: '1'
    vm.kubevirt.io/template.version: v0.13.1
    workload.template.kubevirt.io/server: 'true'
spec:
  dataVolumeTemplates:
    - metadata:
        creationTimestamp: null
        name: rhel6-canadian-mole-rootdisk-e8bbk
      spec:
        pvc:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 20Gi
          storageClassName: standard
          volumeMode: Filesystem
        source:
          http:
            url: >-
              http://cnv-qe-server.rhevdev.lab.eng.rdu2.redhat.com/files/cnv-tests/rhel-images/rhel-610.qcow2
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        flavor.template.kubevirt.io/small: 'true'
        kubevirt.io/domain: rhel6-canadian-mole
        kubevirt.io/size: small
        os.template.kubevirt.io/rhel6.10: 'true'
        vm.kubevirt.io/name: rhel6-canadian-mole
        workload.template.kubevirt.io/server: 'true'
    spec:
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        devices:
          disks:
            - disk:
                bus: sata
              name: cloudinitdisk
            - bootOrder: 1
              disk:
                bus: sata
              name: rootdisk
          interfaces:
            - masquerade: {}
              model: e1000e
              name: default
          rng: {}
          useVirtioTransitional: true
        machine:
          type: pc-q35-rhel8.3.0
        resources:
          requests:
            memory: 2Gi
      evictionStrategy: LiveMigrate
      hostname: rhel6-canadian-mole
      networks:
        - name: default
          pod: {}
      terminationGracePeriodSeconds: 180
      volumes:
        - cloudInitNoCloud:
            userData: |
              #cloud-config
              user: cloud-user
              password: redhat
              chpasswd:
                expire: false
          name: cloudinitdisk
        - dataVolume:
            name: rhel6-canadian-mole-rootdisk-e8bbk
          name: rootdisk
status:
  conditions:
    - lastProbeTime: null
      lastTransitionTime: '2021-01-31T12:22:46Z'
      status: 'True'
      type: Ready
  created: true
  ready: true
  volumeSnapshotStatuses:
    - enabled: false
      name: cloudinitdisk
      reason: Volume type does not suport snapshots
    - enabled: false
      name: rootdisk
      reason: 'No Volume Snapshot Storage Class found for volume [rootdisk]'
*** This bug has been marked as a duplicate of bug 1911662 ***