Bug 1794243 - Default network does not work in rhel6.10 q35 VMI
Summary: Default network does not work in rhel6.10 q35 VMI
Keywords:
Status: CLOSED DUPLICATE of bug 1911662
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Networking
Version: 2.2.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 4.9.0
Assignee: omergi
QA Contact: Meni Yakove
URL:
Whiteboard: libvirt_CNV_INT
Duplicates: 1788923 (view as bug list)
Depends On:
Blocks:
TreeView+ depends on / blocked
 
Reported: 2020-01-23 02:37 UTC by chhu
Modified: 2021-08-18 09:26 UTC (History)
13 users (show)

Fixed In Version: virt-launcher-container-v4.9.0-27
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-08-18 09:26:23 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
openshift-cnv_asb-vmi-nfs-rhel.xml (6.58 KB, text/plain)
2020-01-23 02:37 UTC, chhu
asb-vmi-nfs-rhel.yaml (1.27 KB, text/plain)
2020-01-23 02:38 UTC, chhu
rhel6_10-rtl8139.yaml (2.87 KB, text/plain)
2020-04-02 12:00 UTC, chhu
rhel6_10-virtio.yaml (2.87 KB, text/plain)
2020-04-02 12:01 UTC, chhu
asb-vmi-nfs-rhel-virtio.yaml (1.38 KB, text/plain)
2020-04-02 12:02 UTC, chhu


Links
System ID Private Priority Status Summary Last Updated
Github kubevirt common-templates pull 292 0 None closed use rtl8139 interface model on rhel6 and centos6 2021-01-31 12:16:38 UTC

Internal Links: 1911662

Description chhu 2020-01-23 02:37:05 UTC
Created attachment 1654764 [details]
openshift-cnv_asb-vmi-nfs-rhel.xml

Test on packages: 
Guest: 
RHEL-6.10-x86_64: kernel: 2.6.32-754.el6.x86_64

cnv host: rhel7.7.z
kernel: 3.10.0-1062.9.1.el7.x86_64
libvirt-daemon-driver-qemu-4.5.0-23.el7_7.5.x86_64
qemu-kvm-rhev-2.12.0-33.el7_7.8.x86_64

virt-launcher pod:
libvirt-daemon-driver-qemu-5.6.0-6.module+el8.1.0+4244+9aa4e6bb.x86_64
qemu-kvm-core-4.1.0-14.module+el8.1.0+4548+ed1300f4.x86_64

Test steps:
1. Create a PV and PVC based on NFS storage in cnv2.2
2. Create a q35 rhel6.10 vmi with the yaml:
# oc create -f asb-vmi-nfs-rhel.yaml

3. Check the guest xml in virt-launcher pod: openshift-cnv_asb-vmi-nfs-rhel.xml
sh-4.4# virsh list --all
 Id   Name                             State
------------------------------------------------
 1    openshift-cnv_asb-vmi-nfs-rhel   running

# virsh dumpxml openshift-cnv_asb-vmi-nfs-rhel| grep interface -A 10
    <interface type='bridge'>
      <mac address='0a:58:0a:82:00:43'/>
      <source bridge='k6t-eth0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <mtu size='1450'/>
      <alias name='ua-default'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>

4. Check in the guest: `ifconfig -a` lists no interfaces, but lspci shows an Ethernet controller
# ifconfig -a
# lspci|grep Ethernet
01:00.0 Ethernet controller: Red Hat, Inc. Virtio network device (rev 01)

Actual results:
In step 4: the device is plugged into a PCIe controller and the guest fails to find it

Expected results:
In step 4: the guest network works well

Additional info:
Guest xml: openshift-cnv_asb-vmi-nfs-rhel.xml
Guest yaml file: asb-vmi-nfs-rhel.yaml

Comment 1 chhu 2020-01-23 02:38:25 UTC
Created attachment 1654765 [details]
asb-vmi-nfs-rhel.yaml

Comment 3 Dan Kenigsberg 2020-02-19 12:29:08 UTC
The pod network does not really work on el7 hosts (see bug 1741626). 
Can you share the complete `oc get vmi -o yaml` ?
Can you try a secondary (multus) interface with your el6 guest?

If kubevirt has to support https://libvirt.org/formatdomain.html#elementsVirtioTransitional we should categorize this as a feature (we have to change our API to support this).
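For reference, the virtio-transitional model described in the libvirt documentation linked above would appear in the domain XML roughly as follows. This is a sketch of the libvirt syntax only, not XML generated by this deployment:

```xml
<interface type='bridge'>
  <source bridge='k6t-eth0'/>
  <!-- virtio-transitional exposes a legacy-capable virtio device on a
       conventional PCI slot, which pre-virtio-1.0 guests such as RHEL 6
       can drive; plain 'virtio' on q35 lands on PCIe as virtio-1.0-only -->
  <model type='virtio-transitional'/>
</interface>
```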

Comment 5 Dan Kenigsberg 2020-02-24 07:23:58 UTC
Please do not clear the needinfo flag until you provide the needed info.

Comment 7 Dan Kenigsberg 2020-03-12 14:26:32 UTC
*** Bug 1788923 has been marked as a duplicate of this bug. ***

Comment 8 Ruth Netser 2020-03-16 15:01:52 UTC
As a workaround, set the VM's interfaces->model to e1000 and set networkInterfaceMultiqueue to False
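In VMI terms, the workaround would look roughly like the fragment below. This is a sketch; field paths follow the KubeVirt interface API (in a VM object the same fields sit under spec.template.spec):

```yaml
spec:
  domain:
    devices:
      interfaces:
        - name: default
          masquerade: {}
          # rhel6 cannot drive the modern-only virtio NIC on q35;
          # fall back to an emulated e1000 the guest has a driver for
          model: e1000
      # e1000 does not support multiqueue, so disable it explicitly
      networkInterfaceMultiqueue: false
```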

Comment 9 Petr Horáček 2020-03-17 19:49:28 UTC
Since we have a workaround, there is not enough info, and I don't believe it is a blocker, I'm moving it to 2.4.

Comment 10 chhu 2020-04-02 11:58:36 UTC
Hi, Nelly and Dan

Sorry for the late reply. I tried a rhel6.10 guest on cnv2.3:
the rtl8139 interface works well; the virtio interface has the same issue as in cnv2.2.

I updated the info for `oc get vmi -o yaml` in these files:
- for the rtl8139 vmi: rhel6_10-rtl8139.yaml
- for the virtio vmi: rhel6_10-virtio.yaml

If you need more information, please feel free to let me know, thank you!

Additional information:
------------------------------------------------------------
Guest: 
RHEL-6.10-x86_64: kernel: 2.6.32-754.el6.x86_64

cnv host: rhel7.8
kernel: 3.10.0-1127.el7.x86_64
libvirt-4.5.0-33.el7.x86_64
qemu-img-rhev-2.12.0-44.el7_8.1.x86_64

virt-launcher pod:
libvirt-daemon-kvm-5.6.0-10.module+el8.1.1+5309+6d656f05.x86_64
qemu-kvm-core-4.1.0-23.module+el8.1.1+5748+5fcc84a8.1.x86_64

Test steps:
1. Create a rhel6.10 vmi with the rtl8139 interface: asb-vmi-nfs-rhel.yaml
   Log in to the vmi:
- ifconfig lists eth0 with an IP address
- lspci|grep Ethernet
  02:01.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8100/8101L/8139 PCI Fast Ethernet Adapter (rev 20)
- ping from host to vm and from vm to host works well
- guest xml:
    <interface type='bridge'>
      <mac address='0a:58:0a:81:02:0c'/>
      <source bridge='k6t-eth0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <mtu size='1450'/>
      <alias name='ua-default'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </interface>

2. Create a rhel6.10 vmi with the virtio interface: asb-vmi-nfs-rhel-virtio.yaml
   Log in to the vmi:
- ifconfig shows no interface
- lspci|grep Ethernet
  01:00.0 Ethernet controller: Red Hat, Inc. Virtio network device (rev 01)
- guest xml:
    <interface type='bridge'>
      <mac address='0a:58:0a:81:02:0e'/>
      <source bridge='k6t-eth0'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <mtu size='1450'/>
      <alias name='ua-default'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>

Comment 11 chhu 2020-04-02 12:00:47 UTC
Created attachment 1675707 [details]
rhel6_10-rtl8139.yaml

Comment 12 chhu 2020-04-02 12:01:25 UTC
Created attachment 1675708 [details]
rhel6_10-virtio.yaml

Comment 13 chhu 2020-04-02 12:02:07 UTC
Created attachment 1675709 [details]
asb-vmi-nfs-rhel-virtio.yaml

Comment 14 Petr Horáček 2020-06-04 08:07:23 UTC
This enhancement did not make it to U/S before the feature freeze. Since it is not a blocker and has a workaround, I'm postponing it to 2.5.

Comment 15 Dan Kenigsberg 2020-11-15 14:14:54 UTC
Idea: our current rhel6 template leaves the interface model undefined, which KubeVirt interprets as virtio https://kubevirt.io/api-reference/master/definitions.html#_v1_interface
I think we should use an explicit value that rhel6 actually supports (until we add proper support for virtio-transitional).
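The common-templates change linked above takes this approach. Pinning an explicit model in the rhel6 template would look something like the fragment below (a sketch of the idea, not the exact PR diff; rtl8139 is the model the linked PR chose):

```yaml
spec:
  domain:
    devices:
      interfaces:
        - name: default
          masquerade: {}
          # explicit model instead of the implicit virtio default,
          # since rhel6 cannot drive modern-only virtio on q35
          model: rtl8139
```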

Comment 16 Roman Mohr 2020-12-30 13:47:57 UTC
Posted https://github.com/kubevirt/kubevirt/pull/4730 in kubevirt/kubevirt, which should allow rhel6 guests to boot properly. Shall we proceed with a new bug, or shall I take this one over?

Comment 17 Roman Mohr 2020-12-30 15:32:02 UTC
(In reply to Roman Mohr from comment #16)
> Posted https://github.com/kubevirt/kubevirt/pull/4730 in kubevirt/kubevirt,
> which should allow rhel6 guests to boot properly. Shall we proceed
> with a new bug, or shall I take this one over?

Created https://bugzilla.redhat.com/show_bug.cgi?id=1911662 to track this.

Comment 18 Israel Pinto 2021-01-31 14:26:16 UTC
Verified with:
virt-operator-container-v2.6.0-106
virt-launcher-container-v2.6.0-106


Create a VM from the CNV common template to get useVirtioTransitional: true

Tested:
1. Connect via console and VNC
2. Connect with SSH
All PASS
 

[1]
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1alpha3
    kubevirt.io/storage-observed-api-version: v1alpha3
    name.os.template.kubevirt.io/rhel6.10: Red Hat Enterprise Linux 6.0 or higher
    vm.kubevirt.io/flavor: small
    vm.kubevirt.io/os: rhel6
    vm.kubevirt.io/validations: |
      [
        {
          "name": "minimal-required-memory",
          "path": "jsonpath::.spec.domain.resources.requests.memory",
          "rule": "integer",
          "message": "This VM requires more memory.",
          "min": 536870912
        }
      ]
    vm.kubevirt.io/workload: server
  selfLink: >-
    /apis/kubevirt.io/v1alpha3/namespaces/rhel6/virtualmachines/rhel6-canadian-mole
  resourceVersion: '2893630'
  name: rhel6-canadian-mole
  uid: 878b31e4-9e64-4951-b402-11840b9a5915
  creationTimestamp: '2021-01-31T12:21:23Z'
  generation: 1
  managedFields:
    - apiVersion: kubevirt.io/v1alpha3
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            .: {}
            'f:name.os.template.kubevirt.io/rhel6.10': {}
            'f:vm.kubevirt.io/flavor': {}
            'f:vm.kubevirt.io/os': {}
            'f:vm.kubevirt.io/validations': {}
            'f:vm.kubevirt.io/workload': {}
          'f:labels':
            'f:vm.kubevirt.io/template.version': {}
            'f:vm.kubevirt.io/template.namespace': {}
            'f:app': {}
            .: {}
            'f:os.template.kubevirt.io/rhel6.10': {}
            'f:vm.kubevirt.io/template.revision': {}
            'f:workload.template.kubevirt.io/server': {}
            'f:flavor.template.kubevirt.io/small': {}
            'f:vm.kubevirt.io/template': {}
        'f:spec':
          .: {}
          'f:dataVolumeTemplates': {}
          'f:running': {}
          'f:template':
            .: {}
            'f:metadata':
              .: {}
              'f:labels':
                .: {}
                'f:flavor.template.kubevirt.io/small': {}
                'f:kubevirt.io/domain': {}
                'f:kubevirt.io/size': {}
                'f:os.template.kubevirt.io/rhel6.10': {}
                'f:vm.kubevirt.io/name': {}
                'f:workload.template.kubevirt.io/server': {}
            'f:spec':
              .: {}
              'f:domain':
                .: {}
                'f:cpu':
                  .: {}
                  'f:cores': {}
                  'f:sockets': {}
                  'f:threads': {}
                'f:devices':
                  .: {}
                  'f:disks': {}
                  'f:interfaces': {}
                  'f:rng': {}
                  'f:useVirtioTransitional': {}
                'f:machine':
                  .: {}
                  'f:type': {}
                'f:resources':
                  .: {}
                  'f:requests':
                    .: {}
                    'f:memory': {}
              'f:evictionStrategy': {}
              'f:hostname': {}
              'f:networks': {}
              'f:terminationGracePeriodSeconds': {}
              'f:volumes': {}
      manager: Mozilla
      operation: Update
      time: '2021-01-31T12:21:23Z'
    - apiVersion: kubevirt.io/v1alpha3
      fieldsType: FieldsV1
      fieldsV1:
        'f:metadata':
          'f:annotations':
            'f:kubevirt.io/latest-observed-api-version': {}
            'f:kubevirt.io/storage-observed-api-version': {}
        'f:status':
          .: {}
          'f:conditions': {}
          'f:created': {}
          'f:ready': {}
          'f:volumeSnapshotStatuses': {}
      manager: virt-controller
      operation: Update
      time: '2021-01-31T12:22:49Z'
  namespace: rhel6
  labels:
    app: rhel6-canadian-mole
    flavor.template.kubevirt.io/small: 'true'
    os.template.kubevirt.io/rhel6.10: 'true'
    vm.kubevirt.io/template: rhel6-server-small
    vm.kubevirt.io/template.namespace: openshift
    vm.kubevirt.io/template.revision: '1'
    vm.kubevirt.io/template.version: v0.13.1
    workload.template.kubevirt.io/server: 'true'
spec:
  dataVolumeTemplates:
    - metadata:
        creationTimestamp: null
        name: rhel6-canadian-mole-rootdisk-e8bbk
      spec:
        pvc:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 20Gi
          storageClassName: standard
          volumeMode: Filesystem
        source:
          http:
            url: >-
              http://cnv-qe-server.rhevdev.lab.eng.rdu2.redhat.com/files/cnv-tests/rhel-images/rhel-610.qcow2
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        flavor.template.kubevirt.io/small: 'true'
        kubevirt.io/domain: rhel6-canadian-mole
        kubevirt.io/size: small
        os.template.kubevirt.io/rhel6.10: 'true'
        vm.kubevirt.io/name: rhel6-canadian-mole
        workload.template.kubevirt.io/server: 'true'
    spec:
      domain:
        cpu:
          cores: 1
          sockets: 1
          threads: 1
        devices:
          disks:
            - disk:
                bus: sata
              name: cloudinitdisk
            - bootOrder: 1
              disk:
                bus: sata
              name: rootdisk
          interfaces:
            - masquerade: {}
              model: e1000e
              name: default
          rng: {}
          useVirtioTransitional: true
        machine:
          type: pc-q35-rhel8.3.0
        resources:
          requests:
            memory: 2Gi
      evictionStrategy: LiveMigrate
      hostname: rhel6-canadian-mole
      networks:
        - name: default
          pod: {}
      terminationGracePeriodSeconds: 180
      volumes:
        - cloudInitNoCloud:
            userData: |
              #cloud-config
              user: cloud-user
              password: redhat
              chpasswd:
                expire: false
          name: cloudinitdisk
        - dataVolume:
            name: rhel6-canadian-mole-rootdisk-e8bbk
          name: rootdisk
status:
  conditions:
    - lastProbeTime: null
      lastTransitionTime: '2021-01-31T12:22:46Z'
      status: 'True'
      type: Ready
  created: true
  ready: true
  volumeSnapshotStatuses:
    - enabled: false
      name: cloudinitdisk
      reason: Volume type does not support snapshots
    - enabled: false
      name: rootdisk
      reason: 'No Volume Snapshot Storage Class found for volume [rootdisk]'

Comment 19 Petr Horáček 2021-08-18 09:26:23 UTC

*** This bug has been marked as a duplicate of bug 1911662 ***

