Bug 2002219 - Failed to hotplug volume to VM with IOThreads
Summary: Failed to hotplug volume to VM with IOThreads
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 4.8.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.9.1
Assignee: Alexander Wels
QA Contact: Yan Du
URL:
Whiteboard: libvirt_CNV_INT
Depends On:
Blocks:
 
Reported: 2021-09-08 09:49 UTC by chhu
Modified: 2021-12-13 19:59 UTC
CC List: 6 users

Fixed In Version: 4.9.1-21
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-12-13 19:59:01 UTC
Target Upstream Version:
Embargoed:




Links:
- GitHub kubevirt/kubevirt pull 6425 (Merged): Handle iothreads for hotplugged disks. Last updated 2021-11-08 14:54:57 UTC
- GitHub kubevirt/kubevirt pull 6499 (Merged): [release-0.44] Handle iothreads for hotplugged disks. Last updated 2021-11-09 12:27:08 UTC
- Red Hat Product Errata RHBA-2021:5091. Last updated 2021-12-13 19:59:17 UTC

Description chhu 2021-09-08 09:49:30 UTC
Description of problem:
Failed to hotplug volume to VM with IOThreads

Version-Release number of selected component (if applicable):
CNV 4.8.1

How reproducible:
100%

Steps to Reproduce:
1. Create the PV and VM; details are in the attached yaml files (a sketch of the relevant VM settings follows the step output below).
# oc create -f asb-pv-dv-nfs-rhel.yaml
persistentvolume/asb-pv-dv-nfs-rhel created

# oc create -f asb-vm-dv-nfs.yaml
virtualmachine.kubevirt.io/asb-vm-dv-nfs-rhel created

# oc get vmi
NAME                 AGE     PHASE     IP             NODENAME
asb-vm-dv-nfs-rhel   3m59s   Running   10.128.0.137   dell-per730-64.lab.eng.pek2.redhat.com
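
The relevant part of the VM definition looks roughly like the sketch below. This is an illustrative reconstruction from the outputs quoted in this report, not the attached asb-vm-dv-nfs.yaml; field values such as the memory request are assumptions.
-------------------------------------------------------------
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: asb-vm-dv-nfs-rhel
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            bootOrder: 1
            # dedicatedIOThread is the setting that triggers the failure when
            # a volume is hotplugged later (removing it in step 7 avoids the
            # libvirt error).
            dedicatedIOThread: true
            disk:
              bus: virtio
        resources:
          requests:
            memory: 2Gi   # assumption, not taken from the attached yaml
      volumes:
      - name: rootdisk
        persistentVolumeClaim:
          claimName: asb-dv-nfs-rhel
-------------------------------------------------------------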

2. Log in to the VM console; there is one disk, /dev/vda. Check the VM XML: it has iothread settings.

# virtctl console asb-vm-dv-nfs-rhel
check that the disk /dev/vda is present

# oc rsh virt-launcher-asb-vm-dv-nfs-rhel-c2qdn
sh-4.4# virsh dumpxml 1|grep disk -A 8
    <disk type='file' device='disk' model='virtio-non-transitional'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' discard='unmap' iothread='1' queues='2'/>
      <source file='/var/run/kubevirt-private/vmi-disks/rootdisk/disk.img' index='1'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <boot order='1'/>
      <alias name='ua-rootdisk'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>


3. Create the PV and PVC; details are in the attached yaml files (a sketch follows the listing below).
# oc create -f asb-pv-disk1.yaml
# oc create -f asb-pvc-disk1.yaml

# oc get pv
NAME                 CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                           STORAGECLASS   REASON   AGE
asb-pv-disk1         1Gi        RWX            Retain           Bound    openshift-cnv/asb-pvc-disk1     nfs                     122m
asb-pv-dv-nfs-rhel   12Gi       RWX            Retain           Bound    openshift-cnv/asb-dv-nfs-rhel   nfs                     7m14s

# oc get pvc
NAME              STATUS   VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
asb-dv-nfs-rhel   Bound    asb-pv-dv-nfs-rhel   12Gi       RWX            nfs            7m8s
asb-pvc-disk1     Bound    asb-pv-disk1         1Gi        RWX            nfs            122m
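
The PV/PVC pair is an ordinary NFS-backed RWX volume. A minimal sketch built from the oc get output above is shown below; it is not the attached asb-pv-disk1.yaml / asb-pvc-disk1.yaml, and the NFS server and export path are placeholders.
---------------------------------
apiVersion: v1
kind: PersistentVolume
metadata:
  name: asb-pv-disk1
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    server: nfs.example.com    # placeholder
    path: /exports/asb-disk1   # placeholder
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: asb-pvc-disk1
  namespace: openshift-cnv
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  volumeName: asb-pv-disk1
---------------------------------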

4. Hotplug the volume to the VM from the command line; the command itself completes without error.
# virtctl addvolume asb-vm-dv-nfs-rhel --volume-name=asb-pvc-disk1 --serial=123456

5. The hp-volume pod is running, but oc describe vmi shows a libvirt error:
"virt-handler               server error. command SyncVMI failed: "LibvirtError(Code=67, Domain=10, Message='unsupported configuration: IOThreads not available for bus scsi target sda')""

# oc describe vmi >vmi.yaml
------------------------------------------------------
  Warning  SyncFailed        4m26s (x15 over 4m41s)  virt-handler               server error. command SyncVMI failed: "LibvirtError(Code=67, Domain=10, Message='unsupported configuration: IOThreads not available for bus scsi target sda')"
------------------------------------------------------

# oc get pod|grep volume
hp-volume-fbb5r                                       1/1     Running   0          39s

# oc get vmi asb-vm-dv-nfs-rhel -o yaml >asb-vm-dv-nfs-rhel-vmi.yaml
-----------------------------------
  volumeStatus:
  - hotplugVolume:
      attachPodName: hp-volume-fbb5r
      attachPodUID: bfffd77c-f8c6-44ef-bc23-cecac3d8bb88
    message: Volume asb-pvc-disk1 has been mounted in virt-launcher pod
    name: asb-pvc-disk1
    phase: MountedToPod
    reason: VolumeMountedToPod
    target: ""
  - name: rootdisk
    target: vda
---------------------------------

6. Log in to the VM: there is no newly attached disk. Check the guest XML: no new disk appears there either.

7. Remove "dedicatedIOThread: true" from asb-vm-dv-nfs.yaml and redo steps 1-5. There is no libvirt error; after logging in to the VM the new disk /dev/sda is attached, and the guest XML shows the new disk:

-------------------------------------------------------------
Disk /dev/sda: 967 MiB, 1013972992 bytes, 1980416 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

    volumeStatus:
    - hotplugVolume:
        attachPodName: hp-volume-w6tf8
        attachPodUID: fa924df7-d77d-4862-aa23-58ebe49608df
      message: Successfully attach hotplugged volume asb-pvc-disk1 to VM
      name: asb-pvc-disk1
      phase: Ready
      reason: VolumeReady
      target: sda
    - name: rootdisk
      target: vda

    <disk type='file' device='disk' model='virtio-non-transitional'>
      <driver name='qemu' type='raw' cache='none' error_policy='stop' discard='unmap' queues='2'/>
      <source file='/var/run/kubevirt-private/vmi-disks/rootdisk/disk.img' index='1'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <boot order='1'/>
      <alias name='ua-rootdisk'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' error_policy='stop' discard='unmap'/>
      <source file='/var/run/kubevirt/hotplug-disks/asb-pvc-disk1/disk.img' index='2'/>
      <backingStore/>
      <target dev='sda' bus='scsi'/>
      <serial>123456</serial>
      <alias name='ua-asb-pvc-disk1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

Actual results:
1. Hotplugging a volume from the command line uses bus=scsi by default.
2. In steps 5-6, a libvirt error is hit, no new disk is attached to the VM, and the VM XML shows no new disk.

Expected results:
1. Hotplugging a volume from the command line should use bus=virtio by default, or let the user set the bus value.
2. In step 6, the new disk should be attached in the VM and visible in the VM XML.

Additional info:
- The yaml files used in steps 1-3 are attached to this bug.

Comment 5 Kedar Bidarkar 2021-09-08 12:16:27 UTC
@awels, we think this is storage related; could you take a look at this bug?

Comment 6 Roman Mohr 2021-09-08 12:18:04 UTC
Alexander, looks like we need some special handling when the VM has iothreads. Would be great if you could have a look.

Comment 8 Alexander Wels 2021-09-15 14:21:52 UTC
Hi, yes, I will take a look. My first guess is that I need to enable IOThreads on the SCSI controller if there are disks with IOThreads on the VM.
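
For illustration only: if the fix works as described here, the generated domain XML for a VM with dedicated IOThreads would carry an iothread on the virtio-scsi controller that backs hotplugged disks, roughly as sketched below. This is an assumption based on this comment and libvirt's documented controller syntax, not an excerpt from the actual change (see the linked pull requests).
-----------------------------------
<controller type='scsi' index='0' model='virtio-scsi'>
  <!-- hypothetical result: giving the hotplug SCSI controller an iothread
       avoids "IOThreads not available for bus scsi target sda" -->
  <driver iothread='1'/>
</controller>
-----------------------------------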

Comment 9 Alexander Wels 2021-09-16 19:06:30 UTC
Created a PR to fix this issue.

Comment 10 Yan Du 2021-11-16 14:38:51 UTC
Tested on CNV 4.9.1-27; the issue has been fixed.

Events:
  Type    Reason              Age                From                       Message
  ----    ------              ----               ----                       -------
  Normal  SuccessfulCreate    103m               virtualmachine-controller  Created virtual machine pod virt-launcher-asb-vm-dv-nfs-rhel-k2nn6
  Normal  Created             103m               virt-handler               VirtualMachineInstance defined.
  Normal  Started             103m               virt-handler               VirtualMachineInstance started.
  Normal  SuccessfulCreate    70s                virtualmachine-controller  Created attachment pod hp-volume-gnrjd
  Normal  SuccessfulCreate    65s (x8 over 70s)  virtualmachine-controller  Created hotplug attachment pod hp-volume-gnrjd, for volume blank-dv
  Normal  VolumeMountedToPod  65s                virt-handler               Volume blank-dv has been mounted in virt-launcher pod
  Normal  VolumeReady         64s                virt-handler               Successfully attach hotplugged volume blank-dv to VM

  volumeStatus:
  - hotplugVolume:
      attachPodName: hp-volume-gnrjd
      attachPodUID: 0e37c116-c723-4db9-b13f-55309d0621db
    message: Successfully attach hotplugged volume blank-dv to VM
    name: blank-dv
    phase: Ready
    reason: VolumeReady
    target: sda
  - name: rootdisk
    target: vda

Comment 16 errata-xmlrpc 2021-12-13 19:59:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Virtualization 4.9.1 Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:5091

