Bug 2035237 - devices not removed from the definition after hot-unplug when JSON syntax for -device is used
Summary: devices not removed from the definition after hot-unplug when JSON syntax for -device is used
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: libvirt
Version: 8.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: Meina Li
URL:
Whiteboard:
Duplicates: 2035006 2037765
Depends On:
Blocks:
 
Reported: 2021-12-23 10:51 UTC by Meina Li
Modified: 2022-05-10 13:40 UTC
CC List: 18 users

Fixed In Version: libvirt-8.0.0-0rc1.1.module+el8.6.0+13853+e8cd34b9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-10 13:25:20 UTC
Type: Bug
Target Upstream Version: 8.0.0
Embargoed:




Links
System ID                                Private  Priority  Status  Summary  Last Updated
Red Hat Issue Tracker  RHELPLAN-106506   0        None      None    None     2021-12-23 10:58:59 UTC
Red Hat Product Errata RHSA-2022:1759    0        None      None    None     2022-05-10 13:25:59 UTC

Description Meina Li 2021-12-23 10:51:09 UTC
Description of problem:
Disk is not removed from the domain XML (dumpxml) after hot-unplugging a disk that was cold-plugged into the guest

Version-Release number of selected component (if applicable):
libvirt-7.10.0-1.module+el8.6.0+13502+4f24a11d.x86_64
qemu-kvm-6.2.0-1.module+el8.6.0+13725+61ae1949.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Prepare a disk image.
# ll /var/lib/avocado/data/avocado-vt/attach.img 
-rw-r--r--. 1 root root 196624 Dec 23 04:05 /var/lib/avocado/data/avocado-vt/attach.img
2. Cold plug the disk image to a shutoff guest.
# virsh domstate avocado-vt-vm1
shut off
# virsh attach-disk --domain avocado-vt-vm1 --source /var/lib/avocado/data/avocado-vt/attach.img --target vdb --driver qemu --config
Disk attached successfully
3. Start the guest and check the disk in guest.
# virsh start avocado-vt-vm1
Domain 'avocado-vt-vm1' started
# virsh domblklist avocado-vt-vm1
 Target   Source
-----------------------------------------------------------------------------
 vda      /var/lib/avocado/data/avocado-vt/images/jeos-27-x86_64.qcow2
 vdb      /var/lib/avocado/data/avocado-vt/attach.img
# virsh console avocado-vt-vm1
...
 # lsblk
NAME          MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
vda           252:0    0    10G  0 disk 
├─vda1        252:1    0   600M  0 part /boot/efi
├─vda2        252:2    0     1G  0 part /boot
└─vda3        252:3    0   8.4G  0 part 
  ├─rhel-root 253:0    0   7.4G  0 lvm  /
  └─rhel-swap 253:1    0     1G  0 lvm  [SWAP]
vdb           252:16   0 192.5K  0 disk

4. Detach the disk.
# virsh detach-disk avocado-vt-vm1 vdb
Disk detached successfully

5. Check the status of the detaching disk.
# virsh domblklist avocado-vt-vm1
 Target   Source
-----------------------------------------------------------------------------
 vda      /var/lib/avocado/data/avocado-vt/images/jeos-27-x86_64-ovmf.qcow2
 vdb      /var/lib/avocado/data/avocado-vt/attach.img
# virsh dumpxml avocado-vt-vm1 | grep /disk -B8
...
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/avocado/data/avocado-vt/attach.img' index='1'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>
# virsh console avocado-vt-vm1
...
  # lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda           252:0    0   10G  0 disk 
├─vda1        252:1    0  600M  0 part /boot/efi
├─vda2        252:2    0    1G  0 part /boot
└─vda3        252:3    0  8.4G  0 part 
  ├─rhel-root 253:0    0  7.4G  0 lvm  /
  └─rhel-swap 253:1    0    1G  0 lvm  [SWAP]


Actual results:
The disk still exists in the dumpxml and domblklist output of the guest.

Expected results:
The disk should be fully detached and no longer appear in dumpxml or domblklist.

Additional info:

Comment 1 Ján Tomko 2021-12-23 14:23:24 UTC
Since the following libvirt commit:
commit c9b13e05570d07addb4bfb86c5baf373064842e0

    qemu: Use JSON directly for '-device'

git describe: v7.8.0-225-gc9b13e0557 contains: v7.9.0-rc1~82

We only receive the DEVICE_DELETED event for the backend:
{
  "timestamp": {
    "seconds": 1640268422,
    "microseconds": 765168
  },
  "event": "DEVICE_DELETED",
  "data": {
    "path": "/machine/peripheral/virtio-disk2/virtio-backend"
  }
}

Before, we would get one for the frontend as well:

{
  "timestamp": {
    "seconds": 1640268422,
    "microseconds": 815923
  },
  "event": "DEVICE_DELETED",
  "data": {
    "device": "virtio-disk2",
    "path": "/machine/peripheral/virtio-disk2"
  }
}
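
These events can be watched live while reproducing the detach with virsh's QMP event monitor (a minimal sketch, assuming a running domain named avocado-vt-vm1 as in the description; any domain works):

# virsh qemu-monitor-event --domain avocado-vt-vm1 --event DEVICE_DELETED --loop --pretty

On an affected build only the backend event (path ending in "/virtio-backend") arrives after "virsh detach-disk"; on an unaffected build the frontend event carrying "device": "virtio-disk2" follows shortly after.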

Comment 2 Peter Krempa 2022-01-03 14:44:25 UTC
I've filed https://bugzilla.redhat.com/show_bug.cgi?id=2036669 to track the qemu regression of not sending DEVICE_DELETED for the frontend when JSON syntax is used for -device.

For now libvirt will need to revert the support for JSON with -device until qemu is fixed. This bug will track the reversion.
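
Whether a given build has detected the JSON capability can be checked in libvirt's qemu capability cache (a hedged sketch; /var/cache/libvirt/qemu/capabilities is the default cache location and may differ on non-default installs):

# grep -l "device.json" /var/cache/libvirt/qemu/capabilities/*.xml

A cached capabilities XML containing <flag name='device.json'/> means that libvirt/qemu combination drives -device with JSON and is affected.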

Comment 4 Peter Krempa 2022-01-06 14:13:31 UTC
*** Bug 2037765 has been marked as a duplicate of this bug. ***

Comment 5 Peter Krempa 2022-01-10 08:43:17 UTC
Upstream works this around by disabling the use of JSON with -device for now until qemu fixes it:

commit bd3d00babc9b9b51bf2ee3ee39fafe479b8f8ae3
Author: Peter Krempa <pkrempa>
Date:   Mon Jan 3 15:50:49 2022 +0100

    qemu: Revert to using non-JSON commandline for -device
    
    When -device is configured via JSON a bug [1] is triggered in qemu where
    the DEVICE_DELETED event for the removal of the device frontend is no
    longer delivered to libvirt. Without the DEVICE_DELETED event we don't
    remove the corresponding entries in the VM XML.
    
    Until qemu is fixed we must stop using the JSON syntax for -device.
    
    This patch removes the detection of the capability. The capability is
    used only during startup of a fresh VM so we don't need to consider any
    compatibility steps for existing VMs.
    
    For users who wish to use 'libvirt-7.9' and 'libvirt-7.10' with
    'qemu-6.2' there are two possible workarounds:
    
     - filter out the 'device.json' qemu capability '/etc/libvirt/qemu.conf':
    
       capability_filters = [ "device.json" ]
    
     - filter out the 'device.json' qemu capability via qemu namespace XML:
    
       <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
         [...]
         <qemu:capabilities>
           <qemu:del capability='device.json'/>
         </qemu:capabilities>
       </domain>
    
    We must never again use the same capability name as we are now
    instructing users to filter it as a workaround so once qemu is fixed
    we'll need to pick a new capability value for it.
    
    [1] https://bugzilla.redhat.com/show_bug.cgi?id=2036669
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=2035237
    Signed-off-by: Peter Krempa <pkrempa>
    Reviewed-by: Ani Sinha <ani>
    Reviewed-by: Ján Tomko <jtomko>

v7.10.0-470-gbd3d00babc

(Note that the workaround described in the commit message is _NOT_ needed with libvirt-8.0, that's just for fixing older versions.)
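
On the affected libvirt-7.9/7.10 builds, the qemu.conf workaround above can be applied and checked roughly as follows (a sketch assuming the default /etc/libvirt/qemu.conf and per-domain log locations; restart the guests afterwards so they pick up the filtered capability):

# echo 'capability_filters = [ "device.json" ]' >> /etc/libvirt/qemu.conf
# systemctl restart libvirtd
# grep -- '-device' /var/log/libvirt/qemu/avocado-vt-vm1.log | tail -1

With the filter active, the logged qemu command line shows -device in the legacy "driver,prop=value,..." form instead of JSON objects ("{...}").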

Comment 9 Meina Li 2022-01-18 03:54:34 UTC
Verified Version:
libvirt-8.0.0-1.module+el8.6.0+13888+55157bfb.x86_64
qemu-kvm-6.2.0-2.module+el8.6.0+13738+17338784.x86_64
kernel-4.18.0-359.el8.x86_64

Verified Steps:
S1: Hot-unplug disk from guest.
1. Prepare a guest and a disk image.
# ll /var/lib/libvirt/images/test.img 
-rw-r--r--. 1 root root 209715200 Jan 17 22:19 /var/lib/libvirt/images/test.img
# virsh domstate lmn
shut off
2. Attach a disk with --config.
# virsh attach-disk --domain lmn --source /var/lib/libvirt/images/test.img --target vdb --driver qemu --config
Disk attached successfully
3. Start the guest.
# virsh start lmn
Domain 'lmn' started
# virsh domblklist lmn
 Target   Source
---------------------------------------------
 vda      /var/lib/libvirt/images/lmn.qcow2
 vdb      /var/lib/libvirt/images/test.img
4. Check the dumpxml.
# virsh dumpxml lmn | grep /disk -B7
......
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/test.img' index='1'/>
      <backingStore/>
      <target dev='vdb' bus='virtio'/>
      <alias name='virtio-disk1'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </disk>
5. Detach the disk.
# virsh detach-disk lmn vdb
Disk detached successfully
# virsh dumpxml lmn | grep /disk -B7
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/lmn.qcow2' index='2'/>
      <backingStore/>
      <target dev='vda' bus='virtio'/>
      <alias name='virtio-disk0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </disk>
S2: Check the related jobs of features in this bug, and find no related qemu cmd issues.
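
The frontend removal can also be confirmed from the libvirt side by listening for the device-removed event during the detach (a minimal sketch, reusing the domain name lmn from the steps above; the exact output format may vary between virsh versions):

# virsh event --domain lmn --event device-removed &
# virsh detach-disk lmn vdb
Disk detached successfully
event 'device-removed' for domain 'lmn': virtio-disk1

Receiving device-removed shows that libvirt got the frontend DEVICE_DELETED and dropped the disk from the live definition.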

Comment 10 Peter Krempa 2022-01-18 15:01:27 UTC
*** Bug 2035006 has been marked as a duplicate of this bug. ***

Comment 12 errata-xmlrpc 2022-05-10 13:25:20 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: virt:rhel and virt-devel:rhel security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1759

