Bug 1510323 - libvirtd crashes when a cdrom device is updated twice
Summary: libvirtd crashes when a cdrom device is updated twice
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: jiyan
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-07 08:33 UTC by yafu
Modified: 2018-04-10 11:00 UTC
CC List: 9 users

Fixed In Version: libvirt-3.9.0-2.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 10:59:09 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2018:0704 0 None None None 2018-04-10 11:00:18 UTC

Description yafu 2017-11-07 08:33:49 UTC
Description of problem:
libvirtd crashes when a cdrom device is updated twice.

Version-Release number of selected component (if applicable):
libvirt-3.9.0-1.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Create two images:
#qemu-img create -f qcow2 /var/lib/libvirt/images/cdrom1.img 10M
#qemu-img create -f qcow2 /var/lib/libvirt/images/cdrom2.img 10M

2. Prepare two cdrom disk XML files as follows:
#cat cdrom1.xml
 <disk type='file' device='cdrom'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/cdrom1.img'/>
      <backingStore/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
#cat cdrom2.xml
<disk type='file' device='cdrom'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/cdrom2.img'/>
      <backingStore/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>

3. Start a guest with a cdrom disk:
#virsh dumpxml avocado-vt-vm1
...
 <disk type='file' device='cdrom'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/cdrom1.img'/>
      <backingStore/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <alias name='ide0-1-0'/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
...

4. Update the cdrom device:
#virsh update-device avocado-vt-vm1 cdrom2.xml
Device updated successfully

5. Update the cdrom device again:
#virsh update-device avocado-vt-vm1 cdrom1.xml
error: Disconnected from qemu:///system due to end of file
error: Failed to update device from cdrom1.xml
error: End of file while reading data: Input/output error
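For regression testing, steps 2-5 can also be driven through the libvirt Python bindings instead of virsh. A minimal sketch, assuming the `libvirt` package is installed and a guest named `avocado-vt-vm1` is running; `cdrom_xml()` is a helper invented here, not libvirt API:

```python
# Sketch of the reproducer via the libvirt Python bindings.
import xml.etree.ElementTree as ET

def cdrom_xml(image_path):
    """Build a cdrom <disk> definition equivalent to cdrom1.xml/cdrom2.xml."""
    disk = ET.Element('disk', type='file', device='cdrom')
    ET.SubElement(disk, 'driver', name='qemu', type='qcow2')
    ET.SubElement(disk, 'source', file=image_path)
    ET.SubElement(disk, 'backingStore')
    ET.SubElement(disk, 'target', dev='hdc', bus='ide')
    ET.SubElement(disk, 'readonly')
    ET.SubElement(disk, 'alias', name='ide0-1-0')
    ET.SubElement(disk, 'address', type='drive', controller='0',
                  bus='1', target='0', unit='0')
    return ET.tostring(disk, encoding='unicode')

def reproduce():
    import libvirt  # needs a running libvirtd
    dom = libvirt.open('qemu:///system').lookupByName('avocado-vt-vm1')
    # flags=0 matches the backtrace (virsh update-device without --live/--config)
    dom.updateDeviceFlags(cdrom_xml('/var/lib/libvirt/images/cdrom2.img'), 0)
    # The second update is the one that crashed libvirtd on libvirt-3.9.0-1.el7.
    dom.updateDeviceFlags(cdrom_xml('/var/lib/libvirt/images/cdrom1.img'), 0)
```

On the fixed build both `updateDeviceFlags` calls succeed; on libvirt-3.9.0-1.el7 the second call kills libvirtd.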

Actual results:
libvirtd crashes when the cdrom device is updated twice.

Expected results:
libvirtd should not crash after the cdrom device is updated multiple times.

Additional info:
1. It works well with libvirt-3.2.0-14.el7_4.1.x86_64.

2. The backtrace of the crashed libvirtd is as follows:
(gdb) t a a bt

Thread 18 (Thread 0x7f44907e0700 (LWP 51387)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x7f44882ebea8, m=m@entry=0x7f44882ebe80) at util/virthread.c:154
#2  0x00007f44b59e05d3 in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7b3ee0) at util/virthreadpool.c:124
#3  0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 17 (Thread 0x7f44a640f700 (LWP 49724)):
#0  0x00007f44934f7ad0 in qemuDomainChangeEjectableMedia (driver=driver@entry=0x7f448826ae30, vm=vm@entry=0x7f4488393200, disk=disk@entry=0x7f44882ed4b0, newsrc=0x7f448831d150, 
    force=force@entry=false) at qemu/qemu_hotplug.c:303
#1  0x00007f4493573063 in qemuDomainChangeDiskLive (force=false, driver=0x7f448826ae30, dev=<optimized out>, vm=<optimized out>, conn=<optimized out>) at qemu/qemu_driver.c:7850
#2  qemuDomainUpdateDeviceLive (dom=<optimized out>, force=false, dev=<optimized out>, vm=<optimized out>, conn=<optimized out>) at qemu/qemu_driver.c:7881
#3  qemuDomainUpdateDeviceFlags (dom=<optimized out>, xml=<optimized out>, flags=1) at qemu/qemu_driver.c:8591
#4  0x00007f44b5a9a007 in virDomainUpdateDeviceFlags (domain=domain@entry=0x7f4488320440, 
    xml=0x7f448831cfa0 " <disk type='file' device='cdrom'>\n      <driver name='qemu' type='qcow2'/>\n      <source file='/var/lib/libvirt/images/cdrom1.img'/>\n      <backingStore/>\n      <target dev='hdc' bus='ide'/>\n      <r"..., flags=0) at libvirt-domain.c:8338
#5  0x000055b78d19a36e in remoteDispatchDomainUpdateDeviceFlags (server=0x55b78e7b1f90, msg=0x55b78e7fdde0, args=0x7f448826cf70, rerr=0x7f44a640ec10, client=<optimized out>)
    at remote_dispatch.h:12339
#6  remoteDispatchDomainUpdateDeviceFlagsHelper (server=0x55b78e7b1f90, client=<optimized out>, msg=0x55b78e7fdde0, rerr=0x7f44a640ec10, args=0x7f448826cf70, ret=0x7f44883186f0)
    at remote_dispatch.h:12315
#7  0x00007f44b5b04b72 in virNetServerProgramDispatchCall (msg=0x55b78e7fdde0, client=0x55b78e7fdf70, server=0x55b78e7b1f90, prog=0x55b78e7fb030) at rpc/virnetserverprogram.c:437
#8  virNetServerProgramDispatch (prog=0x55b78e7fb030, server=server@entry=0x55b78e7b1f90, client=0x55b78e7fdf70, msg=0x55b78e7fdde0) at rpc/virnetserverprogram.c:307
#9  0x000055b78d1bac7d in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>, srv=0x55b78e7b1f90) at rpc/virnetserver.c:148
#10 virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x55b78e7b1f90) at rpc/virnetserver.c:169
#11 0x00007f44b59e0521 in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7a65e0) at util/virthreadpool.c:167
#12 0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#13 0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#14 0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 16 (Thread 0x7f44a5c0e700 (LWP 49725)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x55b78e7b20e8, m=m@entry=0x55b78e7b20c0) at util/virthread.c:154
#2  0x00007f44b59e05d3 in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7a6520) at util/virthreadpool.c:124
#3  0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 15 (Thread 0x7f44a540d700 (LWP 49726)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x55b78e7b20e8, m=m@entry=0x55b78e7b20c0) at util/virthread.c:154
#2  0x00007f44b59e05d3 in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7a6460) at util/virthreadpool.c:124
#3  0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 14 (Thread 0x7f44a4c0c700 (LWP 49727)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x55b78e7b20e8, m=m@entry=0x55b78e7b20c0) at util/virthread.c:154
#2  0x00007f44b59e05d3 in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7a63a0) at util/virthreadpool.c:124
#3  0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 13 (Thread 0x7f44a440b700 (LWP 49728)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x55b78e7b20e8, m=m@entry=0x55b78e7b20c0) at util/virthread.c:154
#2  0x00007f44b59e05d3 in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7a62e0) at util/virthreadpool.c:124
#3  0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 12 (Thread 0x7f44a3c0a700 (LWP 49729)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x55b78e7b2188, m=m@entry=0x55b78e7b20c0) at util/virthread.c:154
#2  0x00007f44b59e056b in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7a63a0) at util/virthreadpool.c:124
#3  0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 11 (Thread 0x7f44a3409700 (LWP 49730)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x55b78e7b2188, m=m@entry=0x55b78e7b20c0) at util/virthread.c:154
#2  0x00007f44b59e056b in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7a62e0) at util/virthreadpool.c:124
#3  0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 10 (Thread 0x7f44a2c08700 (LWP 49731)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x55b78e7b2188, m=m@entry=0x55b78e7b20c0) at util/virthread.c:154
#2  0x00007f44b59e056b in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7a63a0) at util/virthreadpool.c:124
#3  0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 9 (Thread 0x7f44a2407700 (LWP 49732)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x55b78e7b2188, m=m@entry=0x55b78e7b20c0) at util/virthread.c:154
#2  0x00007f44b59e056b in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7a62e0) at util/virthreadpool.c:124
#3  0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 8 (Thread 0x7f44a1c06700 (LWP 49733)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x55b78e7b2188, m=m@entry=0x55b78e7b20c0) at util/virthread.c:154
#2  0x00007f44b59e056b in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7a63a0) at util/virthreadpool.c:124
#3  0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 7 (Thread 0x7f4492fe5700 (LWP 49734)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x55b78e7fb238, m=m@entry=0x55b78e7fb210) at util/virthread.c:154
#2  0x00007f44b59e05d3 in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7fb340) at util/virthreadpool.c:124
#3  0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7f44927e4700 (LWP 49735)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x55b78e7fb238, m=m@entry=0x55b78e7fb210) at util/virthread.c:154
#2  0x00007f44b59e05d3 in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7fb6c0) at util/virthreadpool.c:124
#3  0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 5 (Thread 0x7f4491fe3700 (LWP 49736)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x55b78e7fb238, m=m@entry=0x55b78e7fb210) at util/virthread.c:154
#2  0x00007f44b59e05d3 in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7fba40) at util/virthreadpool.c:124
#3  0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 4 (Thread 0x7f44917e2700 (LWP 49737)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x55b78e7fb238, m=m@entry=0x55b78e7fb210) at util/virthread.c:154
#2  0x00007f44b59e05d3 in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7fbdc0) at util/virthreadpool.c:124
#3  0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7f4490fe1700 (LWP 49738)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x55b78e7fb238, m=m@entry=0x55b78e7fb210) at util/virthread.c:154
#2  0x00007f44b59e05d3 in virThreadPoolWorker (opaque=opaque@entry=0x55b78e7fba40) at util/virthreadpool.c:124
#3  0x00007f44b59df8a8 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7f448fdcc700 (LWP 49801)):
#0  0x00007f44b2def945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f44b59dfb16 in virCondWait (c=c@entry=0x7f448810faa0, m=m@entry=0x7f448810fa60) at util/virthread.c:154
#2  0x00007f4494073828 in udevEventHandleThread (opaque=<optimized out>) at node_device/node_device_udev.c:1729
#3  0x00007f44b59df8d2 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f44b2debe25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f44b2b161ad in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f44b66678c0 (LWP 49723)):
#0  0x00007f44b2b0b89d in poll () from /lib64/libc.so.6
#1  0x00007f44b59868c6 in poll (__timeout=4920, __nfds=12, __fds=<optimized out>) at /usr/include/bits/poll2.h:46
#2  virEventPollRunOnce () at util/vireventpoll.c:641
#3  0x00007f44b59853a2 in virEventRunDefaultImpl () at util/virevent.c:327
#4  0x00007f44b5afed0d in virNetDaemonRun (dmn=0x55b78e7b3d40) at rpc/virnetdaemon.c:837
#5  0x000055b78d17e0ae in main (argc=<optimized out>, argv=<optimized out>) at libvirtd.c:1494
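The crashing frame is qemuDomainChangeEjectableMedia (qemu_hotplug.c:303), and comment 8 below observes that the storage source private data is NULL after the first media change. A hypothetical Python model of that failure class, purely for illustration; none of these names are libvirt code:

```python
# Hypothetical model of the failure class only -- not libvirt code.
class DriverPrivateData:
    """Stand-in for per-driver state attached to a storage source."""
    def prepare(self):
        return True

class StorageSource:
    def __init__(self, path, private_data=None):
        self.path = path
        self.private_data = private_data  # may never have been allocated

def change_media_buggy(disk, newsrc):
    # Unconditional use of private_data: blows up (the analogue of the
    # segfault) when the new source's private data is missing.
    newsrc.private_data.prepare()
    disk['src'] = newsrc

def change_media_fixed(disk, newsrc):
    # Guarded pattern: (re)allocate missing private data before use.
    if newsrc.private_data is None:
        newsrc.private_data = DriverPrivateData()
    newsrc.private_data.prepare()
    disk['src'] = newsrc
```

The first `update-device` replaces the original source (which had its private data set up at guest start) with one that does not, so the second update hits the unguarded dereference.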

Comment 3 lijuan men 2017-11-08 06:03:52 UTC
There is another scenario, maybe the same issue:

1. Prepare a guest with the following cdrom xml:
<disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='sdc' bus='scsi'/>     --->no source file
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
    </disk>

2. Update the cdrom with the following xml:
[root@localhost ~]# cat cdrom.xml
<disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/RHEL-7.4-20170711.0-Server-x86_64-dvd1.iso'/>
      <target dev='sdc' bus='scsi'/>
      <readonly/>
    </disk>

[root@localhost ~]# virsh update-device test cdrom.xml
error: Disconnected from qemu:///system due to end of file
error: Failed to update device from cdrom.xml
error: End of file while reading data: Input/output error

NOTE: testing the above steps with a floppy gives the same result.

Comment 4 yisun 2017-11-08 06:32:03 UTC
Another case fails as well; recording it here for future reference.

1. Have a running vm with a cdrom pointing to /opt/b.iso
 ## virsh dumpxml v
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/opt/b.iso' startupPolicy='requisite'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>

2. Save the vm
 ## virsh save v v.save

Domain v saved to v.save

3. Remove or move the iso file
## mv /opt/b.iso /opt/b.iso.bkup

4. Restore the vm
## virsh restore v.save
error: Disconnected from qemu:///system due to end of file
error: Failed to restore domain from v.save
error: End of file while reading data: Input/output error
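For this save/restore case, the intended semantics of startupPolicy='requisite' are to drop the missing media on restore rather than fail, which is what the fixed build does in the later verification (comment 9). A simplified sketch modeling only the restore path; the function name and shape are invented, the real logic lives in libvirt's qemu driver:

```python
# Simplified model of startupPolicy handling on the *restore* path only.
# 'requisite' and 'optional' media that have disappeared are dropped --
# the guest just sees an empty tray -- while 'mandatory' (the default
# when no startupPolicy is given) makes the restore fail.
import os

def resolve_startup_policy(source_file, startup_policy='mandatory'):
    """Return the source path to use on restore, or None if it is dropped."""
    if source_file is None or os.path.exists(source_file):
        return source_file
    if startup_policy in ('optional', 'requisite'):
        # Drop the missing media instead of aborting the restore.
        return None
    raise FileNotFoundError(
        "Cannot access storage file '%s'" % source_file)
```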

Comment 8 jiyan 2017-11-16 10:22:24 UTC
Reproduced the bug in libvirt-3.9.0-1.el7.x86_64 via the following method, and checked that the storage source private data is NULL at the end.

1. Prepare 2 xml files as follows:
# cat boot.xml 
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/var/lib/libvirt/images/boot.iso'/>
<target dev='hda' bus='ide'/>
<readonly/>
<alias name='ide0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

# cat boot1.xml 
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/var/lib/libvirt/images/boot1.iso'/>
<target dev='hda' bus='ide'/>
<readonly/>
<alias name='ide0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

2. Prepare a shut-off vm named 'pc' and start it
# virsh dumpxml pc --inactive |grep "<disk" -A10
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

# virsh start pc
Domain pc started

# virsh dumpxml pc |grep "<disk" -A10
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

3. Update the cdrom via boot1.xml and check the dumpxml file info
# virsh update-device pc boot1.xml 
Device updated successfully

# virsh dumpxml pc |grep "<disk" -A10
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot1.iso'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

4. Update the cdrom via boot.xml and check the dumpxml file info again
# virsh update-device pc boot.xml 
error: Disconnected from qemu:///system due to end of file
error: Failed to update device from boot.xml
error: End of file while reading data: Input/output error

# virsh dumpxml pc |grep "<disk" -A10
    <disk type='file' device='cdrom'>
      <target dev='hda' bus='ide' tray='open'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

Comment 9 jiyan 2017-11-16 10:48:57 UTC
Test env components:
libvirt-3.9.0-2.virtcov.el7.x86_64
qemu-kvm-rhev-2.10.0-6.el7.x86_64
kernel-3.10.0-774.el7.x86_64

Test scenario:
Scenario-1: Update cdrom without src data via xml file with src data
1. Prepare the xml file as follows:
# cat cdrom.xml 
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='virtio-disk0'/>
    </disk>

2. Prepare a shut-off vm named 'pc', start it, and check the dumpxml file info
# virsh dumpxml pc --inactive |grep "<disk" -A10
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

# virsh start pc
Domain pc started

# virsh dumpxml pc |grep "<disk" -A10
    <disk type='file' device='cdrom'>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

3. Update the cdrom via the xml file and check the dumpxml file info
# virsh update-device pc cdrom.xml 
Device updated successfully

# virsh dumpxml pc |grep "<disk" -A10
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

Scenario-2: Update floppy without src data via xml file with src data
1. Prepare the xml file as follows:
# cat floppy.xml 
    <disk type='file' device='floppy'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso'/>
      <target dev='fda' bus='fdc'/>
      <readonly/>
      <alias name='fdc0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

2. Prepare a shut-off vm named 'pc', start it, and check the dumpxml file info
# virsh dumpxml pc --inactive |grep "<disk" -A10
    <disk type='file' device='floppy'>
      <driver name='qemu' type='raw'/>
      <target dev='fda' bus='fdc'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

# virsh start pc
Domain pc started

# virsh dumpxml pc |grep "<disk" -A10
    <disk type='file' device='floppy'>
      <target dev='fda' bus='fdc'/>
      <readonly/>
      <alias name='fdc0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

3. Update the floppy via the xml file and check the dumpxml file info
# virsh update-device pc floppy.xml 
Device updated successfully

# virsh dumpxml pc |grep "<disk" -A10
    <disk type='file' device='floppy'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso'/>
      <backingStore/>
      <target dev='fda' bus='fdc'/>
      <readonly/>
      <alias name='fdc0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

Scenario-3: Update cdrom/floppy twice
1. Prepare 2 xml files as follows:
# cat boot.xml 
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

# cat boot1.xml 
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot1.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

2. Prepare a shut-off vm named 'pc' and start it
# virsh dumpxml pc --inactive |grep "<disk" -A10
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

# virsh start pc
Domain pc started

# virsh dumpxml pc |grep "<disk" -A10
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

3. Update the cdrom via boot1.xml and check the dumpxml file info
# virsh update-device pc boot1.xml 
Device updated successfully

# virsh dumpxml pc |grep "<disk" -A10
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot1.iso'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

4. Update the cdrom via boot.xml and check the dumpxml file info again
# virsh update-device pc boot.xml 
Device updated successfully

# virsh dumpxml pc |grep "<disk" -A10
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso'/>
      <backingStore/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

Scenario-4: Restore vm with src data deleted
1. Prepare a shut-off vm named 'pc', start it, and check the dumpxml file info
# virsh dumpxml pc --inactive |grep "<disk" -A10
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso' startupPolicy='requisite'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

# virsh start pc
Domain pc started

# virsh dumpxml pc |grep "<disk" -A10
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso' startupPolicy='requisite'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

2. Save vm 
# virsh save pc pc.save
Domain pc saved to pc.save

3. Remove src data for cdrom and then restore vm
# mv /var/lib/libvirt/images/boot.iso /tmp/

# virsh restore pc.save 
Domain restored from pc.save

4. Check dumpxml file info of vm
# virsh list --all
 21    pc                             running

# virsh dumpxml pc |grep "<disk" -A10
    <disk type='file' device='cdrom'>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <alias name='ide0-0-0'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

# virsh dumpxml pc --inactive|grep "<disk" -A10
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso' startupPolicy='requisite'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

All the results are as expected; moving this bug to VERIFIED.

Comment 10 jiyan 2017-12-12 03:30:24 UTC
Hi Peter, please help check whether the following 2 issues are regressions. Thanks very much.

Version:
libvirt-3.9.0-5.el7.x86_64
kernel-3.10.0-820.el7.x86_64
qemu-kvm-rhev-2.10.0-12.el7.x86_64

Scenario1: Insert a source file for the cdrom device via 'virsh change-media'
# virsh domstate test
shut off

# virsh dumpxml test --inactive|grep "<disk" -A5
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>

# virsh start test
Domain test started

# virsh domstate test
running

# virsh change-media test hdc  --insert --current boot1.iso
error: Disconnected from qemu:///system due to end of file
error: Failed to complete action insert on media
error: End of file while reading data: Input/output error


Scenario2: Update the source file for the cdrom device via 'virsh change-media'
# virsh domstate test
shut off

# virsh dumpxml test --inactive|grep "<disk" -A6
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>

# virsh start test
Domain test started

# virsh change-media test hdc  --update --current boot1.iso
error: Disconnected from qemu:///system due to end of file
error: Failed to complete action update on media
error: End of file while reading data: Input/output error

Comment 11 Peter Krempa 2017-12-19 15:24:32 UTC
This looks like an instance of https://bugzilla.redhat.com/show_bug.cgi?id=1522682. Could you please re-try with the -6 package?

The error message reported with the upstream git version is:
error: Cannot access storage file 'boot1.iso': No such file or directory

which was modified not to contain the uid and gid in some cases due to the above bug.

Comment 12 jiyan 2018-01-08 02:36:05 UTC
Hi Peter, sorry for checking this issue so late; the detailed info follows.
The error info in libvirt-3.9.0-6.el7.x86_64 is the same as you described.

Version:
kernel-3.10.0-826.el7.x86_64
libvirt-3.9.0-6.el7.x86_64
qemu-kvm-rhev-2.10.0-15.el7.x86_64

Scenario1: Insert a source file for the cdrom device via 'virsh change-media'
# ll boot.iso boot1.iso 
-rw-r--r--. 1 root root 536870912 Jan  8 10:31 boot1.iso
-rw-r--r--. 1 qemu qemu 536870912 Dec 15 16:17 boot.iso

# virsh domstate test
shut off

# virsh dumpxml test |grep "<disk" -A5
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

# virsh start test
Domain test started

# virsh domstate test
running

# virsh change-media test hda --insert --current boot.iso 
error: Failed to complete action insert on media
error: Cannot access storage file 'boot.iso': No such file or directory


Scenario2: Update the source file for the cdrom device via 'virsh change-media'
# ll boot.iso boot1.iso 
-rw-r--r--. 1 root root 536870912 Jan  8 10:31 boot1.iso
-rw-r--r--. 1 qemu qemu 536870912 Dec 15 16:17 boot.iso

# virsh domstate test
shut off

# virsh dumpxml test |grep "<disk" -A6
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/boot.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>

# virsh start test
Domain test started

# virsh change-media test hda --update --current boot1.iso 
error: Failed to complete action update on media
error: Cannot access storage file 'boot1.iso': No such file or directory

Comment 13 Peter Krempa 2018-01-08 09:35:35 UTC
Yes, the last findings match the problem I described, so this should be okay now.

Comment 17 errata-xmlrpc 2018-04-10 10:59:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0704

