Bug 1076719
| Summary: | libvirtd crashes if VM crashes or is destroyed while hot-attaching disks | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Eric Blake <eblake> |
| Component: | libvirt | Assignee: | Peter Krempa <pkrempa> |
| Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 6.2 | CC: | ajia, dyuan, eblake, jdenemar, jherrman, mzhan, rbalakri, shyu, tdosek |
| Target Milestone: | rc | Keywords: | ZStream |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | libvirt-0.10.2-30.el6 | Doc Type: | Bug Fix |
| Doc Text: | Prior to this update, there was a typographical error in a condition that checks whether QEMU successfully attached a new disk to a guest. Due to the error, the libvirtd daemon terminated unexpectedly if the monitor command was unsuccessful; for instance, in case of a virtual machine failure or when attaching a guest disk drive was interrupted. In this update, the error has been corrected, and libvirtd no longer crashes in the described circumstances. | Story Points: | --- |
| Clone Of: | 1075973 | Environment: | |
| Last Closed: | 2014-10-14 04:20:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 573946, 1026966, 1075973 | | |
| Bug Blocks: | 1080471 | | |
Description
Eric Blake
2014-03-14 21:44:05 UTC
Verified with packages:
libvirt-0.10.2-31.el6.x86_64
qemu-kvm-0.12.1.2-2.423.el6.x86_64
Test steps:
1. Create a guest with gluster volume
# virsh create r6-qcow2-gluster.xml
Domain r6-qcow2 created from r6-qcow2-gluster.xml
# virsh dumpxml r6-qcow2| grep disk -A 7
<disk type='network' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source protocol='gluster' name='gluster-vol1/rhel6-qcow2-disk.img'>
<host name='10.66.106.25' port='24007'/>
</source>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
2. Try to attach a gluster volume that is not available, and press Ctrl+C to interrupt the "virsh attach-device" process. The VM is still running; then destroy the VM successfully. No libvirtd crash.
# more disk-gluster-vol.xml
<disk type='network' device='disk'>
<driver name='qemu' type='qcow2'/>
<source protocol='gluster' name='gluster-vol1/test.img'>
<host name='10.66.106.24' port='24007'/>
</source>
<target dev='vdb' bus='virtio'/>
</disk>
# virsh attach-device r6-qcow2 disk-gluster-vol.xml
^C
# virsh list --all
Id Name State
----------------------------------------------------
5 r6-qcow2 running
# virsh destroy r6-qcow2
Domain r6-qcow2 destroyed
# virsh list --all
Id Name State
----------------------------------------------------
# service libvirtd status
libvirtd (pid 2445) is running...
3. Try to attach a gluster volume that is not available; a QEMU error is returned.
The VM is still running; no libvirtd crash.
# more disk-gluster-vol.xml
<disk type='network' device='disk'>
<driver name='qemu' type='qcow2'/>
<source protocol='gluster' name='gluster-vol1/test.img'>
<host name='10.66.106.24' port='24007'/>
</source>
<target dev='vdb' bus='virtio'/>
</disk>
# virsh attach-device r6-qcow2 disk-gluster-vol.xml
error: Failed to attach device from disk-gluster-vol.xml
error: internal error unable to execute QEMU command '__com.redhat_drive_add': Device 'drive-virtio-disk1' could not be initialized
# virsh list --all
Id Name State
----------------------------------------------------
6 r6-qcow2 running
# service libvirtd status
libvirtd (pid 2445) is running...
4. Attach/detach another available gluster volume successfully.
# more disk-gluster-vol.xml
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<source protocol='gluster' name='gluster-vol1/exist.img'>
<host name='10.66.106.25' port='24007'/>
</source>
<target dev='vdb' bus='virtio'/>
</disk>
# virsh attach-device r6-qcow2 disk-gluster-vol.xml
Device attached successfully
# virsh detach-disk r6-qcow2 vdb
Disk detached successfully
# virsh list --all
Id Name State
----------------------------------------------------
6 r6-qcow2 running
# service libvirtd status
libvirtd (pid 2445) is running...
Test results:
The current commands work well.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.
http://rhn.redhat.com/errata/RHBA-2014-1374.html