Bug 1142722 - libvirtd crashes while destroying a guest with a block disk
Summary: libvirtd crashes while destroying a guest with a block disk
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Assignee: Ján Tomko
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-09-17 09:34 UTC by Xuesong Zhang
Modified: 2015-03-05 07:44 UTC (History)
5 users

Fixed In Version: libvirt-1.2.8-3.el7
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-05 07:44:44 UTC
Target Upstream Version:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2015:0323 0 normal SHIPPED_LIVE Low: libvirt security, bug fix, and enhancement update 2015-03-05 12:10:54 UTC

Description Xuesong Zhang 2014-09-17 09:34:16 UTC
Description
The libvirtd daemon crashes while destroying a guest that has a block disk attached.

Version:
libvirt-1.2.8-2.el7.x86_64
qemu-kvm-rhev-2.1.0-3.el7.x86_64
kernel-3.10.0-158.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Add the following block disk to the guest:
<disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/sdb1'/>
      <target dev='vdb' bus='virtio'/>
      <shareable/>
    </disk>

2. Start the guest:
# virsh start test
Domain test started

3. Destroy the guest:
# virsh destroy test
error: Failed to destroy domain test
error: End of file while reading data: Input/output error
error: Failed to reconnect to the hypervisor



Actual results:
As in step 3: the destroy fails and the connection to libvirtd is lost.

Expected results:
The guest should be destroyed and libvirtd should not crash.

Additional info:
The backtrace (bt) follows:
(gdb) c
Continuing.
Detaching after fork from child process 23637.

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7f720af5c700 (LWP 22606)]
qemuSharedDeviceEntryDomainExists (entry=entry@entry=0x7f71ec008850, name=name@entry=0x7f72001fa310 "test", idx=idx@entry=0x7f720af5b0a4)
    at qemu/qemu_conf.c:981
981	        if (STREQ(entry->domains[i], name)) {
(gdb) t a a bt

Thread 11 (Thread 0x7f720f765700 (LWP 22597)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f721e276d06 in virCondWait (c=c@entry=0x7f721f697350, m=m@entry=0x7f721f697328) at util/virthread.c:153
#2  0x00007f721e2771bb in virThreadPoolWorker (opaque=opaque@entry=0x7f721f629510) at util/virthreadpool.c:104
#3  0x00007f721e276abe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f721bb3adf3 in start_thread (arg=0x7f720f765700) at pthread_create.c:308
#5  0x00007f721b4613dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 10 (Thread 0x7f720ef64700 (LWP 22598)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f721e276d06 in virCondWait (c=c@entry=0x7f721f697350, m=m@entry=0x7f721f697328) at util/virthread.c:153
#2  0x00007f721e2771bb in virThreadPoolWorker (opaque=opaque@entry=0x7f721f629300) at util/virthreadpool.c:104
#3  0x00007f721e276abe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f721bb3adf3 in start_thread (arg=0x7f720ef64700) at pthread_create.c:308
#5  0x00007f721b4613dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 9 (Thread 0x7f720e763700 (LWP 22599)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f721e276d06 in virCondWait (c=c@entry=0x7f721f697350, m=m@entry=0x7f721f697328) at util/virthread.c:153
#2  0x00007f721e2771bb in virThreadPoolWorker (opaque=opaque@entry=0x7f721f629510) at util/virthreadpool.c:104
#3  0x00007f721e276abe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f721bb3adf3 in start_thread (arg=0x7f720e763700) at pthread_create.c:308
#5  0x00007f721b4613dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 8 (Thread 0x7f720df62700 (LWP 22600)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f721e276d06 in virCondWait (c=c@entry=0x7f721f697350, m=m@entry=0x7f721f697328) at util/virthread.c:153
#2  0x00007f721e2771bb in virThreadPoolWorker (opaque=opaque@entry=0x7f721f629300) at util/virthreadpool.c:104
#3  0x00007f721e276abe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f721bb3adf3 in start_thread (arg=0x7f720df62700) at pthread_create.c:308
#5  0x00007f721b4613dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 7 (Thread 0x7f720d761700 (LWP 22601)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f721e276d06 in virCondWait (c=c@entry=0x7f721f697350, m=m@entry=0x7f721f697328) at util/virthread.c:153
#2  0x00007f721e2771bb in virThreadPoolWorker (opaque=opaque@entry=0x7f721f629510) at util/virthreadpool.c:104
#3  0x00007f721e276abe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f721bb3adf3 in start_thread (arg=0x7f720d761700) at pthread_create.c:308
#5  0x00007f721b4613dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 6 (Thread 0x7f720cf60700 (LWP 22602)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f721e276d06 in virCondWait (c=c@entry=0x7f721f6973e8, m=m@entry=0x7f721f697328) at util/virthread.c:153
#2  0x00007f721e2771db in virThreadPoolWorker (opaque=opaque@entry=0x7f721f629300) at util/virthreadpool.c:104
#3  0x00007f721e276abe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f721bb3adf3 in start_thread (arg=0x7f720cf60700) at pthread_create.c:308
#5  0x00007f721b4613dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 5 (Thread 0x7f720c75f700 (LWP 22603)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f721e276d06 in virCondWait (c=c@entry=0x7f721f6973e8, m=m@entry=0x7f721f697328) at util/virthread.c:153
#2  0x00007f721e2771db in virThreadPoolWorker (opaque=opaque@entry=0x7f721f629510) at util/virthreadpool.c:104
#3  0x00007f721e276abe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f721bb3adf3 in start_thread (arg=0x7f720c75f700) at pthread_create.c:308
#5  0x00007f721b4613dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 4 (Thread 0x7f720bf5e700 (LWP 22604)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f721e276d06 in virCondWait (c=c@entry=0x7f721f6973e8, m=m@entry=0x7f721f697328) at util/virthread.c:153
#2  0x00007f721e2771db in virThreadPoolWorker (opaque=opaque@entry=0x7f721f629300) at util/virthreadpool.c:104
#3  0x00007f721e276abe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f721bb3adf3 in start_thread (arg=0x7f720bf5e700) at pthread_create.c:308
#5  0x00007f721b4613dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 3 (Thread 0x7f720b75d700 (LWP 22605)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1  0x00007f721e276d06 in virCondWait (c=c@entry=0x7f721f6973e8, m=m@entry=0x7f721f697328) at util/virthread.c:153
#2  0x00007f721e2771db in virThreadPoolWorker (opaque=opaque@entry=0x7f721f629510) at util/virthreadpool.c:104
#3  0x00007f721e276abe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f721bb3adf3 in start_thread (arg=0x7f720b75d700) at pthread_create.c:308
#5  0x00007f721b4613dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 2 (Thread 0x7f720af5c700 (LWP 22606)):
#0  qemuSharedDeviceEntryDomainExists (entry=entry@entry=0x7f71ec008850, name=name@entry=0x7f72001fa310 "test", idx=idx@entry=0x7f720af5b0a4)
    at qemu/qemu_conf.c:981
#1  0x00007f7207749eb6 in qemuSharedDeviceEntryRemove (key=0x7f71e00009e0 "8:17", name=name@entry=0x7f72001fa310 "test", driver=0x7f72000e5e00)
    at qemu/qemu_conf.c:1155
#2  0x00007f720774a544 in qemuRemoveSharedDisk (driver=0x7f72000e5e00, disk=<optimized out>, name=0x7f72001fa310 "test") at qemu/qemu_conf.c:1210
#3  0x00007f720774a639 in qemuRemoveSharedDevice (driver=driver@entry=0x7f72000e5e00, dev=dev@entry=0x7f720af5b1d0, name=<optimized out>)
    at qemu/qemu_conf.c:1261
#4  0x00007f72077508a1 in qemuProcessStop (driver=driver@entry=0x7f72000e5e00, vm=vm@entry=0x7f72001e0cf0, 
    reason=reason@entry=VIR_DOMAIN_SHUTOFF_DESTROYED, flags=flags@entry=0) at qemu/qemu_process.c:4722
#5  0x00007f7207796480 in qemuDomainDestroyFlags (dom=0x7f71e0000930, flags=<optimized out>) at qemu/qemu_driver.c:2159
#6  0x00007f721e2fa4fc in virDomainDestroy (domain=domain@entry=0x7f71e0000930) at libvirt.c:2201
#7  0x00007f721eda031c in remoteDispatchDomainDestroy (server=<optimized out>, msg=<optimized out>, args=<optimized out>, rerr=0x7f720af5bc80, 
    client=0x7f721f6a6010) at remote_dispatch.h:3384
#8  remoteDispatchDomainDestroyHelper (server=<optimized out>, client=0x7f721f6a6010, msg=<optimized out>, rerr=0x7f720af5bc80, 
    args=<optimized out>, ret=<optimized out>) at remote_dispatch.h:3362
#9  0x00007f721e370572 in virNetServerProgramDispatchCall (msg=0x7f721f6a62d0, client=0x7f721f6a6010, server=0x7f721f6971d0, prog=0x7f721f6a3400)
    at rpc/virnetserverprogram.c:437
#10 virNetServerProgramDispatch (prog=0x7f721f6a3400, server=server@entry=0x7f721f6971d0, client=0x7f721f6a6010, msg=0x7f721f6a62d0)
    at rpc/virnetserverprogram.c:307
#11 0x00007f721edae0ad in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>, srv=0x7f721f6971d0)
    at rpc/virnetserver.c:172
#12 virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x7f721f6971d0) at rpc/virnetserver.c:193
#13 0x00007f721e277125 in virThreadPoolWorker (opaque=opaque@entry=0x7f721f629300) at util/virthreadpool.c:145
#14 0x00007f721e276abe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#15 0x00007f721bb3adf3 in start_thread (arg=0x7f720af5c700) at pthread_create.c:308
#16 0x00007f721b4613dd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Thread 1 (Thread 0x7f721ed34880 (LWP 22596)):
#0  __lll_lock_wait () at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x00007f721bb3cd41 in _L_lock_790 () from /lib64/libpthread.so.0
#2  0x00007f721bb3cc47 in __GI___pthread_mutex_lock (mutex=mutex@entry=0x7f72001e0d00) at pthread_mutex_lock.c:64
#3  0x00007f721e276c25 in virMutexLock (m=m@entry=0x7f72001e0d00) at util/virthread.c:88
#4  0x00007f721e2611ce in virObjectLock (anyobj=anyobj@entry=0x7f72001e0cf0) at util/virobject.c:323
#5  0x00007f720774c3dc in qemuProcessHandleEvent (mon=<optimized out>, vm=0x7f72001e0cf0, eventName=0x7f721f626060 "SHUTDOWN", seconds=1410946076, 
    micros=501040, details=0x0, opaque=0x7f72000e5e00) at qemu/qemu_process.c:669
#6  0x00007f7207763a2e in qemuMonitorEmitEvent (mon=mon@entry=0x7f71ec008c80, event=event@entry=0x7f721f626060 "SHUTDOWN", seconds=1410946076, 
    micros=501040, details=0x0) at qemu/qemu_monitor.c:1192
#7  0x00007f7207774a01 in qemuMonitorJSONIOProcessEvent (obj=0x7f721f624930, mon=0x7f71ec008c80) at qemu/qemu_monitor_json.c:158
#8  qemuMonitorJSONIOProcessLine (msg=0x0, line=<optimized out>, mon=0x7f71ec008c80) at qemu/qemu_monitor_json.c:195
#9  qemuMonitorJSONIOProcess (mon=mon@entry=0x7f71ec008c80, 
    data=0x7f721f6a6a80 "{\"timestamp\": {\"seconds\": 1410946076, \"microseconds\": 501040}, \"event\": \"SHUTDOWN\"}\r\n", len=85, 
    msg=msg@entry=0x0) at qemu/qemu_monitor_json.c:237
#10 0x00007f72077624ad in qemuMonitorIOProcess (mon=0x7f71ec008c80) at qemu/qemu_monitor.c:402
#11 qemuMonitorIO (watch=watch@entry=12, fd=<optimized out>, events=0, events@entry=1, opaque=opaque@entry=0x7f71ec008c80) at qemu/qemu_monitor.c:657
#12 0x00007f721e23d27a in virEventPollDispatchHandles (fds=<optimized out>, nfds=<optimized out>) at util/vireventpoll.c:510
#13 virEventPollRunOnce () at util/vireventpoll.c:660
#14 0x00007f721e23b962 in virEventRunDefaultImpl () at util/virevent.c:308
#15 0x00007f721edaf55d in virNetServerRun (srv=srv@entry=0x7f721f6971d0) at rpc/virnetserver.c:1139
#16 0x00007f721ed7c567 in main (argc=<optimized out>, argv=<optimized out>) at libvirtd.c:1540

Comment 1 Ján Tomko 2014-09-17 10:45:40 UTC
Upstream patch:
https://www.redhat.com/archives/libvir-list/2014-September/msg01065.html

Comment 2 Ján Tomko 2014-09-18 07:11:17 UTC
Now pushed upstream:
commit 540ee872494316ef5bfc17ef3dd4338080c3e513
Author:     Ján Tomko <jtomko@redhat.com>
CommitDate: 2014-09-18 09:05:21 +0200

    qemu: fix crash with shared disks
    
    Commit f36a94f introduced a double free on all success paths
    in qemuSharedDeviceEntryInsert.
    
    Only call qemuSharedDeviceEntryFree on the error path and
    set entry to NULL before jumping there if the entry already
    is in the hash table.
    
    https://bugzilla.redhat.com/show_bug.cgi?id=1142722

git describe: v1.2.8-192-g540ee87

Comment 5 Xuesong Zhang 2014-12-19 10:33:26 UTC
Verify this bug with the following package version:
libvirt-1.2.8-11.el7.x86_64
qemu-kvm-rhev-2.1.2-17.el7.x86_64
kernel-3.10.0-219.el7.x86_64

Steps:
1. Add the following block disk, with the shareable option, to the guest:
<disk type='block' device='disk'>
      <driver name='qemu' type='raw'/>
      <source dev='/dev/sdb'/>
      <target dev='vdb' bus='virtio'/>
      <shareable/>
    </disk>

2. Start the guest:
# virsh start rhel7.1
Domain rhel7.1 started

3. Log in to the guest and make sure the disk is working well inside the guest.

4. Destroy the guest:
# virsh destroy rhel7.1
Domain rhel7.1 destroyed

5. Check the libvirtd service status; it is working well, with no crash:
# service libvirtd status
Redirecting to /bin/systemctl status  libvirtd.service
libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: active (running) since Fri 2014-12-19 13:20:13 CST; 5h 8min ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 1407 (libvirtd)
   CGroup: /system.slice/libvirtd.service
           ├─1407 /usr/sbin/libvirtd
           ├─3752 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default....
           └─3753 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default....

Dec 19 18:18:07 localhost.localdomain libvirtd[1407]: ignore NIC_RX_FILTER_CH...
Dec 19 18:18:11 localhost.localdomain dnsmasq-dhcp[3752]: DHCPDISCOVER(virbr0...
Dec 19 18:18:11 localhost.localdomain dnsmasq-dhcp[3752]: DHCPOFFER(virbr0) 1...
Dec 19 18:18:11 localhost.localdomain dnsmasq-dhcp[3752]: DHCPREQUEST(virbr0)...
Dec 19 18:18:11 localhost.localdomain dnsmasq-dhcp[3752]: DHCPACK(virbr0) 192...
Dec 19 18:20:57 localhost.localdomain libvirtd[1407]: ignore NIC_RX_FILTER_CH...
Dec 19 18:21:01 localhost.localdomain dnsmasq-dhcp[3752]: DHCPDISCOVER(virbr0...
Dec 19 18:21:01 localhost.localdomain dnsmasq-dhcp[3752]: DHCPOFFER(virbr0) 1...
Dec 19 18:21:01 localhost.localdomain dnsmasq-dhcp[3752]: DHCPREQUEST(virbr0)...
Dec 19 18:21:01 localhost.localdomain dnsmasq-dhcp[3752]: DHCPACK(virbr0) 192...
Hint: Some lines were ellipsized, use -l to show in full.


So, changing the bug status to VERIFIED.

Comment 7 errata-xmlrpc 2015-03-05 07:44:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-0323.html

