Description of problem:
libvirtd crashes when migrating ~20 domains via vdsm (on the source host).

Backtrace:
#0  __pthread_mutex_lock (mutex=0x7f8314002990) at pthread_mutex_lock.c:51
#1  0x000000000047a665 in qemuDomainObjEnterMonitorInternal (driver=0x7f8328021000, driver_locked=true, obj=0x7f8314002990, asyncJob=<optimized out>) at qemu/qemu_domain.c:913
#2  0x00000000004508ad in qemuDomainMonitorCommand (domain=<optimized out>, cmd=0x7f83280f6e00 "{\"execute\": \"query-blockstats\"}", result=0x7f83280e4fc0, flags=0) at qemu/qemu_driver.c:10020
#3  0x00007f83395baa39 in virDomainQemuMonitorCommand (domain=0x7f83280f65b0, cmd=0x7f83280f6e00 "{\"execute\": \"query-blockstats\"}", result=0x7f83280e4fc0, flags=0) at libvirt-qemu.c:102
#4  0x0000000000421ecf in qemuDispatchMonitorCommand (ret=0x7f83280e4fc0, args=0x7f83280f6dc0, rerr=0x7f8332ce3c70, hdr=<optimized out>, client=<optimized out>, server=<optimized out>) at remote.c:2843
#5  qemuDispatchMonitorCommandHelper (server=<optimized out>, client=<optimized out>, hdr=<optimized out>, rerr=0x7f8332ce3c70, args=0x7f83280f6dc0, ret=0x7f83280e4fc0) at qemu_dispatch.h:72
#6  0x000000000043f43e in virNetServerProgramDispatchCall (msg=0x18f5830, client=0x18e3c60, server=0x187c560, prog=0x1886090) at rpc/virnetserverprogram.c:401
#7  virNetServerProgramDispatch (prog=0x1886090, server=0x187c560, client=0x18e3c60, msg=0x18f5830) at rpc/virnetserverprogram.c:287
#8  0x00000000004416ac in virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x187c560) at rpc/virnetserver.c:136
#9  0x00007f833918f12e in virThreadPoolWorker (opaque=<optimized out>) at util/threadpool.c:144
#10 0x00007f833918ebe2 in virThreadHelper (data=<optimized out>) at util/threads-pthread.c:157
#11 0x0000003b57e07d90 in start_thread (arg=0x7f8332ce4700) at pthread_create.c:309
#12 0x0000003b576ef48d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:115

libvirtd.log:
2012-01-30 18:26:32.285+0000: 5659: debug : virEventRunDefaultImpl:244 : running default event implementation
2012-01-30 18:26:32.285+0000: 5664: debug : virEventPollRemoveHandle:186 : mark delete 38 72
2012-01-30 18:26:32.285+0000: 5664: debug : virEventPollInterruptLocked:678 : Skip interrupt, 0 957384768
2012-01-30 18:26:32.285+0000: 5664: debug : virNetClientFree:264 : client=0x7f831c0e2d20 refs=2
2012-01-30 18:26:32.285+0000: 5659: debug : virEventPollCleanupTimeouts:488 : Cleanup 8
2012-01-30 18:26:32.285+0000: 5659: debug : virEventPollCleanupHandles:535 : Cleanup 41
2012-01-30 18:26:32.285+0000: 5664: debug : virEventPollRemoveTimeout:279 : Remove timer 40
2012-01-30 18:26:32.285+0000: 5664: debug : virEventPollInterruptLocked:682 : Interrupting
2012-01-30 18:26:32.285+0000: 5659: debug : virNetClientFree:264 : client=0x7f831c0e2d20 refs=1
2012-01-30 18:26:32.285+0000: 5664: debug : virDomainObjUnref:1289 : obj=0x7f8314002990 refs=3
2012-01-30 18:26:32.285+0000: 5659: debug : virNetSocketFree:670 : sock=0x7f831c188f00 fd=72
2012-01-30 18:26:32.285+0000: 5659: debug : virEventPollRemoveHandle:173 : Remove handle w=88
2012-01-30 18:26:32.285+0000: 5664: debug : virDomainObjUnref:1289 : obj=0x7f8314002990 refs=2
2012-01-30 18:26:32.285+0000: 5664: debug : virDomainObjUnref:1289 : obj=0x7f8314002990 refs=1
2012-01-30 18:26:32.285+0000: 5659: debug : virEventPollMakePollFDs:369 : Prepare n=0 w=1, f=7 e=1 d=0
2012-01-30 18:26:32.285+0000: 5659: debug : virEventPollMakePollFDs:369 : Prepare n=1 w=2, f=9 e=1 d=0
2012-01-30 18:26:32.285+0000: 5659: debug : virEventPollMakePollFDs:369 : Prepare n=2 w=3, f=12 e=1 d=0
2012-01-30 18:26:32.285+0000: 5659: debug : virEventPollMakePollFDs:369 : Prepare n=3 w=4, f=13 e=1 d=0
2012-01-30 18:26:32.285+0000: 5659: debug : virEventPollMakePollFDs:369 : Prepare n=4 w=5, f=14 e=1 d=0
2012-01-30 18:26:32.285+0000: 5664: debug : virDomainFree:2169 : dom=0x7f831c0ccf70, (VM: name=test-024, uuid=8d6f5443-a9d6-4b2c-920c-478ee0075da2)
2012-01-30 18:26:32.285+0000: 5659: debug : virEventPollMakePollFDs:369 : Prepare n=5 w=6, f=6 e=1 d=0
2012-01-30 18:26:32.285+0000: 5664: debug : virUnrefDomain:276 : unref domain 0x7f831c0ccf70 test-024 1
====== end of log =====

Version-Release number of selected component (if applicable):
libvirt-0.9.6-4.fc16.x86_64
vdsm-4.9.3.2-0.fc16.x86_64

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Attached full libvirtd.log and full backtrace.
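For context, frames #2-#3 of the backtrace are the public virDomainQemuMonitorCommand() API from libvirt-qemu.h, which vdsm uses here to poll block statistics through the qemu monitor. A minimal sketch of such a caller follows; the qemu:///system URI is an assumption, the domain name test-024 is taken from the log above (build with gcc -o blockstats blockstats.c -lvirt -lvirt-qemu):

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>
#include <libvirt/libvirt-qemu.h>

int main(void)
{
    char *result = NULL;

    /* Connect to the local qemu driver (URI is an assumption). */
    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return EXIT_FAILURE;

    /* "test-024" is the domain name that appears in the log above. */
    virDomainPtr dom = virDomainLookupByName(conn, "test-024");
    if (!dom) {
        virConnectClose(conn);
        return EXIT_FAILURE;
    }

    /* Same JSON monitor command as in frame #2 of the backtrace. */
    if (virDomainQemuMonitorCommand(dom, "{\"execute\": \"query-blockstats\"}",
                                    &result, 0) == 0) {
        printf("%s\n", result);
        free(result);  /* result is allocated by libvirt; caller frees it */
    }

    virDomainFree(dom);
    virConnectClose(conn);
    return EXIT_SUCCESS;
}

Each such RPC is dispatched on a libvirtd worker thread (frames #4-#9), so it can run concurrently with the migration code path that is tearing down the same domain object.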
Created attachment 558391: logs
This bug is likely fixed by the following upstream commit (included in 0.9.7):

commit 9bc9999b6eb815268798120d7fe8834d822f098d
Author: Michal Privoznik <mprivozn>
Date:   Tue Oct 11 10:40:36 2011 +0200

    qemu: Check for domain being active on successful job acquire

    Although some functions check for the domain being active before
    obtaining a job, we need to check it again afterwards, because
    obtaining the job unlocks the domain object, during which time the
    state of the domain can change.
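To illustrate why the re-check matters, here is a minimal standalone sketch of that locking pattern. The Domain type and the begin/end-job helpers are hypothetical stand-ins for libvirt's virDomainObjPtr, qemuDomainObjBeginJob() and virDomainObjIsActive(); this is an illustration of the pattern the commit enforces, not libvirt's actual code:

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    pthread_mutex_t lock;      /* per-domain lock */
    pthread_cond_t  job_cond;  /* signalled when the job slot frees up */
    bool            job_busy;  /* another thread currently owns the job */
    bool            active;    /* "domain is running" */
} Domain;

/* Acquire the domain's job slot. NOTE: pthread_cond_wait() releases
 * dom->lock while waiting, so the domain's state may have changed by
 * the time this returns. */
static void domain_begin_job(Domain *dom)
{
    while (dom->job_busy)
        pthread_cond_wait(&dom->job_cond, &dom->lock);
    dom->job_busy = true;
}

static void domain_end_job(Domain *dom)
{
    dom->job_busy = false;
    pthread_cond_broadcast(&dom->job_cond);
}

/* A monitor-command handler following the fixed pattern. */
static int domain_monitor_command(Domain *dom, const char *cmd)
{
    int ret = -1;

    pthread_mutex_lock(&dom->lock);

    if (!dom->active)           /* pre-check: cheap early exit */
        goto cleanup;

    domain_begin_job(dom);      /* the lock was dropped in here ... */

    if (!dom->active) {         /* ... so check again; this second
                                 * check is what the commit adds */
        fprintf(stderr, "domain is not running\n");
        goto endjob;
    }

    printf("sending: %s\n", cmd);  /* safe: domain is still active */
    ret = 0;

endjob:
    domain_end_job(dom);
cleanup:
    pthread_mutex_unlock(&dom->lock);
    return ret;
}

int main(void)
{
    Domain dom = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,
                   false, true };
    return domain_monitor_command(&dom, "{\"execute\": \"query-blockstats\"}")
           == 0 ? 0 : 1;
}

Without the second check, a monitor-command worker can wake up holding a job for a domain that a concurrent migration has already cleaned up. That is consistent with the log above, where obj=0x7f8314002990, the same pointer whose mutex is being locked in frame #0 of the backtrace, is unref'd down to refs=1 just before the crash.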
I'm marking this as POST, so that the above patch can be included in the next F16 libvirt build.
libvirt-0.9.6-5.fc16 has been submitted as an update for Fedora 16. https://admin.fedoraproject.org/updates/libvirt-0.9.6-5.fc16
Package libvirt-0.9.6-5.fc16:
* should fix your issue,
* was pushed to the Fedora 16 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing libvirt-0.9.6-5.fc16'
as soon as you are able to. Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2012-3067/libvirt-0.9.6-5.fc16
then log in and leave karma (feedback).
libvirt-0.9.6-5.fc16 has been pushed to the Fedora 16 stable repository. If problems persist, please make a note of them in this bug report.