Bug 722730 - libvirtd segfaults at pthread_mutex_lock when qemu monitor command is run while guest is shutting down.
Summary: libvirtd segfaults at pthread_mutex_lock when qemu monitor command is run while guest is shutting down
Keywords:
Status: CLOSED DUPLICATE of bug 697762
Alias: None
Product: Virtualization Tools
Classification: Community
Component: libvirt
Version: unspecified
Hardware: x86_64
OS: Linux
Target Milestone: ---
Assignee: Libvirt Maintainers
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-07-17 06:41 UTC by motohiro.kanda.nx
Modified: 2011-07-26 13:47 UTC
5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2011-07-26 13:47:35 UTC



Description motohiro.kanda.nx 2011-07-17 06:41:28 UTC
Description of problem:
libvirtd segfaults at pthread_mutex_lock called from qemuDomainObjEnterMonitorWithDriver when virsh qemu-monitor-command is run while the guest is shutting down.

Version-Release number of selected component (if applicable):
0.9.2

How reproducible:
Once in 10 tries. 

Steps to Reproduce:
1. virsh start domain

2. Run the following loop (the monitor command can be "info version" or something else; I used the HMP memory peek "x", e.g. "x/8xb 0xffffffff805e3010" as seen in the log below):
while true
do virsh qemu-monitor-command domain "x/8xb 0xffffffff805e3010"
done

3. virsh shutdown domain
  
Actual results:
libvirtd crashes.

Expected results:


Additional info:
I added an assertion at qemu/qemu_domain.c:663 in qemuDomainObjEnterMonitorWithDriver:

    assert(priv && priv->mon);

and a VIR_ERROR at qemu_process.c:664 in qemuProcessHandleMonitorDestroy:

    if (priv->mon == mon) {
        VIR_ERROR("mon=%p priv->mon is now NULL.", mon);
        priv->mon = NULL;
    }



And here is the log
$ sbin/libvirtd
14:36:12.292: 15888: info : libvirt version: 0.9.2
...
14:38:31.566: 15889: error : qemuMonitorIORead:486 : Unable to read from monitor: Connection reset by peer
14:38:31.566: 15892: error : qemuMonitorTextArbitraryCommand:2686 : operation failed: failed to run cmd 'x/8xb 0xffffffff805e3010'
14:38:31.605: 15889: error : qemuProcessHandleMonitorDestroy:664 : mon=0x20cf860 priv->mon is now NULL.
libvirtd: qemu/qemu_domain.c:663: qemuDomainObjEnterMonitorWithDriver: Assertion `priv && priv->mon' failed.
Caught abort signal dumping internal log buffer:


    ====== start of log =====
...
14:38:31.566: 15889: error : qemuMonitorIORead:486 : Unable to read from monitor: Connection reset by peer
14:38:31.566: 15889: debug : qemuMonitorIO:609 : Error on monitor Unable to read from monitor: Connection reset by peer
14:38:31.566: 15889: debug : virEventPollUpdateHandle:144 : Update handle w=8 e=12
14:38:31.566: 15889: debug : virEventPollInterruptLocked:686 : Skip interrupt, 1 1111071056
14:38:31.566: 15889: debug : qemuMonitorIO:643 : Triggering error callback
14:38:31.566: 15889: debug : qemuProcessHandleMonitorError:164 : Received error on 0x7faf4c060710 'kanda2'
14:38:31.566: 15889: debug : virEventPollUpdateTimeout:244 : Updating timer 1 timeout with 0 ms freq
14:38:31.566: 15889: debug : virEventPollInterruptLocked:686 : Skip interrupt, 1 1111071056
14:38:31.566: 15889: debug : virEventPollDispatchHandles:467 : i=8 w=15
14:38:31.566: 15889: debug : virEventPollDispatchHandles:467 : i=9 w=865
14:38:31.566: 15889: debug : virEventPollDispatchHandles:467 : i=10 w=1475
14:38:31.566: 15889: debug : virEventPollDispatchHandles:467 : i=11 w=1476
14:38:31.566: 15889: debug : virEventPollCleanupTimeouts:498 : Cleanup 1
14:38:31.566: 15889: debug : virEventPollCleanupHandles:545 : Cleanup 12
14:38:31.566: 15889: debug : virEventRunDefaultImpl:188 : running default event implementation
14:38:31.566: 15889: debug : virEventPollCleanupTimeouts:498 : Cleanup 1
14:38:31.566: 15889: debug : virEventPollCleanupHandles:545 : Cleanup 12
14:38:31.566: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=0 w=1, f=4 e=1 d=0
14:38:31.566: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=1 w=2, f=6 e=1 d=0
14:38:31.566: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=2 w=3, f=10 e=0 d=0
14:38:31.566: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=3 w=4, f=10 e=1 d=0
14:38:31.566: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=4 w=5, f=11 e=1 d=0
14:38:31.566: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=5 w=6, f=8 e=25 d=0
14:38:31.566: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=6 w=7, f=12 e=1 d=0
14:38:31.566: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=7 w=8, f=15 e=24 d=0
14:38:31.566: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=8 w=15, f=13 e=1 d=0
14:38:31.566: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=9 w=865, f=14 e=1 d=0
14:38:31.566: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=10 w=1475, f=16 e=1 d=0
14:38:31.566: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=11 w=1476, f=17 e=1 d=0
14:38:31.566: 15889: debug : virEventPollCalculateTimeout:310 : Calculate expiry of 1 timers
14:38:31.566: 15889: debug : virEventPollCalculateTimeout:316 : Got a timeout scheduled for 1310881111566
14:38:31.566: 15889: debug : virEventPollCalculateTimeout:342 : Timeout at 1310881111566 due in 0 ms
14:38:31.566: 15889: debug : virEventPollRunOnce:605 : Poll on 11 handles 0x20d0030 timeout 0
14:38:31.566: 15889: debug : virEventPollRunOnce:616 : Poll got 1 event(s)
14:38:31.566: 15889: debug : virEventPollDispatchTimeouts:406 : Dispatch 1
14:38:31.566: 15889: debug : virEventPollUpdateTimeout:244 : Updating timer 1 timeout with -1 ms freq
14:38:31.566: 15889: debug : virEventPollInterruptLocked:686 : Skip interrupt, 1 1111071056
14:38:31.566: 15889: debug : virEventPollDispatchHandles:453 : Dispatch 11
14:38:31.566: 15889: debug : virEventPollDispatchHandles:467 : i=0 w=1
14:38:31.566: 15889: debug : virEventPollDispatchHandles:467 : i=1 w=2
14:38:31.566: 15889: debug : virEventPollDispatchHandles:467 : i=3 w=4
14:38:31.566: 15889: debug : virEventPollDispatchHandles:467 : i=4 w=5
14:38:31.566: 15889: debug : virEventPollDispatchHandles:467 : i=5 w=6
14:38:31.566: 15889: debug : virEventPollDispatchHandles:467 : i=6 w=7
14:38:31.566: 15889: debug : virEventPollDispatchHandles:467 : i=7 w=8
14:38:31.566: 15889: debug : virEventPollDispatchHandles:480 : Dispatch n=7 f=15 w=8 e=16 0x20cf860
14:38:31.566: 15889: debug : qemuMonitorIO:609 : Error on monitor Unable to read from monitor: Connection reset by peer
14:38:31.566: 15889: debug : virEventPollUpdateHandle:144 : Update handle w=8 e=12
14:38:31.566: 15889: debug : virEventPollInterruptLocked:686 : Skip interrupt, 1 1111071056
14:38:31.566: 15889: debug : qemuMonitorIO:632 : Triggering EOF callback
14:38:31.566: 15889: debug : qemuProcessHandleMonitorEOF:113 : Received EOF on 0x7faf4c060710 'kanda2'
14:38:31.566: 15889: debug : qemuProcessStop:2573 : Shutting down VM 'kanda2' pid=15918 migrated=0
14:38:31.566: 15892: debug : qemuMonitorSend:810 : Send command resulted in error Unable to read from monitor: Connection reset by peer
14:38:31.566: 15892: debug : virEventPollUpdateHandle:144 : Update handle w=8 e=12
14:38:31.566: 15892: debug : virEventPollInterruptLocked:690 : Interrupting
14:38:31.566: 15892: debug : qemuMonitorTextCommandWithHandler:242 : Receive command reply ret=-1 rxLength=0 rxBuffer='(null)'
14:38:31.566: 15892: error : qemuMonitorTextArbitraryCommand:2686 : operation failed: failed to run cmd 'x/8xb 0xffffffff805e3010'
14:38:31.575: 15889: debug : qemuMonitorClose:757 : mon=0x20cf860
14:38:31.575: 15889: debug : virEventPollRemoveHandle:171 : Remove handle w=8
14:38:31.575: 15889: debug : virEventPollRemoveHandle:184 : mark delete 7 15
14:38:31.575: 15889: debug : virEventPollInterruptLocked:686 : Skip interrupt, 1 1111071056
14:38:31.578: 15889: debug : qemuProcessKill:2522 : vm=kanda2 pid=15918
14:38:31.605: 15889: debug : virEventPollUpdateTimeout:244 : Updating timer 1 timeout with 0 ms freq
14:38:31.605: 15889: debug : virEventPollInterruptLocked:686 : Skip interrupt, 1 1111071056
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=8 w=15
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=9 w=865
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=10 w=1475
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=11 w=1476
14:38:31.605: 15889: debug : virEventPollCleanupTimeouts:498 : Cleanup 1
14:38:31.605: 15889: debug : virEventPollCleanupHandles:545 : Cleanup 12
14:38:31.605: 15889: debug : qemuMonitorFree:210 : mon=0x20cf860

XXX The line below is from the VIR_ERROR I added.

14:38:31.605: 15889: error : qemuProcessHandleMonitorDestroy:664 : mon=0x20cf860 priv->mon is now NULL.

14:38:31.605: 15889: debug : virDomainObjUnref:1121 : obj=0x7faf4c060710 refs=3
14:38:31.605: 15889: debug : virEventRunDefaultImpl:188 : running default event implementation
14:38:31.605: 15889: debug : virEventPollCleanupTimeouts:498 : Cleanup 1
14:38:31.605: 15889: debug : virEventPollCleanupHandles:545 : Cleanup 11
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=0 w=1, f=4 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=1 w=2, f=6 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=2 w=3, f=10 e=0 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=3 w=4, f=10 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=4 w=5, f=11 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=5 w=6, f=8 e=25 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=6 w=7, f=12 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=7 w=15, f=13 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=8 w=865, f=14 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=9 w=1475, f=16 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=10 w=1476, f=17 e=1 d=0
14:38:31.605: 15889: debug : virEventPollCalculateTimeout:310 : Calculate expiry of 1 timers
14:38:31.605: 15889: debug : virEventPollCalculateTimeout:316 : Got a timeout scheduled for 1310881111605
14:38:31.605: 15889: debug : virEventPollCalculateTimeout:342 : Timeout at 1310881111605 due in 0 ms
14:38:31.605: 15889: debug : virEventPollRunOnce:605 : Poll on 10 handles 0x20d0030 timeout 0
14:38:31.605: 15889: debug : virEventPollRunOnce:616 : Poll got 1 event(s)
14:38:31.605: 15889: debug : virEventPollDispatchTimeouts:406 : Dispatch 1
14:38:31.605: 15889: debug : virEventPollUpdateTimeout:244 : Updating timer 1 timeout with -1 ms freq
14:38:31.605: 15889: debug : virEventPollInterruptLocked:686 : Skip interrupt, 1 1111071056
14:38:31.605: 15889: debug : virEventPollDispatchHandles:453 : Dispatch 10
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=0 w=1
14:38:31.605: 15889: debug : virEventPollDispatchHandles:480 : Dispatch n=0 f=4 w=1 e=1 (nil)
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=1 w=2
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=3 w=4
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=4 w=5
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=5 w=6
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=6 w=7
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=7 w=15
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=8 w=865
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=9 w=1475
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=10 w=1476
14:38:31.605: 15889: debug : virEventPollCleanupTimeouts:498 : Cleanup 1
14:38:31.605: 15889: debug : virEventPollCleanupHandles:545 : Cleanup 11
14:38:31.605: 15889: debug : virEventRunDefaultImpl:188 : running default event implementation
14:38:31.605: 15889: debug : virEventPollCleanupTimeouts:498 : Cleanup 1
14:38:31.605: 15889: debug : virEventPollCleanupHandles:545 : Cleanup 11
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=0 w=1, f=4 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=1 w=2, f=6 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=2 w=3, f=10 e=0 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=3 w=4, f=10 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=4 w=5, f=11 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=5 w=6, f=8 e=25 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=6 w=7, f=12 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=7 w=15, f=13 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=8 w=865, f=14 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=9 w=1475, f=16 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=10 w=1476, f=17 e=1 d=0
14:38:31.605: 15889: debug : virEventPollCalculateTimeout:310 : Calculate expiry of 1 timers
14:38:31.605: 15889: debug : virEventPollCalculateTimeout:342 : Timeout at 0 due in -1 ms
14:38:31.605: 15889: debug : virEventPollRunOnce:605 : Poll on 10 handles 0x20cf9d0 timeout -1
14:38:31.605: 15892: debug : virDomainObjUnref:1121 : obj=0x7faf4c060710 refs=2
14:38:31.605: 15892: debug : virDomainFree:2117 : dom=0x7faf4c013680, (VM: name=kanda2, uuid=4b597033-4e6e-a5f6-e4a6-69f1121ac754),
14:38:31.605: 15892: debug : virUnrefDomain:276 : unref domain 0x7faf4c013680 kanda2 1
14:38:31.605: 15892: debug : virReleaseDomain:238 : release domain 0x7faf4c013680 kanda2 4b597033-4e6e-a5f6-e4a6-69f1121ac754
14:38:31.605: 15892: debug : virReleaseDomain:246 : unref connection 0x7faf4c012e90 2
14:38:31.605: 15892: debug : remoteSerializeError:132 : prog=536903815 ver=1 proc=1 type=1 serial=3, msg=operation failed: failed to run cmd 'x/8xb 0xffffffff805e3010'
14:38:31.605: 15892: debug : virEventPollUpdateHandle:144 : Update handle w=1475 e=3
14:38:31.605: 15892: debug : virEventPollInterruptLocked:690 : Interrupting
14:38:31.605: 15889: debug : virEventPollRunOnce:616 : Poll got 1 event(s)
14:38:31.605: 15889: debug : virEventPollDispatchTimeouts:406 : Dispatch 1
14:38:31.605: 15889: debug : virEventPollDispatchHandles:453 : Dispatch 10
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=0 w=1
14:38:31.605: 15889: debug : virEventPollDispatchHandles:480 : Dispatch n=0 f=4 w=1 e=1 (nil)
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=1 w=2
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=3 w=4
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=4 w=5
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=5 w=6
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=6 w=7
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=7 w=15
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=8 w=865
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=9 w=1475
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=10 w=1476
14:38:31.605: 15889: debug : virEventPollCleanupTimeouts:498 : Cleanup 1
14:38:31.605: 15889: debug : virEventPollCleanupHandles:545 : Cleanup 11
14:38:31.605: 15889: debug : virEventRunDefaultImpl:188 : running default event implementation
14:38:31.605: 15889: debug : virEventPollCleanupTimeouts:498 : Cleanup 1
14:38:31.605: 15889: debug : virEventPollCleanupHandles:545 : Cleanup 11
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=0 w=1, f=4 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=1 w=2, f=6 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=2 w=3, f=10 e=0 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=3 w=4, f=10 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=4 w=5, f=11 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=5 w=6, f=8 e=25 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=6 w=7, f=12 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=7 w=15, f=13 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=8 w=865, f=14 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=9 w=1475, f=16 e=5 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=10 w=1476, f=17 e=1 d=0
14:38:31.605: 15889: debug : virEventPollCalculateTimeout:310 : Calculate expiry of 1 timers
14:38:31.605: 15889: debug : virEventPollCalculateTimeout:342 : Timeout at 0 due in -1 ms
14:38:31.605: 15889: debug : virEventPollRunOnce:605 : Poll on 10 handles 0x20d0030 timeout -1
14:38:31.605: 15889: debug : virEventPollRunOnce:616 : Poll got 1 event(s)
14:38:31.605: 15889: debug : virEventPollDispatchTimeouts:406 : Dispatch 1
14:38:31.605: 15889: debug : virEventPollDispatchHandles:453 : Dispatch 10
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=0 w=1
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=1 w=2
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=3 w=4
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=4 w=5
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=5 w=6
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=6 w=7
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=7 w=15
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=8 w=865
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=9 w=1475
14:38:31.605: 15889: debug : virEventPollDispatchHandles:480 : Dispatch n=9 f=16 w=1475 e=4 0x2039300
14:38:31.605: 15889: debug : virEventPollUpdateHandle:144 : Update handle w=1475 e=1
14:38:31.605: 15889: debug : virEventPollInterruptLocked:686 : Skip interrupt, 1 1111071056
14:38:31.605: 15889: debug : virEventPollDispatchHandles:467 : i=10 w=1476
14:38:31.605: 15889: debug : virEventPollCleanupTimeouts:498 : Cleanup 1
14:38:31.605: 15889: debug : virEventPollCleanupHandles:545 : Cleanup 11
14:38:31.605: 15889: debug : virEventRunDefaultImpl:188 : running default event implementation
14:38:31.605: 15889: debug : virEventPollCleanupTimeouts:498 : Cleanup 1
14:38:31.605: 15889: debug : virEventPollCleanupHandles:545 : Cleanup 11
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=0 w=1, f=4 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=1 w=2, f=6 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=2 w=3, f=10 e=0 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=3 w=4, f=10 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=4 w=5, f=11 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=5 w=6, f=8 e=25 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=6 w=7, f=12 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=7 w=15, f=13 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=8 w=865, f=14 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=9 w=1475, f=16 e=1 d=0
14:38:31.605: 15889: debug : virEventPollMakePollFDs:374 : Prepare n=10 w=1476, f=17 e=1 d=0
14:38:31.605: 15889: debug : virEventPollCalculateTimeout:310 : Calculate expiry of 1 timers
14:38:31.605: 15889: debug : virEventPollCalculateTimeout:342 : Timeout at 0 due in -1 ms
14:38:31.605: 15889: debug : virEventPollRunOnce:605 : Poll on 10 handles 0x20d0030 timeout -1


     ====== end of log =====

Aborted (core dumped)


(gdb) bt
#0  0x00007faf5553ded5 in raise () from /lib/libc.so.6
#1  0x00007faf5553f385 in abort () from /lib/libc.so.6
#2  0x00007faf55536dc9 in __assert_fail () from /lib/libc.so.6
#3  0x0000000000469547 in qemuDomainObjEnterMonitorWithDriver (
    driver=0x7faf4c002370, obj=0x7faf4c060710) at qemu/qemu_domain.c:663
#4  0x0000000000446a9b in qemuDomainMonitorCommand (
    domain=<value optimized out>,
    cmd=0x7faf4c013880 "x/8xb 0xffffffff805e3010", result=0x41979f10, flags=0)
    at qemu/qemu_driver.c:7885
#5  0x00007faf5826fa08 in virDomainQemuMonitorCommand (domain=0x7faf4c013840,
    cmd=0x7faf4c013880 "x/8xb 0xffffffff805e3010", result=0x41979f10, flags=0)
    at libvirt-qemu.c:69
#6  0x0000000000423316 in qemuDispatchMonitorCommand (
    server=<value optimized out>, client=<value optimized out>,
    conn=<value optimized out>, hdr=<value optimized out>, rerr=0x41979e30,
    args=<value optimized out>, ret=0x41979f10) at remote.c:2649
#7  0x0000000000436e87 in remoteDispatchClientRequest (server=0x2039300,
    client=0x20d8930, msg=0x20dc750) at dispatch.c:516
#8  0x000000000041f903 in qemudWorker (data=0x203ccf0) at libvirtd.c:1619
#9  0x00007faf55a69fc7 in start_thread () from /lib/libpthread.so.0
#10 0x00007faf555db5ad in clone () from /lib/libc.so.6


(gdb) info thread
  7 process 15892  0x00007faf55a6dd29 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib/libpthread.so.0
  6 process 15891  0x00007faf55a6dd29 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib/libpthread.so.0
  5 process 15893  0x00007faf55a6dd29 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib/libpthread.so.0
  4 process 15894  0x00007faf55a6dd29 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib/libpthread.so.0
  3 process 15888  0x00007faf55a6a715 in pthread_join ()
   from /lib/libpthread.so.0
  2 process 15889  0x00007faf555d2b66 in poll () from /lib/libc.so.6
* 1 process 15890  0x00007faf5553ded5 in raise () from /lib/libc.so.6

(gdb) frame 3
#3  0x0000000000469547 in qemuDomainObjEnterMonitorWithDriver (
    driver=0x7faf4c002370, obj=0x7faf4c060710) at qemu/qemu_domain.c:663
663     assert(priv && priv->mon); // XXX
(gdb) l
658     void qemuDomainObjEnterMonitorWithDriver(struct qemud_driver *driver,
659                                              virDomainObjPtr obj)
660     {
661         qemuDomainObjPrivatePtr priv = obj->privateData;
662
663     assert(priv && priv->mon); // XXX
664         qemuMonitorLock(priv->mon);
665         qemuMonitorRef(priv->mon);
666         virDomainObjUnlock(obj);
667         qemuDriverUnlock(driver);


(gdb) p priv
$1 = (qemuDomainObjPrivatePtr) 0x7faf4c00bd70
(gdb) p priv->mon
$2 = (qemuMonitorPtr) 0x0

(gdb) p *priv
$1 = {jobCond = {cond = {__data = {__lock = 0, __futex = 52, __total_seq = 26,
        __wakeup_seq = 26, __woken_seq = 26, __mutex = 0x7faf4c060710,
        __nwaiters = 0, __broadcast_seq = 0},
      __size = "\000\000\000\0004\000\000\000\032\000\000\000\000\000\000\000\032\000\000\000\000\000\000\000\032\000\000\000\000\000\000\000\020\a\006Lッ\177\000\000\000\000\000\000\000\000\000", __align = 223338299392}}, signalCond = {
    cond = {__data = {__lock = 0, __futex = 0, __total_seq = 0,
        __wakeup_seq = 0, __woken_seq = 0, __mutex = 0x0, __nwaiters = 0,
        __broadcast_seq = 0}, __size = '\0' <repeats 47 times>, __align = 0}},
  jobActive = QEMU_JOB_UNSPECIFIED, jobSignals = 0, jobSignalsData = {
    migrateDowntime = 0, migrateBandwidth = 0, statDevName = 0x0,
    blockStat = 0x0, statRetCode = 0x0, infoDevName = 0x0, blockInfo = 0x0,
    infoRetCode = 0x0}, jobInfo = {type = 0, timeElapsed = 0,
    timeRemaining = 0, dataTotal = 0, dataProcessed = 0, dataRemaining = 0,
    memTotal = 0, memProcessed = 0, memRemaining = 0, fileTotal = 0,
    fileProcessed = 0, fileRemaining = 0}, jobStart = 1310881111531,
  mon = 0x0, monConfig = 0x0, monJSON = 0, gotShutdown = false, nvcpupids = 0,
  vcpupids = 0x0, pciaddrs = 0x2027140, persistentAddrs = 1, qemuCaps = 0x0,
  lockState = 0x0}
(gdb) p *obj
$2 = {lock = {lock = {__data = {__lock = 1, __count = 0, __owner = 15890,
        __nusers = 1, __kind = 0, __spins = 0, __list = {__prev = 0x0,
          __next = 0x0}},
      __size = "\001\000\000\000\000\000\000\000\022>\000\000\001", '\0' <repeats 26 times>, __align = 1}}, refs = 2, pid = -1, state = {state = 5,
    reason = 1}, autostart = 0, persistent = 1, updated = 0, def = 0x20cfa30,
  newDef = 0x0, snapshots = {objs = 0x7faf4c008950}, current_snapshot = 0x0,
  privateData = 0x7faf4c00bd70,
  privateDataFreeFunc = 0x46a7d0 <qemuDomainObjPrivateFree>, taint = 0}

(gdb) thread apply all bt

Thread 7 (process 15892):
#0  0x00007faf55a6dd29 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib/libpthread.so.0
#1  0x00007faf57e687ca in virCondWait (c=0x203932c, m=0x80)
    at util/threads-pthread.c:117
#2  0x000000000041f86d in qemudWorker (data=0x203cd20) at libvirtd.c:1597
#3  0x00007faf55a69fc7 in start_thread () from /lib/libpthread.so.0
#4  0x00007faf555db5ad in clone () from /lib/libc.so.6
#5  0x0000000000000000 in ?? ()

Thread 6 (process 15891):
#0  0x00007faf55a6dd29 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib/libpthread.so.0
#1  0x00007faf57e687ca in virCondWait (c=0x203932c, m=0x80)
    at util/threads-pthread.c:117
#2  0x000000000041f86d in qemudWorker (data=0x203cd08) at libvirtd.c:1597
#3  0x00007faf55a69fc7 in start_thread () from /lib/libpthread.so.0
#4  0x00007faf555db5ad in clone () from /lib/libc.so.6
#5  0x0000000000000000 in ?? ()

Thread 5 (process 15893):
#0  0x00007faf55a6dd29 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib/libpthread.so.0
---Type <return> to continue, or q <return> to quit---
#1  0x00007faf57e687ca in virCondWait (c=0x203932c, m=0x80)
    at util/threads-pthread.c:117
#2  0x000000000041f86d in qemudWorker (data=0x203cd38) at libvirtd.c:1597
#3  0x00007faf55a69fc7 in start_thread () from /lib/libpthread.so.0
#4  0x00007faf555db5ad in clone () from /lib/libc.so.6
#5  0x0000000000000000 in ?? ()

Thread 4 (process 15894):
#0  0x00007faf55a6dd29 in pthread_cond_wait@@GLIBC_2.3.2 ()
   from /lib/libpthread.so.0
#1  0x00007faf57e687ca in virCondWait (c=0x203932c, m=0x80)
    at util/threads-pthread.c:117
#2  0x000000000041f86d in qemudWorker (data=0x203cd50) at libvirtd.c:1597
#3  0x00007faf55a69fc7 in start_thread () from /lib/libpthread.so.0
#4  0x00007faf555db5ad in clone () from /lib/libc.so.6
#5  0x0000000000000000 in ?? ()

Thread 3 (process 15888):
#0  0x00007faf55a6a715 in pthread_join () from /lib/libpthread.so.0
#1  0x0000000000422c85 in main (argc=<value optimized out>,
    argv=<value optimized out>) at libvirtd.c:3418

Thread 2 (process 15889):
#0  0x00007faf555d2b66 in poll () from /lib/libc.so.6
---Type <return> to continue, or q <return> to quit---
#1  0x00007faf57e5645a in virEventPollRunOnce () at util/event_poll.c:606
#2  0x00007faf57e554c5 in virEventRunDefaultImpl () at util/event.c:191
#3  0x000000000041e609 in qemudOneLoop () at libvirtd.c:2277
#4  0x000000000041eb48 in qemudRunLoop (opaque=0x2039300) at libvirtd.c:2387
#5  0x00007faf55a69fc7 in start_thread () from /lib/libpthread.so.0
#6  0x00007faf555db5ad in clone () from /lib/libc.so.6
#7  0x0000000000000000 in ?? ()

Thread 1 (process 15890):
#0  0x00007faf5553ded5 in raise () from /lib/libc.so.6
#1  0x00007faf5553f385 in abort () from /lib/libc.so.6
#2  0x00007faf55536dc9 in __assert_fail () from /lib/libc.so.6
#3  0x0000000000469547 in qemuDomainObjEnterMonitorWithDriver (
    driver=0x7faf4c002370, obj=0x7faf4c060710) at qemu/qemu_domain.c:663
#4  0x0000000000446a9b in qemuDomainMonitorCommand (
    domain=<value optimized out>,
    cmd=0x7faf4c013880 "x/8xb 0xffffffff805e3010", result=0x41979f10, flags=0)
    at qemu/qemu_driver.c:7885
#5  0x00007faf5826fa08 in virDomainQemuMonitorCommand (domain=0x7faf4c013840,
    cmd=0x7faf4c013880 "x/8xb 0xffffffff805e3010", result=0x41979f10, flags=0)
    at libvirt-qemu.c:69
#6  0x0000000000423316 in qemuDispatchMonitorCommand (
    server=<value optimized out>, client=<value optimized out>,
    conn=<value optimized out>, hdr=<value optimized out>, rerr=0x41979e30,
---Type <return> to continue, or q <return> to quit---
    args=<value optimized out>, ret=0x41979f10) at remote.c:2649
#7  0x0000000000436e87 in remoteDispatchClientRequest (server=0x2039300,
    client=0x20d8930, msg=0x20dc750) at dispatch.c:516
#8  0x000000000041f903 in qemudWorker (data=0x203ccf0) at libvirtd.c:1619
#9  0x00007faf55a69fc7 in start_thread () from /lib/libpthread.so.0
#10 0x00007faf555db5ad in clone () from /lib/libc.so.6
#11 0x0000000000000000 in ?? ()

Comment 1 Daniel Veillard 2011-07-18 03:56:28 UTC
Could you try again with 0.9.3? This area got changes in that release.
You can also try the rpms from my yum repo for the upcoming RHEL-6.2
at http://veillard.com/libvirt/6.2.

  Thanks,

Daniel

Comment 2 Osier Yang 2011-07-18 12:15:12 UTC
This is a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=697762

Comment 3 Alex Jia 2011-07-19 07:32:41 UTC
(In reply to comment #1)
> Could you try again with 0.9.3, this area got changes in that release.
> You can also try the rpms from my yum repo for the upcoming RHEL-6.2
> at http://veillard.com/libvirt/6.2,
> 
>   Thanks,
> 
> Daniel

Hi Daniel, 

It's fine with libvirt-0.9.3-5.el6.x86_64; libvirtd works well.

Alex

Comment 4 Alex Jia 2011-07-19 07:40:50 UTC
(In reply to comment #0)

> 
> How reproducible:
> Once in 10 tries. 
> 
> Steps to Reproduce:
> 1. virsh start domain
> 
> 2. Run
> while true
> do virsh qemu-monitor-command domain "info version or something else. I used
> mempeek x."
You are missing the --hmp option before "info mempeek", or perhaps you omitted it deliberately. Either way, I tried both cases many times, and libvirtd always stays running and works well with libvirt-0.9.3-5.el6.x86_64.

Alex
> done
> 
> 3. virsh shutdown domain
> 
> Actual results:
> libvirtd crashes.

Comment 5 motohiro.kanda.nx 2011-07-21 06:27:56 UTC
My app runs on a distribution which does not have polkit-1.
Stock libvirt-0.9.3 and 0.9.3-7.el6.src.rpm fail to build when HAVE_POLKIT0 is enabled:
libvirtd.c:580: error: 'auth_unix_rw' undeclared
libvirtd.c:588: error: 'server' undeclared
Would you please send me code that works with polkit 0.9 so that I can
test whether this case is fixed?
Thanks.

Comment 6 Osier Yang 2011-07-26 10:13:40 UTC
(In reply to comment #5)
> My app runs on a distribution which does not have polkit-1.
> Stock libvirt-0.9.3 and 0.9.3-7.el6.src.rpm fails to build if HAVE_POLKIT0 is
> enabled.
> libvirtd.c:580: error: 'auth_unix_rw' undeclared
> libvirtd.c:588: error: 'server' undeclared
> Will you please drop me a code which goes well with polkit 0.9 so that I can
> test if this case is fixed?
> Thanks.

Hi Motohiro, you can compile it with "--with-polkit=no" 

Osier

Comment 7 Osier Yang 2011-07-26 13:47:35 UTC
As said in comment 2, I am closing this as a DUPLICATE of https://bugzilla.redhat.com/show_bug.cgi?id=697762: both bugs describe a libvirtd crash caused by priv->mon being freed while another thread still intends to use it.

Motohiro, please refer to bug 697762 for progress.

*** This bug has been marked as a duplicate of bug 697762 ***

