Bug 1353528 - virt-manager can hang if remote host goes down
Summary: virt-manager can hang if remote host goes down
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Virtualization Tools
Classification: Community
Component: virt-manager
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Cole Robinson
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2016-07-07 11:41 UTC by Dr. David Alan Gilbert
Modified: 2018-02-28 20:30 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-02-28 20:30:27 UTC



Description Dr. David Alan Gilbert 2016-07-07 11:41:01 UTC
Description of problem:
virt-manager is hung solid: it is not redrawing either the machine window I have open or the window listing all the machines/connections.  I'm not quite sure when it hung, but it's been well over 10 minutes now.  I had rebooted one of the hosts, but I think I reconnected after that.
See backtrace below.

Last thing in virt-manager.log is:


[Thu, 07 Jul 2016 11:54:37 virt-manager 4338] DEBUG (connection:1052) domain=f23q35efib status=Shutoff added
[Thu, 07 Jul 2016 11:54:37 virt-manager 4338] DEBUG (connection:1052) interface=brpair status=Active added
[Thu, 07 Jul 2016 11:54:38 virt-manager 4338] DEBUG (connection:1052) interface=lo status=Active added
[Thu, 07 Jul 2016 11:54:38 virt-manager 4338] DEBUG (connection:1052) domain=rhel5 status=Shutoff added
[Thu, 07 Jul 2016 11:54:39 virt-manager 4338] DEBUG (connection:1052) domain=rdma status=Shutoff added
[Thu, 07 Jul 2016 11:54:40 virt-manager 4338] DEBUG (connection:1052) domain=f23-i440fx status=Shutoff added
[Thu, 07 Jul 2016 11:54:54 virt-manager 4338] DEBUG (connection:1052) pool=home status=Active added
[Thu, 07 Jul 2016 11:54:54 virt-manager 4338] DEBUG (connection:1052) pool=boot-scratch status=Active added
[Thu, 07 Jul 2016 11:54:56 virt-manager 4338] DEBUG (connection:1052) pool=localhome status=Active added
[Thu, 07 Jul 2016 11:55:02 virt-manager 4338] DEBUG (connection:569) conn=qemu+ssh://root@vl403/system changed to state=Active

Version-Release number of selected component (if applicable):
virt-manager-1.4.0-3.fc24.noarch
libvirt-client-1.3.3.1-4.fc24.x86_64
libusbx-1.0.21-0.1.git448584a.fc24.x86_64
spice-glib-0.32-1.fc24.x86_64

How reproducible:
unknown - just happened now

Steps to Reproduce:
1. Unknown; I'm connected to ~3 remote hosts (all running various rhel7.x variants) all over ssh and a long VPN
2.
3.

Actual results:
solid hang

Expected results:
happiness

Additional info:
(gdb) thread apply all bt f

Thread 8 (Thread 0x7f97a4d79700 (LWP 4708)):
#0  0x00007f97fa56432d in poll () at ../sysdeps/unix/syscall-template.S:84
#1  0x00007f97c8ea53fd in handle_events.part () at /lib64/libusb-1.0.so.0
#2  0x00007f97c8ea6420 in libusb_handle_events_timeout_completed () at /lib64/libusb-1.0.so.0
#3  0x00007f97c8ea651f in libusb_handle_events () at /lib64/libusb-1.0.so.0
#4  0x00007f97cbbdb320 in spice_usb_device_manager_usb_ev_thread () at /lib64/libspice-client-glib-2.0.so.8
#5  0x00007f97f145ad38 in g_thread_proxy () at /lib64/libglib-2.0.so.0
#6  0x00007f97faf475ca in start_thread (arg=0x7f97a4d79700) at pthread_create.c:333
        __res = <optimized out>
        pd = 0x7f97a4d79700
        now = <optimized out>
        unwind_buf = 
              {cancel_jmp_buf = {{jmp_buf = {140289282381568, 7585381372103377334, 140733429243471, 4096, 140289282381568, 140289282382272, -7607712799563300426, -7607910438184140362}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#7  0x00007f97fa56fead in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

Thread 7 (Thread 0x7f97caffd700 (LWP 4707)):
#0  0x00007f97fa56432d in poll () at ../sysdeps/unix/syscall-template.S:84
#1  0x00007f97c8eab9f1 in linux_udev_event_thread_main () at /lib64/libusb-1.0.so.0
#2  0x00007f97faf475ca in start_thread (arg=0x7f97caffd700) at pthread_create.c:333
        __res = <optimized out>
        pd = 0x7f97caffd700
        now = <optimized out>
        unwind_buf = 
              {cancel_jmp_buf = {{jmp_buf = {140289922553600, 7585381372103377334, 140733429243359, 4096, 140289922553600, 140289922554304, -7607945550283521610, -7607910438184140362}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#3  0x00007f97fa56fead in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

Thread 6 (Thread 0x7f97ca7fc700 (LWP 4391)):
#0  0x00007f97faf4f0c7 in do_futex_wait (private=0, abstime=0x0, expected=0, futex_word=0x7f97b00008c0)
    at ../sysdeps/unix/sysv/linux/futex-internal.h:205
        __ret = -512
        oldtype = 0
        err = <optimized out>
#1  0x00007f97faf4f0c7 in do_futex_wait (sem=sem@entry=0x7f97b00008c0, abstime=0x0) at sem_waitcommon.c:111
#2  0x00007f97faf4f174 in __new_sem_wait_slow (sem=0x7f97b00008c0, abstime=0x0) at sem_waitcommon.c:181
        _buffer = 
          {__routine = 0x7f97faf4f080 <__sem_wait_cleanup>, __arg = 0x7f97b00008c0, __canceltype = 2, __prev = 0x0}
        err = <optimized out>
        d = 0
#3  0x00007f97faf4f21a in __new_sem_wait (sem=<optimized out>) at sem_wait.c:29
#4  0x00007f97fb26cf85 in PyThread_acquire_lock () at /lib64/libpython2.7.so.1.0
#5  0x00007f97fb270de2 in lock_PyThread_acquire_lock () at /lib64/libpython2.7.so.1.0
#6  0x00007f97fb23eaac in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#7  0x00007f97fb24176c in PyEval_EvalCodeEx () at /lib64/libpython2.7.so.1.0
#8  0x00007f97fb23e6be in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#9  0x00007f97fb24176c in PyEval_EvalCodeEx () at /lib64/libpython2.7.so.1.0
#10 0x00007f97fb23e6be in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#11 0x00007f97fb24176c in PyEval_EvalCodeEx () at /lib64/libpython2.7.so.1.0
#12 0x00007f97fb1ca91d in function_call () at /lib64/libpython2.7.so.1.0
#13 0x00007f97fb1a5ed3 in PyObject_Call () at /lib64/libpython2.7.so.1.0
#14 0x00007f97fb23bbb7 in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#15 0x00007f97fb23e792 in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#16 0x00007f97fb23e792 in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#17 0x00007f97fb24176c in PyEval_EvalCodeEx () at /lib64/libpython2.7.so.1.0
#18 0x00007f97fb1ca83c in function_call () at /lib64/libpython2.7.so.1.0
#19 0x00007f97fb1a5ed3 in PyObject_Call () at /lib64/libpython2.7.so.1.0
#20 0x00007f97fb1b4d2c in instancemethod_call () at /lib64/libpython2.7.so.1.0
#21 0x00007f97fb1a5ed3 in PyObject_Call () at /lib64/libpython2.7.so.1.0
#22 0x00007f97fb237847 in PyEval_CallObjectWithKeywords () at /lib64/libpython2.7.so.1.0
#23 0x00007f97fb2711d2 in t_bootstrap () at /lib64/libpython2.7.so.1.0
#24 0x00007f97faf475ca in start_thread (arg=0x7f97ca7fc700) at pthread_create.c:333
        __res = <optimized out>
        pd = 0x7f97ca7fc700
        now = <optimized out>
        unwind_buf = 
              {cancel_jmp_buf = {{jmp_buf = {140289914160896, 7585381372103377334, 140733429243679, 4096, 140289914160896, 140289914161600, -7607946651405762122, -7607910438184140362}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#25 0x00007f97fa56fead in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

Thread 5 (Thread 0x7f97d966a700 (LWP 4343)):
#0  0x00007f97faf4fafd in __lll_lock_wait () at ../sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x00007f97faf49a0d in __GI___pthread_mutex_lock (mutex=mutex@entry=0x7f97bc003ee0)
    at ../nptl/pthread_mutex_lock.c:80
        type = 4210359037
        id = <optimized out>
#2  0x00007f97ee059f05 in virMutexLock (m=m@entry=0x7f97bc003ee0) at util/virthread.c:89
#3  0x00007f97ee04023a in virObjectLock (anyobj=anyobj@entry=0x7f97bc003ed0) at util/virobject.c:323
        obj = 0x7f97bc003ed0
        __func__ = "virObjectLock"
#4  0x00007f97ee163421 in virNetClientSendWithReply (client=client@entry=0x7f97bc003ed0, msg=msg@entry=0x7f97c4364930) at rpc/virnetclient.c:2010
        ret = <optimized out>
#5  0x00007f97ee163be2 in virNetClientProgramCall (prog=prog@entry=0x7f97bc12f300, client=client@entry=0x7f97bc003ed0, serial=serial@entry=325, proc=proc@entry=6, noutfds=noutfds@entry=0, outfds=outfds@entry=0x0, ninfds=0x0, infds=0x0, args_filter=0x7f97fa5a3a80 <__GI_xdr_void>, args=0x0, ret_filter=0x7f97ee157ce0 <xdr_remote_node_get_info_ret>, ret=0x7f97d9668d10) at rpc/virnetclientprogram.c:329
        msg = 0x7f97c4364930
        i = <optimized out>
        __FUNCTION__ = "virNetClientProgramCall"
#6  0x00007f97ee13b5e4 in callFull (priv=priv@entry=0x7f97bc004c90, flags=flags@entry=0, fdin=fdin@entry=0x0, fdinlen=fdinlen@entry=0, fdout=fdout@entry=0x0, fdoutlen=fdoutlen@entry=0x0, proc_nr=6, args_filter=0x7f97fa5a3a80 <__GI_xdr_void>, args=0x0, ret_filter=0x7f97ee157ce0 <xdr_remote_node_get_info_ret>, ret=0x7f97d9668d10 "", conn=<optimized out>) at remote/remote_driver.c:6054
        rv = <optimized out>
        prog = 0x7f97bc12f300
        counter = 325
        client = 0x7f97bc003ed0
#7  0x00007f97ee1456e1 in remoteNodeGetInfo (conn=<optimized out>, ret=0x7f97d9668d10 "", ret_filter=<optimized out>, args=0x0, args_filter=<optimized out>, proc_nr=6, flags=0, priv=0x7f97bc004c90) at remote/remote_driver.c:6076
        rv = -1
        priv = 0x7f97bc004c90
        ret = 
          {model = '\000' <repeats 31 times>, memory = 0, cpus = 0, mhz = 0, nodes = 0, sockets = 0, cores = 0, threads = 0}
#8  0x00007f97ee1456e1 in remoteNodeGetInfo (conn=<optimized out>, result=0x7f97d9668db0)
    at remote/remote_client_bodies.h:6088
        rv = -1
        priv = 0x7f97bc004c90
        ret = 
          {model = '\000' <repeats 31 times>, memory = 0, cpus = 0, mhz = 0, nodes = 0, sockets = 0, cores = 0, threads = 0}
#9  0x00007f97ee11855c in virNodeGetInfo (conn=0x7f97bc0045d0, info=0x7f97d9668db0) at libvirt-host.c:365
        ret = <optimized out>
        __func__ = "virNodeGetInfo"
        __FUNCTION__ = "virNodeGetInfo"
#10 0x00007f97ee55ddef in libvirt_virNodeGetInfo () at /usr/lib64/python2.7/site-packages/libvirtmod.so
#11 0x00007f97fb23eaac in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#12 0x00007f97fb23e792 in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#13 0x00007f97fb24176c in PyEval_EvalCodeEx () at /lib64/libpython2.7.so.1.0
#14 0x00007f97fb1ca91d in function_call () at /lib64/libpython2.7.so.1.0
#15 0x00007f97fb1a5ed3 in PyObject_Call () at /lib64/libpython2.7.so.1.0
#16 0x00007f97fb23bbb7 in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#17 0x00007f97fb24176c in PyEval_EvalCodeEx () at /lib64/libpython2.7.so.1.0
#18 0x00007f97fb1ca91d in function_call () at /lib64/libpython2.7.so.1.0
#19 0x00007f97fb1a5ed3 in PyObject_Call () at /lib64/libpython2.7.so.1.0
#20 0x00007f97fb23bbb7 in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#21 0x00007f97fb24176c in PyEval_EvalCodeEx () at /lib64/libpython2.7.so.1.0
#22 0x00007f97fb1ca91d in function_call () at /lib64/libpython2.7.so.1.0
#23 0x00007f97fb1a5ed3 in PyObject_Call () at /lib64/libpython2.7.so.1.0
#24 0x00007f97fb23bbb7 in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#25 0x00007f97fb23e792 in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#26 0x00007f97fb23e792 in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#27 0x00007f97fb24176c in PyEval_EvalCodeEx () at /lib64/libpython2.7.so.1.0
#28 0x00007f97fb1ca83c in function_call () at /lib64/libpython2.7.so.1.0
#29 0x00007f97fb1a5ed3 in PyObject_Call () at /lib64/libpython2.7.so.1.0
#30 0x00007f97fb1b4d2c in instancemethod_call () at /lib64/libpython2.7.so.1.0
#31 0x00007f97fb1a5ed3 in PyObject_Call () at /lib64/libpython2.7.so.1.0
#32 0x00007f97fb237847 in PyEval_CallObjectWithKeywords () at /lib64/libpython2.7.so.1.0
#33 0x00007f97fb2711d2 in t_bootstrap () at /lib64/libpython2.7.so.1.0
#34 0x00007f97faf475ca in start_thread (arg=0x7f97d966a700) at pthread_create.c:333
        __res = <optimized out>
        pd = 0x7f97d966a700
        now = <optimized out>
        unwind_buf = 
              {cancel_jmp_buf = {{jmp_buf = {140290164172544, 7585381372103377334, 140733429251663, 4096, 140290164172544, 140290164173248, -7607983958565437002, -7607910438184140362}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#35 0x00007f97fa56fead in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

Thread 4 (Thread 0x7f97da8e4700 (LWP 4342)):
#0  0x00007f97fa56432d in poll () at ../sysdeps/unix/syscall-template.S:84
#1  0x00007f97f1434a46 in g_main_context_iterate.isra () at /lib64/libglib-2.0.so.0
#2  0x00007f97f1434dd2 in g_main_loop_run () at /lib64/libglib-2.0.so.0
#3  0x00007f97f0d3cf76 in gdbus_shared_thread_func () at /lib64/libgio-2.0.so.0
#4  0x00007f97f145ad38 in g_thread_proxy () at /lib64/libglib-2.0.so.0
#5  0x00007f97faf475ca in start_thread (arg=0x7f97da8e4700) at pthread_create.c:333
        __res = <optimized out>
        pd = 0x7f97da8e4700
        now = <optimized out>
        unwind_buf = 
              {cancel_jmp_buf = {{jmp_buf = {140290183546624, 7585381372103377334, 140290200329151, 4096, 140290183546624, 140290183547328, -7607981410576088650, -7607910438184140362}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#6  0x00007f97fa56fead in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

Thread 3 (Thread 0x7f97db0e5700 (LWP 4341)):
#0  0x00007f97fa56432d in poll () at ../sysdeps/unix/syscall-template.S:84
#1  0x00007f97f1434a46 in g_main_context_iterate.isra () at /lib64/libglib-2.0.so.0
#2  0x00007f97f1434b5c in g_main_context_iteration () at /lib64/libglib-2.0.so.0
#3  0x00007f97f1434ba1 in glib_worker_main () at /lib64/libglib-2.0.so.0
#4  0x00007f97f145ad38 in g_thread_proxy () at /lib64/libglib-2.0.so.0
#5  0x00007f97faf475ca in start_thread (arg=0x7f97db0e5700) at pthread_create.c:333
        __res = <optimized out>
        pd = 0x7f97db0e5700
        now = <optimized out>
        unwind_buf = 
              {cancel_jmp_buf = {{jmp_buf = {140290191939328, 7585381372103377334, 140290200328799, 4096, 140290191939328, 140290191940032, -7607980309453848138, -7607910438184140362}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#6  0x00007f97fa56fead in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

Thread 2 (Thread 0x7f97db8e6700 (LWP 4340)):
#0  0x00007f97fa56432d in poll () at ../sysdeps/unix/syscall-template.S:84
#1  0x00007f97f1434a46 in g_main_context_iterate.isra () at /lib64/libglib-2.0.so.0
#2  0x00007f97f1434b5c in g_main_context_iteration () at /lib64/libglib-2.0.so.0
#3  0x00007f97db8edfad in dconf_gdbus_worker_thread () at /usr/lib64/gio/modules/libdconfsettings.so
#4  0x00007f97f145ad38 in g_thread_proxy () at /lib64/libglib-2.0.so.0
#5  0x00007f97faf475ca in start_thread (arg=0x7f97db8e6700) at pthread_create.c:333
        __res = <optimized out>
        pd = 0x7f97db8e6700
        now = <optimized out>
        unwind_buf = 
              {cancel_jmp_buf = {{jmp_buf = {140290200332032, 7585381372103377334, 140733429248639, 4096, 140290200332032, 140290200332736, -7607979210479091274, -7607910438184140362}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#6  0x00007f97fa56fead in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109

Thread 1 (Thread 0x7f97fb714700 (LWP 4338)):
#0  0x00007f97faf4fdad in read () at ../sysdeps/unix/syscall-template.S:84
#1  0x00007f97ee01775b in saferead (__nbytes=1006, __buf=0x55d9e1afe573, __fd=60) at /usr/include/bits/unistd.h:44
        r = <optimized out>
        nread = 19
#2  0x00007f97ee01775b in saferead (fd=fd@entry=60, buf=0x55d9e1afe573, count=1006, count@entry=1025)
    at util/virfile.c:1029
        r = <optimized out>
        nread = 19
#3  0x00007f97ee017871 in saferead_lim (fd=60, max_len=max_len@entry=1025, length=length@entry=0x7fff0e0effa8)
    at util/virfile.c:1305
        count = <optimized out>
        requested = 1025
        buf = 0x55d9e1afe560 "Ncat: Broken pipe.\n\341\331U"
        alloc = 8193
        size = 0
        save_errno = <optimized out>
        __FUNCTION__ = "saferead_lim"
#4  0x00007f97ee017c90 in virFileReadLimFD (fd=<optimized out>, maxlen=maxlen@entry=1024, buf=buf@entry=0x7fff0e0effe0) at util/virfile.c:1356
        len = 140290509950635
        s = 0x7f97ee2636dd "Event fired %p %d"
#5  0x00007f97ee170c53 in virNetSocketReadWire (sock=sock@entry=0x7f97bc005200, buf=buf@entry=0x55d9dfc34190 "", len=len@entry=4) at rpc/virnetsocket.c:1591
        errout = 0x0
        ret = 0
        __FUNCTION__ = "virNetSocketReadWire"
#6  0x00007f97ee173b9e in virNetSocketRead (sock=0x7f97bc005200, buf=0x55d9dfc34190 "", len=4)
    at rpc/virnetsocket.c:1769
        ret = <optimized out>
#7  0x00007f97ee160323 in virNetClientIOHandleInput (client=0x7f97bc003ed0) at rpc/virnetclient.c:1247
        wantData = <optimized out>
        ret = <optimized out>
        ret = <optimized out>
#8  0x00007f97ee160323 in virNetClientIOHandleInput (client=client@entry=0x7f97bc003ed0) at rpc/virnetclient.c:1269
        ret = <optimized out>
#9  0x00007f97ee162498 in virNetClientIncomingEvent (sock=0x7f97bc005200, events=9, opaque=0x7f97bc003ed0)
    at rpc/virnetclient.c:1851
        client = 0x7f97bc003ed0
        __func__ = "virNetClientIncomingEvent"
#10 0x00007f97d966c7cd in gvir_event_handle_dispatch () at /lib64/libvirt-glib-1.0.so.0
#11 0x00007f97f1434703 in g_main_context_dispatch () at /lib64/libglib-2.0.so.0
#12 0x00007f97f1434ab0 in g_main_context_iterate.isra () at /lib64/libglib-2.0.so.0
#13 0x00007f97f1434b5c in g_main_context_iteration () at /lib64/libglib-2.0.so.0
#14 0x00007f97f0d0658d in g_application_run () at /lib64/libgio-2.0.so.0
#15 0x00007f97f11e8c58 in ffi_call_unix64 () at /lib64/libffi.so.6
#16 0x00007f97f11e86ba in ffi_call () at /lib64/libffi.so.6
#17 0x00007f97f1dacd9c in pygi_invoke_c_callable () at /usr/lib64/python2.7/site-packages/gi/_gi.so
#18 0x00007f97f1dae89a in pygi_function_cache_invoke () at /usr/lib64/python2.7/site-packages/gi/_gi.so
#19 0x00007f97f1da2649 in _callable_info_call () at /usr/lib64/python2.7/site-packages/gi/_gi.so
#20 0x00007f97fb1a5ed3 in PyObject_Call () at /lib64/libpython2.7.so.1.0
#21 0x00007f97fb23d5a6 in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#22 0x00007f97fb23e792 in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#23 0x00007f97fb23e792 in PyEval_EvalFrameEx () at /lib64/libpython2.7.so.1.0
#24 0x00007f97fb24176c in PyEval_EvalCodeEx () at /lib64/libpython2.7.so.1.0
#25 0x00007f97fb241859 in PyEval_EvalCode () at /lib64/libpython2.7.so.1.0
#26 0x00007f97fb25b08f in run_mod () at /lib64/libpython2.7.so.1.0
#27 0x00007f97fb25c2a2 in PyRun_FileExFlags () at /lib64/libpython2.7.so.1.0
#28 0x00007f97fb25d4b5 in PyRun_SimpleFileExFlags () at /lib64/libpython2.7.so.1.0
#29 0x00007f97fb26f4a0 in Py_Main () at /lib64/libpython2.7.so.1.0
#30 0x00007f97fa48d731 in __libc_start_main (main=
    0x55d9dc59a7b0 <main>, argc=2, argv=0x7fff0e0f0cb8, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7fff0e0f0ca8) at ../csu/libc-start.c:289
        result = <optimized out>
        unwind_buf = 
              {cancel_jmp_buf = {{jmp_buf = {0, -4398037060364648010, 94394193127360, 140733429255344, 0, 0, -7585561277949918794, -7607911577311493706}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x7fff0e0f0cd0, 0x7f97fb74d128}, data = {prev = 0x0, cleanup = 0x0, canceltype = 235867344}}}
        not_first_call = <optimized out>
#31 0x000055d9dc59a7e9 in _start ()
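[Editorial aside, not part of the original report: the backtrace above shows the failure mode clearly. Thread 1, the GTK main-loop thread, is blocked in an unbounded read() on the SSH tunnel's file descriptor (frames #0-#7, virNetSocketReadWire / virNetClientIOHandleInput), so no UI events are ever dispatched again once the remote host stops responding; Thread 5 is in turn stuck waiting on the RPC client lock that Thread 1 holds. The sketch below is a minimal stdlib illustration of that failure mode and the timeout-style remedy; it is not virt-manager or libvirt code.]

```python
import socket

# Two connected sockets stand in for the client and the (now silent) remote
# host. Without a timeout, recv() on `a` would block forever, exactly like
# the read() in frame #0 of Thread 1. A deadline turns the hang into an
# error the caller can surface.
a, b = socket.socketpair()
a.settimeout(0.2)          # bounded wait instead of blocking indefinitely

try:
    a.recv(4)              # peer sends nothing, like a dead host
    result = "data"
except socket.timeout:
    result = "timed out"   # hang converted into a recoverable error

print(result)
```

Running this prints "timed out" after 0.2 s; libvirt's keepalive protocol achieves the same effect at the RPC layer by failing pending calls when the transport goes quiet.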

Comment 1 Cole Robinson 2016-07-13 19:02:46 UTC
Likely a hang related to the remote host going down... I'd think the libvirt connection keepalive APIs would break things off if it hangs, but maybe there's some way we can get stuck. I'll need to try to reproduce.
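[Editorial aside: the keepalive mechanism mentioned above is exposed in the libvirt-python bindings as virConnect.setKeepAlive(). A hedged sketch of how a client could request it; the URI, interval, and count values are illustrative, not what virt-manager actually uses, and the application must also drive a libvirt event loop for the probes to be serviced.]

```python
def open_with_keepalive(uri, interval=5, count=5):
    """Open a read-only libvirt connection with keepalive probes enabled.

    If `count` consecutive probes (sent every `interval` seconds) go
    unanswered, libvirt marks the connection closed and pending RPC calls
    fail instead of blocking forever.
    """
    import libvirt  # deferred so this sketch parses without the bindings

    # Keepalive needs an event loop; the default impl must then be pumped
    # (virEventRunDefaultImpl) from a thread. virt-manager itself uses the
    # libvirt-glib integration instead.
    libvirt.virEventRegisterDefaultImpl()

    conn = libvirt.openReadOnly(uri)
    conn.setKeepAlive(interval, count)
    return conn
```

The function is only defined here, not called, since it needs a reachable libvirtd; e.g. `open_with_keepalive("qemu+ssh://root@host/system")` would mirror the connection in this report.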

Comment 2 David H. Gutteridge 2017-06-20 00:10:55 UTC
I can intermittently reproduce this issue with a local VM that has a habit of hanging. Sometimes this also causes virt-manager to become unresponsive indefinitely; I cannot use it to control the host, so I have to kill the underlying qemu process. (This is with virt-manager 1.4.1 on Fedora 25.) I don't have any sample logs on hand, nor have I attached a gdb session. I did try to profile the qemu process kernel side, since this is the underlying cause, but haven't come up with anything useful. (Frankly, of late, I've avoided that particular VM image in frustration.)

Comment 3 David H. Gutteridge 2017-06-20 00:13:29 UTC
(I meant to write "I cannot use it to control the VM", of course. Ahem.)

Comment 4 Victor Toso 2017-06-20 11:14:40 UTC
Hi David, this could be a deadlock in spice-gtk that has a fix on the mailing list [0]. I notice that you are on f24; could you check whether spice-gtk from this scratch build works for you?

https://koji.fedoraproject.org/koji/taskinfo?taskID=20070259

If one wants to give it a try on f25 -> f27, it should be doable with this other scratch build:

https://koji.fedoraproject.org/koji/taskinfo?taskID=20070247

[0] https://lists.freedesktop.org/archives/spice-devel/2017-June/038343.html

Comment 5 David H. Gutteridge 2017-09-02 18:38:33 UTC
(In reply to Victor Toso from comment #4)
> Hi David, this could be a deadlock in spice-gtk which has a fix in the
> mailing list [0]. I notice that you are on f24, could you check using
> spice-gtk from this scratch build works for you?

Sorry for the delay; I hadn't had time to look at this until recently. I'm running Fedora 26 now, with spice-gtk 0.34, which I see from the Git change log has the referenced fix included. I haven't been able to duplicate virt-manager hangs since I began testing earlier this week. (I have still been able to duplicate the VM freezing, though that happens less frequently and requires more time with equivalent loads, which I chalk up to a much newer version of QEMU.)

Comment 6 Cole Robinson 2018-02-28 20:30:27 UTC
Since the last report makes it sound like the virt-manager hangs are fixed, I'm closing this. If there are still issues, please open new bugs against spice-gtk.

