Version-Release number of selected component:
virt-manager-0.10.0-5.git1ffcc0cc.fc20

Additional info:
reporter:         libreport-2.1.10
backtrace_rating: 4
cmdline:          /usr/bin/python /usr/share/virt-manager/virt-manager
crash_function:   g_socket_details_from_fd
executable:       /usr/bin/python2.7
kernel:           3.12.6-300.fc20.x86_64
runlevel:         N 5
type:             CCpp
uid:              1000

Truncated backtrace:
Thread no. 1 (10 frames)
 #2 g_socket_details_from_fd at gsocket.c:368
 #3 g_socket_constructed at gsocket.c:586
 #4 g_object_new_internal at gobject.c:1785
 #5 g_object_new_valist at gobject.c:2002
 #6 g_initable_new_valist at ginitable.c:227
 #7 g_initable_new at ginitable.c:149
 #8 g_socket_new_from_fd at gsocket.c:1072
 #9 spice_channel_coroutine at spice-channel.c:2228
 #10 coroutine_trampoline at coroutine_ucontext.c:63
 #11 continuation_trampoline at continuation.c:55
Created attachment 846880 [details] File: backtrace
Created attachment 846881 [details] File: cgroup
Created attachment 846882 [details] File: core_backtrace
Created attachment 846883 [details] File: dso_list
Created attachment 846884 [details] File: environ
Created attachment 846885 [details] File: limits
Created attachment 846886 [details] File: maps
Created attachment 846887 [details] File: open_fds
Created attachment 846888 [details] File: proc_pid_status
Created attachment 846889 [details] File: var_log_messages
Cole, I'm not sure this is a spice-gtk issue. Looking at the backtrace, the error is "creating GSocket from fd 11: Socket operation on non-socket", and fd 11 is a pipe here. This fd comes from the client application (i.e. virt-manager) through the spice_channel_open_fd API or the SpiceChannel::open-fd signal. I don't know whether virt-manager is passing a bad file descriptor, or whether we used to accept such file descriptors and something changed in spice-gtk or gio that made them invalid. I'm not sure how this bug can be reproduced.
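For reference, a minimal sketch of the failing step: g_socket_new_from_fd() only accepts descriptors that are real sockets, and a pipe fd fails the SO_TYPE query inside g_socket_details_from_fd() with ENOTSOCK ("Socket operation on non-socket"). A check like the following on the client side would catch such an fd before GLib does. This is a hypothetical helper, not code from virt-manager or spice-gtk:

#include <gio/gio.h>
#include <sys/socket.h>

/* Hypothetical guard: verify that 'fd' really is a socket before
 * wrapping it in a GSocket; a pipe fd makes the SO_TYPE query fail
 * with ENOTSOCK ("Socket operation on non-socket"). */
static GSocket *
wrap_fd_checked(int fd, GError **error)
{
    int type;
    socklen_t len = sizeof(type);

    if (getsockopt(fd, SOL_SOCKET, SO_TYPE, &type, &len) != 0) {
        g_set_error(error, G_IO_ERROR, G_IO_ERROR_FAILED,
                    "fd %d is not a socket", fd);
        return NULL;
    }
    return g_socket_new_from_fd(fd, error);
}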
Alexander, is this reproducible at all, or have you hit it more than once?
I got a similar backtrace today and ABRT redirected me here. I was using the F20 virt-manager to connect to libvirt on RHEL 6.5 and was managing one VM there heavily (restoring snapshots, playing with grub/kernel options inside the VM, etc. - one of my VMs died). I cannot reproduce it and I don't remember exactly _what_ action caused the crash :(. Managing a RHEL 6 libvirt with the F20 virt-manager works surprisingly well most of the time.
So it doesn't sound like it's happening at graphical connect time or anything, and I'm not sure what virt-manager could be doing to tickle this. I don't have any ideas about where the bug might be, though.
Strange bug. I was remotely installing two Fedora 20 VMs (one x86_64 and one i686) on another computer that also runs Fedora 20 x86_64. After the installation was done I hit the 'reboot' button in anaconda and then closed the two virt-manager windows for the VMs. The connection was through VNC. It took about 2 seconds to close the windows, and then the error got reported.
I just triggered this one. It happened when the remote guest rebooted. I had two virt-managers open on that host at the time, and the other virt-manager picked up the spice stream instead of the one that crashed. Dave
Just triggered it again (on 1.0.0-3.fc20 this time) - this was at the end of an install, just after the guest had rebooted and the remote display was being reopened.
For me it looks like the fd is non-existent:

#0  0x00000037ba2504e9 in g_logv (log_domain=0x3212ef36b8 "GLib-GIO", log_level=G_LOG_LEVEL_ERROR, format=<optimized out>, args=args@entry=0x7f566dffc3a0) at gmessages.c:989
        domain = 0x0
        data = 0x0
        depth = 1
        log_func = 0x37ba24fc80 <g_log_default_handler>
        domain_fatal_mask = <optimized out>
        masquerade_fatal = 0
        test_level = <optimized out>
        was_fatal = <optimized out>
        was_recursion = <optimized out>
        msg = 0x8fadc00 "creating GSocket from fd 39: Bad file descriptor\n"
        msg_alloc = 0x8fadc00 "creating GSocket from fd 39: Bad file descriptor\n"
        i = 2
#1  0x00000037ba25063f in g_log (log_domain=log_domain@entry=0x3212ef36b8 "GLib-GIO", log_level=log_level@entry=G_LOG_LEVEL_ERROR, format=format@entry=0x3212f00720 "creating GSocket from fd %d: %s\n") at gmessages.c:1025
        args = {{gp_offset = 40, fp_offset = 48, overflow_arg_area = 0x7f566dffc480, reg_save_area = 0x7f566dffc3c0}}
#2  0x0000003212e73437 in g_socket_details_from_fd (socket=0x47e3bb0) at gsocket.c:368
        fd = 39
        value = 0
        family = 0
        address = {ss_family = 56192, __ss_align = 16, __ss_padding = .... }
        addrlen = 0
        errsv = 9
#3  g_socket_constructed (object=0x47e3bb0) at gsocket.c:586
        socket = 0x47e3bb0
#4  0x00000037bb2155ea in g_object_new_internal (class=class@entry=0x7f56b40135b0, params=params@entry=0x7f566dffc6f0, n_params=1) at gobject.c:1785
        nqueue = 0x4b40580
        object = 0x47e3bb0
        __FUNCTION__ = "g_object_new_internal"
#5  0x00000037bb217814 in g_object_new_valist (object_type=object_type@entry=140010363892864, first_property_name=first_property_name@entry=0x3212f0980e "fd", var_args=var_args@entry=0x7f566dffc870) at gobject.c:2002
        stack_params = {{pspec = 0x7f56b4013820, value = 0x7f566dffc640}, {pspec = 0x0, value = 0x0} <repeats 15 times>}
        params = 0x7f566dffc6f0
        name = <optimized out>
        n_params = 1
        class = <optimized out>
        unref_class = <optimized out>
        object = <optimized out>
        __PRETTY_FUNCTION__ = "g_object_new_valist"
        __FUNCTION__ = "g_object_new_valist"
#6  0x0000003212e59c19 in g_initable_new_valist (object_type=140010363892864, first_property_name=first_property_name@entry=0x3212f0980e "fd", var_args=var_args@entry=0x7f566dffc870, cancellable=cancellable@entry=0x0, error=error@entry=0x0) at ginitable.c:227
        obj = <optimized out>
        __PRETTY_FUNCTION__ = "g_initable_new_valist"
#7  0x0000003212e59d19 in g_initable_new (object_type=<optimized out>, cancellable=cancellable@entry=0x0, error=error@entry=0x0, first_property_name=first_property_name@entry=0x3212f0980e "fd") at ginitable.c:149
        object = <optimized out>
        var_args = {{gp_offset = 48, fp_offset = 48, overflow_arg_area = 0x7f566dffc950, reg_save_area = 0x7f566dffc890}}
#8  0x0000003212e70e52 in g_socket_new_from_fd (fd=<optimized out>, error=error@entry=0x0) at gsocket.c:1072
No locals.
#9  0x00007f56b1ad5806 in spice_channel_coroutine (data=0x5a79130) at spice-channel.c:2228
        channel = 0x5a79130
        c = 0x5a78750
        verify = <optimized out>
        rc = <optimized out>
        delay_val = 1
        switch_tls = 0
        switch_protocol = 0

(gdb) p c->fd
$4 = 39

yet the open_fds list that abrt captured only goes up to 38. Looking at virt-manager's log:

[Tue, 04 Mar 2014 10:49:00 virt-manager 5674] DEBUG (console:187) Close tunnel PID=5899 OUTFD=85 ERRFD=87
[Tue, 04 Mar 2014 10:49:00 virt-manager 5674] DEBUG (console:187) Close tunnel PID=5901 OUTFD=89 ERRFD=91
[Tue, 04 Mar 2014 10:49:00 virt-manager 5674] DEBUG (console:1348) Viewer disconnected
[Tue, 04 Mar 2014 10:49:01 virt-manager 5674] DEBUG (console:1469) Starting connect process for proto=spice trans=ssh connhost=vl403 connuser=root connport=None gaddr=127.0.0.1 gport=5900 gtlsport=None gsocket=None
[Tue, 04 Mar 2014 10:49:03 virt-manager 5674] DEBUG (console:271) Creating SSH tunnel: ssh -l root vl403 sh -c 'nc -q 2>&1 | grep "requires an argument" >/dev/null;if [ $? -eq 0 ] ; then CMD="nc -q 0 127.0.0.1 5900";else CMD="nc 127.0.0.1 5900";fi;eval "$CMD";'
[Tue, 04 Mar 2014 10:49:03 virt-manager 5674] DEBUG (console:291) Open tunnel PID=6172 OUTFD=39 ERRFD=43

and that's the last message before it died. So maybe the tunnel dropped for some reason while it was being opened (I do seem to see it opening multiple tunnels for one machine - why? It takes ages to initially open the connection). I don't think it's reasonable for a dead fd on a channel to cause a trap that kills the application when I have multiple VMs open - surely it should just return an error upwards.
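As the backtrace shows, the failure surfaces as a G_LOG_LEVEL_ERROR message from GLib-GIO, which is fatal, so by the time g_socket_details_from_fd() runs the caller has no chance to turn it into a recoverable error. A hedged sketch of the kind of pre-check the fd could get before being handed to the channel (a hypothetical helper, not actual virt-manager or spice-gtk code): verify the descriptor is still open with fcntl(F_GETFD) and report a normal error instead.

#include <errno.h>
#include <fcntl.h>
#include <glib.h>

/* Hypothetical pre-check: reject an fd that has already been closed
 * (e.g. a tunnel that died while the channel was still connecting)
 * before it reaches g_socket_new_from_fd(), where a bad fd currently
 * ends in a fatal G_LOG_LEVEL_ERROR rather than a returned error. */
static gboolean
fd_is_usable(int fd, GError **error)
{
    if (fcntl(fd, F_GETFD) == -1) {
        g_set_error(error, G_FILE_ERROR, g_file_error_from_errno(errno),
                    "fd %d is not usable: %s", fd, g_strerror(errno));
        return FALSE;
    }
    return TRUE;
}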
Same crash again, on virt-manager-1.0.1-1.fc20.noarch
*** This bug has been marked as a duplicate of bug 1135546 ***