Description of problem:
libvirt crashes at the first connection with virt-manager.

Version-Release number of selected component (if applicable):
0.7.5

How reproducible:
Always.

Steps to Reproduce:
1. Try to connect locally with virt-manager (QEMU/KVM driver).

Actual results:
Libvirt crashes at the first connection. If I start libvirt again and try to reconnect with virt-manager it works great.

Expected results:
Libvirt should run normally.

Additional info:
Kernel log:
libvirtd[20333] general protection ip:7f7fee5727c1 sp:7f7fe9a83d58 error:0 in libc-2.10.2.so[7f7fee4f9000+14a000]
GDB output:

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffefca9f910 (LWP 27863)]
0x00007fff0158d7c1 in strlen () from /lib/libc.so.6
Created attachment 385782 [details]
backtrace of libvirtd crash on SIGSEGV

I'm seeing similar behavior, though I can't say with certainty that it's 100% reproducible on the first virt-manager connection. However, I do find that libvirtd is crashing fairly often. I've captured a backtrace following the Ubuntu guide here: https://wiki.ubuntu.com/Backtrace. I used the script at the bottom of that page to produce the attached backtrace.
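For anyone else who wants to capture the same data, the procedure is roughly the following (a sketch only; the output file name is just an example, and gdb is attached to the already-running daemon as root):

  # Attach gdb to the running daemon
  gdb /usr/sbin/libvirtd $(pidof libvirtd)
  # Log the session to a file and let the daemon keep running
  (gdb) set logging file libvirtd-backtrace.txt
  (gdb) set logging on
  (gdb) continue
  # When the SIGSEGV is reported, dump a full backtrace of every thread
  (gdb) thread apply all bt full
  (gdb) quit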
I can confirm it's the same bug; I generated a similar backtrace.
Created attachment 385921 [details]
SIGABRT on host USB device connect

Also seeing a crash with a SIGABRT when connecting an external USB drive to the host system; backtrace attached.
Further details: it appears that the connection crash isn't triggered by the first virt-manager connection after system boot, but rather by the first connection attempt from a given instance of virt-manager. That is, if I start virt-manager and attempt to connect, libvirtd crashes. If I restart libvirtd without closing virt-manager and then attempt a connection, it connects properly. However, if I then close virt-manager, reopen it, and attempt a connection, libvirtd crashes again. As this crash is 100% reproducible, it would be nice to have the priority and severity of this bug report increased and some acknowledgement that it has at least been seen. For now I have to keep libvirtd propped up with a shell script that continually relaunches it if it exits (sketched below).
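The workaround script is nothing more sophisticated than a loop along these lines (the sleep interval is arbitrary):

  #!/bin/sh
  # Crude watchdog: restart libvirtd whenever it exits
  while true; do
      /usr/sbin/libvirtd
      # brief pause so a crash loop doesn't spin the CPU
      sleep 2
  done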
Additional debugging of this problem appears to have been done in the Debian BTS: http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=565983. Their current conclusion is that this is some sort of hashmap corruption: "So, somehow the hashmap contains a node with corrupted ->name; I tried setting a watchpoint on ->name, but the node is deallocated almost immediately in remoteDispatchNodeDeviceLookupByName."
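For anyone trying to repeat that analysis, the watchpoint can be set in gdb roughly as follows (a sketch only; the local variable name "node" is an assumption about the code path named in the quoted report):

  # Stop in the dispatcher where the hash node is still live
  (gdb) break remoteDispatchNodeDeviceLookupByName
  (gdb) continue
  # Once stopped, watch the suspect field so gdb halts on any write to it
  (gdb) watch node->name
  (gdb) continue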
It sounds like you can easily reproduce the crash, so could you try to collect a trace using valgrind? As you say, it looks like memory corruption / a mistaken free, and valgrind is usually quite a lot more helpful at diagnosing that kind of problem than gdb traces. Just stop the libvirtd daemon, and then run it manually with

  # valgrind --leak-check=full /usr/sbin/libvirtd

It will take quite a long time to start up, but once it does, try your virt-manager test case again.
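Spelled out, the whole sequence is something like this (the log file path is just an example, and the stop command assumes the daemon is managed by the usual init script):

  # Stop the service-managed daemon first
  service libvirtd stop
  # Run it in the foreground under valgrind, keeping the report in a file
  valgrind --leak-check=full --log-file=/tmp/libvirtd-valgrind.log /usr/sbin/libvirtd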
While I can reproduce the problem 100% of the time when running either normally or under gdb, I don't seem to be able to reproduce it under valgrind. Startup doesn't actually take that long, only a few seconds, after which libvirtd is quite responsive to connection requests and does not segfault on the first connection from virt-manager. Ideas?
Hmm, that's rather unfortunate :-( Can you edit /etc/libvirt/libvirtd.conf and set

  log_level=1
  log_outputs="1:/var/log/libvirt.log"

and then restart libvirtd and try to make it crash? It's possible the log file will end up with some helpful info leading up to the crash.
Created attachment 386846 [details]
debug output

The logging output from the change you requested kept going to stderr rather than the log file, so I redirected it to a file, which is attached.
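Concretely, the redirect was nothing more than running the daemon by hand with stderr sent to a file, along these lines (the file name here is just illustrative):

  # Run libvirtd in the foreground with its debug output captured
  /usr/sbin/libvirtd 2> /tmp/libvirtd-debug.log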
Might be worth seeing if this commit fixes the crash: http://libvirt.org/git/?p=libvirt.git;a=commit;h=338e7c3c8d5b861f3ad376863519f3496736987e
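For anyone who wants to test it, the commit can be pulled into a local build roughly like this (a sketch assuming a git checkout of libvirt; gitweb's a=patch view is used to fetch the change, and the patch file name is just an example):

  # Fetch the commit as a patch from gitweb and apply it to a libvirt checkout
  wget -O hash-fix.patch 'http://libvirt.org/git/?p=libvirt.git;a=patch;h=338e7c3c8d5b861f3ad376863519f3496736987e'
  cd libvirt
  git am ../hash-fix.patch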
I've applied the referenced patch to my build and so far things look very good. Thank you very, very much.
I can confirm the problem is solved by the patch. Thank you.