I noticed this in the systemd journal logs when I restarted libvirtd:

Feb 03 12:33:16 tesla kernel: libvirtd[19169]: segfault at 0 ip 00007f265b231b4c sp 00007f263e762ad0 error 4 in libvirt.so.0.1002.11[7f265b175000+363000]

Version
-------

$ uname -r; rpm -q libvirt qemu
3.17.8-300.fc21.x86_64
libvirt-1.2.11-1.fc21.x86_64
qemu-2.1.2-7.fc21.x86_64

Reproducer
----------

I can consistently reproduce this on my Fedora 21 laptop, just by restarting libvirtd:

$ systemctl restart libvirtd

Actual Result
-------------

Status of the libvirt daemon:

~~~~~~~
$ systemctl status libvirtd -l
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled)
   Active: failed (Result: start-limit) since Tue 2015-02-03 14:13:24 CET; 1min 53s ago
     Docs: man:libvirtd(8)
           http://libvirt.org
  Process: 8562 ExecStart=/usr/sbin/libvirtd $LIBVIRTD_ARGS (code=killed, signal=SEGV)
 Main PID: 8562 (code=killed, signal=SEGV)
   CGroup: /system.slice/libvirtd.service
           ├─1857 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/openstackvms.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
           ├─1858 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/openstackvms.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
           ├─1986 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper
           └─1987 /sbin/dnsmasq --conf-file=/var/lib/libvirt/dnsmasq/default.conf --leasefile-ro --dhcp-script=/usr/libexec/libvirt_leaseshelper

Feb 03 14:13:23 tesla systemd[1]: libvirtd.service: main process exited, code=killed, status=11/SEGV
Feb 03 14:13:23 tesla systemd[1]: Unit libvirtd.service entered failed state.
Feb 03 14:13:23 tesla systemd[1]: libvirtd.service failed.
Feb 03 14:13:24 tesla systemd[1]: start request repeated too quickly for libvirtd.service
Feb 03 14:13:24 tesla systemd[1]: Failed to start Virtualization daemon.
Feb 03 14:13:24 tesla systemd[1]: Unit libvirtd.service entered failed state.
Feb 03 14:13:24 tesla systemd[1]: libvirtd.service failed.
~~~~~~~

From `journalctl`:

$ journalctl -f
[. . .]
Feb 03 14:09:18 tesla dnsmasq[1986]: read /etc/hosts - 15 addresses
Feb 03 14:09:18 tesla dnsmasq[1857]: read /etc/hosts - 15 addresses
Feb 03 14:09:18 tesla dnsmasq[1986]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses
Feb 03 14:09:18 tesla dnsmasq[1857]: read /var/lib/libvirt/dnsmasq/openstackvms.addnhosts - 0 addresses
Feb 03 14:09:18 tesla dnsmasq-dhcp[1857]: read /var/lib/libvirt/dnsmasq/openstackvms.hostsfile
Feb 03 14:09:18 tesla dnsmasq-dhcp[1986]: read /var/lib/libvirt/dnsmasq/default.hostsfile
Feb 03 14:09:18 tesla kernel: SELinux: initialized (dev mqueue, type mqueue), uses transition SIDs
Feb 03 14:09:18 tesla kernel: SELinux: initialized (dev proc, type proc), uses genfs_contexts
Feb 03 14:09:18 tesla kernel: SELinux: initialized (dev mqueue, type mqueue), uses transition SIDs
Feb 03 14:09:18 tesla kernel: SELinux: initialized (dev proc, type proc), uses genfs_contexts
Feb 03 14:09:18 tesla kernel: libvirtd[23764]: segfault at 0 ip 00007f1e84aebb4c sp 00007f1e6801cad0 error 4 in libvirt.so.0.1002.11[7f1e84a2f000+363000]
Feb 03 14:09:18 tesla abrt-hook-ccpp[23841]: Not saving repeating crash in '/usr/sbin/libvirtd'
Feb 03 14:09:18 tesla systemd[1]: libvirtd.service: main process exited, code=killed, status=11/SEGV
Feb 03 14:09:18 tesla systemd[1]: Unit libvirtd.service entered failed state.
Feb 03 14:09:18 tesla systemd[1]: libvirtd.service failed.
Feb 03 14:09:19 tesla systemd[1]: start request repeated too quickly for libvirtd.service
Feb 03 14:09:19 tesla systemd[1]: Failed to start Virtualization daemon.
Feb 03 14:09:19 tesla systemd[1]: Unit libvirtd.service entered failed state.
Feb 03 14:09:19 tesla systemd[1]: libvirtd.service failed.
[. . .]

Failure from libvirt debug logs
-------------------------------

[. . .]
2015-02-03 13:13:23.583+0000: 8573: debug : virPCIDeviceConfigOpen:312 : 8086 9c03 0000:00:1f.2: opened /sys/bus/pci/devices/0000:00:1f.2/config
2015-02-03 13:13:23.583+0000: 8573: debug : virPCIDeviceFindCapabilityOffset:540 : 8086 9c03 0000:00:1f.2: failed to find cap 0x10
2015-02-03 13:13:23.583+0000: 8573: debug : virPCIDeviceFindCapabilityOffset:533 : 8086 9c03 0000:00:1f.2: found cap 0x01 at 0x70
2015-02-03 13:13:23.583+0000: 8573: debug : virPCIDeviceFindCapabilityOffset:540 : 8086 9c03 0000:00:1f.2: failed to find cap 0x13
[. . .]
Please attach a backtrace of the crash.
There we go:

$ gdb libvirtd $(pidof libvirtd)
GNU gdb (GDB) Fedora 7.8.2-38.fc21
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
<http://www.gnu.org/software/gdb/bugs/>.
Find the GDB manual and other documentation resources online at:
<http://www.gnu.org/software/gdb/documentation/>.
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from libvirtd...Reading symbols from /usr/lib/debug/usr/sbin/libvirtd.debug...done.
done.
(gdb) r
Starting program: /usr/sbin/libvirtd
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
2015-02-03 15:05:22.725+0000: 5728: info : libvirt version: 1.2.11, package: 1.fc21 (Unknown, 2014-12-13-04:59:24, intel-sharkbay-dh-07.ml3.eng.bos.redhat.com)
2015-02-03 15:05:22.725+0000: 5728: debug : virLogParseOutputs:1104 : outputs=1:file:/var/log/libvirt/libvirtd.log
[New Thread 0x7fffe7701700 (LWP 5732)]
[New Thread 0x7fffe6f00700 (LWP 5733)]
[New Thread 0x7fffe66ff700 (LWP 5734)]
[New Thread 0x7fffe5efe700 (LWP 5735)]
[New Thread 0x7fffe56fd700 (LWP 5736)]
[New Thread 0x7fffe4efc700 (LWP 5737)]
[New Thread 0x7fffe46fb700 (LWP 5738)]
[New Thread 0x7fffe3efa700 (LWP 5739)]
[New Thread 0x7fffe36f9700 (LWP 5740)]
[New Thread 0x7fffe2ef8700 (LWP 5741)]
[New Thread 0x7fffda9dc700 (LWP 5742)]
Detaching after fork from child process 5743.
Detaching after fork from child process 5744.
Detaching after fork from child process 5745.
Detaching after fork from child process 5746.
Detaching after fork from child process 5747.
Detaching after fork from child process 5813.
Detaching after fork from child process 5814.
Detaching after fork from child process 5815.
Detaching after fork from child process 5818.
Detaching after fork from child process 5821.
Detaching after fork from child process 5824.
Detaching after fork from child process 5827.
Detaching after fork from child process 5830.
Detaching after fork from child process 5833.
Detaching after fork from child process 5836.
Detaching after fork from child process 5839.
Detaching after fork from child process 5842.
Detaching after fork from child process 5845.
Detaching after fork from child process 5848.
Detaching after fork from child process 5851.
Detaching after fork from child process 5854.
Detaching after fork from child process 5857.
Detaching after fork from child process 5860.
Detaching after fork from child process 5863.
Detaching after fork from child process 5866.
Detaching after fork from child process 5869.
Detaching after fork from child process 5872.
Detaching after fork from child process 5875.
Detaching after fork from child process 5878.
Detaching after fork from child process 5881.
Detaching after fork from child process 5884.
Detaching after fork from child process 5887.
Detaching after fork from child process 5890.
Detaching after fork from child process 5893.
Detaching after fork from child process 5894.
Detaching after fork from child process 5895.
Detaching after fork from child process 5896.

Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffda9dc700 (LWP 5742)]
0x00007ffff74aab4c in virStorageSourceParseBackingURI (path=<optimized out>, src=0x7fffd424bf00) at util/virstoragefile.c:2174
2174            if (VIR_STRDUP(src->path,
Missing separate debuginfos, use: debuginfo-install audit-libs-2.4.1-1.fc21.x86_64 augeas-libs-1.3.0-1.fc21.x86_64 avahi-libs-0.6.31-30.fc21.x86_64 boost-system-1.55.0-8.fc21.x86_64 boost-thread-1.55.0-8.fc21.x86_64 bzip2-libs-1.0.6-14.fc21.x86_64 cyrus-sasl-gssapi-2.1.26-19.fc21.x86_64 cyrus-sasl-lib-2.1.26-19.fc21.x86_64 cyrus-sasl-md5-2.1.26-19.fc21.x86_64 cyrus-sasl-plain-2.1.26-19.fc21.x86_64 cyrus-sasl-scram-2.1.26-19.fc21.x86_64 dbus-libs-1.8.14-1.fc21.x86_64 device-mapper-libs-1.02.90-1.fc21.x86_64 elfutils-libelf-0.161-2.fc21.x86_64 elfutils-libs-0.161-2.fc21.x86_64 fuse-libs-2.9.3-4.fc21.x86_64 glusterfs-api-3.5.3-1.fc21.x86_64 glusterfs-libs-3.5.3-1.fc21.x86_64 gmp-6.0.0-7.fc21.x86_64 gnutls-3.3.12-1.fc21.x86_64 keyutils-libs-1.5.9-4.fc21.x86_64 libatomic_ops-7.4.2-4.fc21.x86_64 libblkid-2.25.2-2.fc21.x86_64 libcap-ng-0.7.4-7.fc21.x86_64 libcurl-7.37.0-12.fc21.x86_64 libdb-5.3.28-9.fc21.x86_64 libffi-3.1-6.fc21.x86_64 libgcc-4.9.2-1.fc21.x86_64 libgcrypt-1.6.1-7.fc21.x86_64 libgpg-error-1.13-3.fc21.x86_64 libidn-1.28-5.fc21.x86_64 libnl3-3.2.25-5.fc21.x86_64 libpcap-1.6.2-1.fc21.x86_64 libpciaccess-0.13.3-0.3.fc21.x86_64 librados2-0.80.7-3.fc21.x86_64 librbd1-0.80.7-3.fc21.x86_64 libselinux-2.3-5.fc21.x86_64 libsepol-2.3-4.fc21.x86_64 libssh2-1.4.3-16.fc21.x86_64 libstdc++-4.9.2-1.fc21.x86_64 libtasn1-4.2-1.fc21.x86_64 libuuid-2.25.2-2.fc21.x86_64 libwsman1-2.4.6-3.fc21.x86_64 libxml2-2.9.1-6.fc21.x86_64 libxslt-1.1.28-8.fc21.x86_64 netcf-libs-0.2.6-2.fc21.x86_64 nettle-2.7.1-5.fc21.x86_64 nspr-4.10.7-1.fc21.x86_64 nss-3.17.3-2.fc21.x86_64 nss-mdns-0.10-15.fc21.x86_64 nss-softokn-freebl-3.17.3-1.fc21.x86_64 nss-util-3.17.3-1.fc21.x86_64 numactl-libs-2.0.9-4.fc21.x86_64 openldap-2.4.40-2.fc21.x86_64 openssl-libs-1.0.1k-1.fc21.x86_64 p11-kit-0.22.1-1.fc21.x86_64 pcre-8.35-8.fc21.x86_64 python-libs-2.7.8-7.fc21.x86_64 systemd-libs-216-16.fc21.x86_64 trousers-0.3.13-3.fc21.x86_64 xen-libs-4.4.1-12.fc21.x86_64 xz-libs-5.1.2-14alpha.fc21.x86_64 yajl-2.1.0-3.fc21.x86_64 zlib-1.2.8-7.fc21.x86_64
(gdb) thread apply all bt full

Thread 12 (Thread 0x7fffda9dc700 (LWP 5742)):
#0  0x00007ffff74aab4c in virStorageSourceParseBackingURI (path=<optimized out>, src=0x7fffd424bf00) at util/virstoragefile.c:2174
        ret = -1
        uri = 0x7fffd424d1f0
        scheme = 0x7fffd424eff0
#1  virStorageSourceNewFromBackingAbsolute (path=<optimized out>) at util/virstoragefile.c:2526
        ret = 0x7fffd424bf00
#2  virStorageSourceNewFromBacking (parent=parent@entry=0x7fffd4247ad0) at util/virstoragefile.c:2552
        st = {st_dev = 140736861158288, st_ino = 0, st_nlink = 64774, st_mode = 2363323, st_uid = 0, st_gid = 1, __pad0 = 0, st_rdev = 33188, st_size = 0, st_blksize = 0, st_blocks = 197632, st_atim = {tv_sec = 4096, tv_nsec = 392}, st_mtim = {tv_sec = 1422971631, tv_nsec = 224412497}, st_ctim = {tv_sec = 1422654908, tv_nsec = 92150077}, __glibc_reserved = {1422654908, 92150077, 0}}
        ret = <optimized out>
#3  0x00007fffe10a9895 in virStorageBackendProbeTarget (encryption=0x7fffd424fe98, target=0x7fffd424fe48) at storage/storage_backend_fs.c:99
        fd = 23
        ret = -1
        backingStoreFormat = -1
        rc = <optimized out>
        meta = 0x7fffd4247ad0
        sb = {st_dev = 64774, st_ino = 2363323, st_nlink = 1, st_mode = 33188, st_uid = 0, st_gid = 0, __pad0 = 0, st_rdev = 0, st_size = 197632, st_blksize = 4096, st_blocks = 392, st_atim = {tv_sec = 1422971631, tv_nsec = 224412497}, st_mtim = {tv_sec = 1422654908, tv_nsec = 92150077}, st_ctim = {tv_sec = 1422654908, tv_nsec = 92150077}, __glibc_reserved = {0, 0, 0}}
#4  virStorageBackendFileSystemRefresh (conn=<optimized out>, pool=0x7fffd40d9530) at storage/storage_backend_fs.c:880
        dir = 0x7fffd42223c0
        ent = 0x7fffd4222478
        sb = {f_bsize = 64774, f_frsize = 2363323, f_blocks = 1, f_bfree = 33188, f_bavail = 0, f_files = 0, f_ffree = 197632, f_favail = 4096, f_fsid = 392, f_flag = 1422971631, f_namemax = 224412497, __f_spare = {1422654908, 0, 92150077, 0, 1422654908, 0}}
        vol = 0x7fffd424fe10
        direrr = <optimized out>
        __FUNCTION__ = "virStorageBackendFileSystemRefresh"
#5  0x00007fffe109e0db in storageDriverAutostart () at storage/storage_driver.c:128
        pool = 0x7fffd40d9530
        backend = 0x7fffe12cf500 <virStorageBackendDirectory>
        started = true
        i = 1
        conn = 0x7fffd421b0e0
        __func__ = "storageDriverAutostart"
#6  0x00007fffe109e3aa in storageStateAutoStart () at storage/storage_driver.c:218
No locals.
#7  0x00007ffff753e37f in virStateInitialize (privileged=true, callback=0x55555556aa70 <daemonInhibitCallback>, opaque=0x5555557ea8f0) at libvirt.c:758
        i = 3
        __func__ = "virStateInitialize"
#8  0x000055555556aacb in daemonRunStateInit (opaque=opaque@entry=0x5555557ea8f0) at libvirtd.c:918
        srv = 0x5555557ea8f0
        sysident = 0x7fffd4000910
        __func__ = "daemonRunStateInit"
#9  0x00007ffff74b026e in virThreadHelper (data=<optimized out>) at util/virthread.c:197
        args = 0x0
        local = {func = 0x55555556aa90 <daemonRunStateInit>, opaque = 0x5555557ea8f0}
#10 0x00007ffff42b252a in start_thread (arg=0x7fffda9dc700) at pthread_create.c:310
        __res = <optimized out>
        pd = 0x7fffda9dc700
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140736861161216, 3769109327082755901, 140737488346113, 4096, 140736861161216, 140736861161920, -3769189847049103555, -3769098780620941507}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#11 0x00007ffff3fee79d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
No locals.

Thread 11 (Thread 0x7fffe2ef8700 (LWP 5741)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
No locals.
#1  0x00007ffff74b04b6 in virCondWait (c=c@entry=0x5555557eab78, m=m@entry=0x5555557eaab8) at util/virthread.c:153
        ret = <optimized out>
#2  0x00007ffff74b096b in virThreadPoolWorker (opaque=opaque@entry=0x5555557de030) at util/virthreadpool.c:104
        data = 0x0
        pool = 0x5555557eaa80
        cond = 0x5555557eab78
        priority = true
        job = 0x0
#3  0x00007ffff74b026e in virThreadHelper (data=<optimized out>) at util/virthread.c:197
        args = 0x0
        local = {func = 0x7ffff74b0780 <virThreadPoolWorker>, opaque = 0x5555557de030}
#4  0x00007ffff42b252a in start_thread (arg=0x7fffe2ef8700) at pthread_create.c:310
        __res = <optimized out>
        pd = 0x7fffe2ef8700
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737000736512, 3769109327082755901, 140737488345601, 4096, 140737000736512, 140737000737216, -3769137781308057795, -3769098780620941507}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#5  0x00007ffff3fee79d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
No locals.

Thread 10 (Thread 0x7fffe36f9700 (LWP 5740)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
No locals.
#1  0x00007ffff74b04b6 in virCondWait (c=c@entry=0x5555557eab78, m=m@entry=0x5555557eaab8) at util/virthread.c:153
        ret = <optimized out>
#2  0x00007ffff74b096b in virThreadPoolWorker (opaque=opaque@entry=0x5555557de1d0) at util/virthreadpool.c:104
        data = 0x0
        pool = 0x5555557eaa80
        cond = 0x5555557eab78
        priority = true
        job = 0x0
#3  0x00007ffff74b026e in virThreadHelper (data=<optimized out>) at util/virthread.c:197
        args = 0x0
        local = {func = 0x7ffff74b0780 <virThreadPoolWorker>, opaque = 0x5555557de1d0}
#4  0x00007ffff42b252a in start_thread (arg=0x7fffe36f9700) at pthread_create.c:310
        __res = <optimized out>
        pd = 0x7fffe36f9700
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737009129216, 3769109327082755901, 140737488345601, 4096, 140737009129216, 140737009129920, -3769136675890849987, -3769098780620941507}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#5  0x00007ffff3fee79d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
No locals.

Thread 9 (Thread 0x7fffe3efa700 (LWP 5739)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
No locals.
#1  0x00007ffff74b04b6 in virCondWait (c=c@entry=0x5555557eab78, m=m@entry=0x5555557eaab8) at util/virthread.c:153
        ret = <optimized out>
#2  0x00007ffff74b096b in virThreadPoolWorker (opaque=opaque@entry=0x5555557de030) at util/virthreadpool.c:104
        data = 0x0
        pool = 0x5555557eaa80
        cond = 0x5555557eab78
        priority = true
        job = 0x0
#3  0x00007ffff74b026e in virThreadHelper (data=<optimized out>) at util/virthread.c:197
        args = 0x0
        local = {func = 0x7ffff74b0780 <virThreadPoolWorker>, opaque = 0x5555557de030}
#4  0x00007ffff42b252a in start_thread (arg=0x7fffe3efa700) at pthread_create.c:310
        __res = <optimized out>
        pd = 0x7fffe3efa700
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737017521920, 3769109327082755901, 140737488345601, 4096, 140737017521920, 140737017522624, -3769135576916093123, -3769098780620941507}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#5  0x00007ffff3fee79d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
No locals.

Thread 8 (Thread 0x7fffe46fb700 (LWP 5738)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
No locals.
#1  0x00007ffff74b04b6 in virCondWait (c=c@entry=0x5555557eab78, m=m@entry=0x5555557eaab8) at util/virthread.c:153
        ret = <optimized out>
#2  0x00007ffff74b096b in virThreadPoolWorker (opaque=opaque@entry=0x5555557de1d0) at util/virthreadpool.c:104
        data = 0x0
        pool = 0x5555557eaa80
        cond = 0x5555557eab78
        priority = true
        job = 0x0
#3  0x00007ffff74b026e in virThreadHelper (data=<optimized out>) at util/virthread.c:197
        args = 0x0
        local = {func = 0x7ffff74b0780 <virThreadPoolWorker>, opaque = 0x5555557de1d0}
#4  0x00007ffff42b252a in start_thread (arg=0x7fffe46fb700) at pthread_create.c:310
        __res = <optimized out>
        pd = 0x7fffe46fb700
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737025914624, 3769109327082755901, 140737488345601, 4096, 140737025914624, 140737025915328, -3769134475793852611, -3769098780620941507}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#5  0x00007ffff3fee79d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
No locals.

Thread 7 (Thread 0x7fffe4efc700 (LWP 5737)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
No locals.
#1  0x00007ffff74b04b6 in virCondWait (c=c@entry=0x5555557eab78, m=m@entry=0x5555557eaab8) at util/virthread.c:153
        ret = <optimized out>
#2  0x00007ffff74b096b in virThreadPoolWorker (opaque=opaque@entry=0x5555557de030) at util/virthreadpool.c:104
        data = 0x0
        pool = 0x5555557eaa80
        cond = 0x5555557eab78
        priority = true
        job = 0x0
#3  0x00007ffff74b026e in virThreadHelper (data=<optimized out>) at util/virthread.c:197
        args = 0x0
        local = {func = 0x7ffff74b0780 <virThreadPoolWorker>, opaque = 0x5555557de030}
#4  0x00007ffff42b252a in start_thread (arg=0x7fffe4efc700) at pthread_create.c:310
        __res = <optimized out>
        pd = 0x7fffe4efc700
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737034307328, 3769109327082755901, 140737488345601, 4096, 140737034307328, 140737034308032, -3769133376819095747, -3769098780620941507}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#5  0x00007ffff3fee79d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
No locals.

Thread 6 (Thread 0x7fffe56fd700 (LWP 5736)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
No locals.
#1  0x00007ffff74b04b6 in virCondWait (c=c@entry=0x5555557eaae0, m=m@entry=0x5555557eaab8) at util/virthread.c:153
        ret = <optimized out>
#2  0x00007ffff74b096b in virThreadPoolWorker (opaque=opaque@entry=0x5555557de030) at util/virthreadpool.c:104
        data = 0x0
        pool = 0x5555557eaa80
        cond = 0x5555557eaae0
        priority = false
        job = 0x0
#3  0x00007ffff74b026e in virThreadHelper (data=<optimized out>) at util/virthread.c:197
        args = 0x0
        local = {func = 0x7ffff74b0780 <virThreadPoolWorker>, opaque = 0x5555557de030}
#4  0x00007ffff42b252a in start_thread (arg=0x7fffe56fd700) at pthread_create.c:310
        __res = <optimized out>
        pd = 0x7fffe56fd700
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737042700032, 3769109327082755901, 140737488345601, 4096, 140737042700032, 140737042700736, -3769132279991822531, -3769098780620941507}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#5  0x00007ffff3fee79d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
No locals.

Thread 5 (Thread 0x7fffe5efe700 (LWP 5735)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
No locals.
#1  0x00007ffff74b04b6 in virCondWait (c=c@entry=0x5555557eaae0, m=m@entry=0x5555557eaab8) at util/virthread.c:153
        ret = <optimized out>
#2  0x00007ffff74b096b in virThreadPoolWorker (opaque=opaque@entry=0x5555557de1d0) at util/virthreadpool.c:104
        data = 0x0
        pool = 0x5555557eaa80
        cond = 0x5555557eaae0
        priority = false
        job = 0x0
#3  0x00007ffff74b026e in virThreadHelper (data=<optimized out>) at util/virthread.c:197
        args = 0x0
        local = {func = 0x7ffff74b0780 <virThreadPoolWorker>, opaque = 0x5555557de1d0}
#4  0x00007ffff42b252a in start_thread (arg=0x7fffe5efe700) at pthread_create.c:310
        __res = <optimized out>
        pd = 0x7fffe5efe700
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737051092736, 3769109327082755901, 140737488345601, 4096, 140737051092736, 140737051093440, -3769131181017065667, -3769098780620941507}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#5  0x00007ffff3fee79d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
No locals.

Thread 4 (Thread 0x7fffe66ff700 (LWP 5734)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
No locals.
#1  0x00007ffff74b04b6 in virCondWait (c=c@entry=0x5555557eaae0, m=m@entry=0x5555557eaab8) at util/virthread.c:153
        ret = <optimized out>
#2  0x00007ffff74b096b in virThreadPoolWorker (opaque=opaque@entry=0x5555557de030) at util/virthreadpool.c:104
        data = 0x0
        pool = 0x5555557eaa80
        cond = 0x5555557eaae0
        priority = false
        job = 0x0
#3  0x00007ffff74b026e in virThreadHelper (data=<optimized out>) at util/virthread.c:197
        args = 0x0
        local = {func = 0x7ffff74b0780 <virThreadPoolWorker>, opaque = 0x5555557de030}
#4  0x00007ffff42b252a in start_thread (arg=0x7fffe66ff700) at pthread_create.c:310
        __res = <optimized out>
        pd = 0x7fffe66ff700
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737059485440, 3769109327082755901, 140737488345601, 4096, 140737059485440, 140737059486144, -3769130079894825155, -3769098780620941507}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#5  0x00007ffff3fee79d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
No locals.

Thread 3 (Thread 0x7fffe6f00700 (LWP 5733)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
No locals.
#1  0x00007ffff74b04b6 in virCondWait (c=c@entry=0x5555557eaae0, m=m@entry=0x5555557eaab8) at util/virthread.c:153
        ret = <optimized out>
#2  0x00007ffff74b096b in virThreadPoolWorker (opaque=opaque@entry=0x5555557de1d0) at util/virthreadpool.c:104
        data = 0x0
        pool = 0x5555557eaa80
        cond = 0x5555557eaae0
        priority = false
        job = 0x0
#3  0x00007ffff74b026e in virThreadHelper (data=<optimized out>) at util/virthread.c:197
        args = 0x0
        local = {func = 0x7ffff74b0780 <virThreadPoolWorker>, opaque = 0x5555557de1d0}
#4  0x00007ffff42b252a in start_thread (arg=0x7fffe6f00700) at pthread_create.c:310
        __res = <optimized out>
        pd = 0x7fffe6f00700
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737067878144, 3769109327082755901, 140737488345601, 4096, 140737067878144, 140737067878848, -3769128980920068291, -3769098780620941507}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#5  0x00007ffff3fee79d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
No locals.

Thread 2 (Thread 0x7fffe7701700 (LWP 5732)):
#0  pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
No locals.
#1  0x00007ffff74b04b6 in virCondWait (c=c@entry=0x5555557eaae0, m=m@entry=0x5555557eaab8) at util/virthread.c:153
        ret = <optimized out>
#2  0x00007ffff74b096b in virThreadPoolWorker (opaque=opaque@entry=0x5555557de1d0) at util/virthreadpool.c:104
        data = 0x0
        pool = 0x5555557eaa80
        cond = 0x5555557eaae0
        priority = false
        job = 0x0
#3  0x00007ffff74b026e in virThreadHelper (data=<optimized out>) at util/virthread.c:197
        args = 0x0
        local = {func = 0x7ffff74b0780 <virThreadPoolWorker>, opaque = 0x5555557de1d0}
#4  0x00007ffff42b252a in start_thread (arg=0x7fffe7701700) at pthread_create.c:310
        __res = <optimized out>
        pd = 0x7fffe7701700
        now = <optimized out>
        unwind_buf = {cancel_jmp_buf = {{jmp_buf = {140737076270848, 3769109327082755901, 140737488345601, 4096, 140737076270848, 140737076271552, -3769127961402206403, -3769098780620941507}, mask_was_saved = 0}}, priv = {pad = {0x0, 0x0, 0x0, 0x0}, data = {prev = 0x0, cleanup = 0x0, canceltype = 0}}}
        not_first_call = <optimized out>
        pagesize_m1 = <optimized out>
        sp = <optimized out>
        freesize = <optimized out>
#5  0x00007ffff3fee79d in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:109
No locals.

Thread 1 (Thread 0x7ffff7f82880 (LWP 5728)):
#0  0x00007ffff3fe31fd in poll () at ../sysdeps/unix/syscall-template.S:81
No locals.
#1  0x00007ffff7474b82 in poll (__timeout=-1, __nfds=7, __fds=<optimized out>) at /usr/include/bits/poll2.h:46
No locals.
#2  virEventPollRunOnce () at util/vireventpoll.c:641
        fds = 0x555555806720
        ret = <optimized out>
        timeout = -1
        nfds = 7
        __func__ = "virEventPollRunOnce"
        __FUNCTION__ = "virEventPollRunOnce"
#3  0x00007ffff74737b1 in virEventRunDefaultImpl () at util/virevent.c:308
        __func__ = "virEventRunDefaultImpl"
#4  0x000055555559a84d in virNetServerRun (srv=0x5555557ea8f0) at rpc/virnetserver.c:1139
        timerid = -1
        timerActive = false
        i = <optimized out>
        __FUNCTION__ = "virNetServerRun"
        __func__ = "virNetServerRun"
#5  0x000055555556a86b in main (argc=<optimized out>, argv=<optimized out>) at libvirtd.c:1503
        srv = 0x5555557ea8f0
        remote_config_file = 0x5555557ea360 "/etc/libvirt/libvirtd.conf"
        statuswrite = -1
        ret = 1
        pid_file_fd = 5
        pid_file = 0x5555557ea5b0 "/var/run/libvirtd.pid"
        sock_file = 0x5555557f6250 "/var/run/libvirt/libvirt-sock"
        sock_file_ro = 0x5555557f6220 "/var/run/libvirt/libvirt-sock-ro"
        timeout = -1
        verbose = 0
        godaemon = 0
        ipsock = 0
        config = 0x5555557e48a0
        privileged = <optimized out>
        implicit_conf = <optimized out>
        run_dir = 0x5555557ea540 "/var/run/libvirt"
        old_umask = <optimized out>
        opts = {{name = 0x55555559d4de "verbose", has_arg = 0, flag = 0x7fffffffdd28, val = 118}, {name = 0x55555559d4e6 "daemon", has_arg = 0, flag = 0x7fffffffdd2c, val = 100}, {name = 0x55555559d4ed "listen", has_arg = 0, flag = 0x7fffffffdd30, val = 108}, {name = 0x55555559d5f5 "config", has_arg = 1, flag = 0x0, val = 102}, {name = 0x55555559d555 "timeout", has_arg = 1, flag = 0x0, val = 116}, {name = 0x55555559d4f4 "pid-file", has_arg = 1, flag = 0x0, val = 112}, {name = 0x55555559d4fd "version", has_arg = 0, flag = 0x0, val = 86}, {name = 0x55555559d505 "help", has_arg = 0, flag = 0x0, val = 104}, {name = 0x0, has_arg = 0, flag = 0x0, val = 0}}
        __func__ = "main"
(gdb)
Some additional information
---------------------------

$ for i in `find /var/lib/libvirt/images`; do qemu-img info $i | grep "backing file:"; done
backing file: nbd://localhost
backing file: ./cirros-0.3.3-x86_64-disk.img (actual path: /var/lib/libvirt/images/./cirros-0.3.3-x86_64-disk.img)
backing file: nbd://localhost
backing file: nbd://localhost
backing file: nbd://localhost

A related thread with upstream QEMU, where QEMU segfaults ("nbd.c:nbd_receive_request():L756: read failed") when booting an overlay with backing_file hosted over NBD:

http://lists.nongnu.org/archive/html/qemu-devel/2015-01/msg04397.html

The root cause is identified here:

http://lists.nongnu.org/archive/html/qemu-devel/2015-01/msg04700.html
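Note that every crashing image is backed by plain nbd://localhost, with no export path at all. As far as I can tell, libvirt's virURIParse is a thin wrapper over libxml2's xmlParseURI, and for such a URI the parser quite legitimately returns a NULL path component. A minimal standalone sketch of that behaviour (plain libxml2; this little program and its names are mine, not anything from the libvirt tree):

~~~~~~~
/* Sketch: show that "nbd://localhost" parses to a URI with path == NULL.
 * Build (assuming the libxml2 development headers are installed):
 *   cc uri-demo.c $(xml2-config --cflags --libs) -o uri-demo
 */
#include <stdio.h>
#include <libxml/uri.h>

static void check(const char *s)
{
    xmlURIPtr uri = xmlParseURI(s);

    if (!uri) {
        fprintf(stderr, "failed to parse '%s'\n", s);
        return;
    }
    printf("%-30s scheme=%s server=%s path=%s\n",
           s, uri->scheme, uri->server,
           uri->path ? uri->path : "(NULL)");  /* NULL for nbd://localhost */
    xmlFreeURI(uri);
}

int main(void)
{
    check("nbd://localhost/exportname");  /* path = "/exportname" */
    check("nbd://localhost");             /* path = NULL */
    return 0;
}
~~~~~~~

Any code that then unconditionally dereferences uri->path, as frame #0 at util/virstoragefile.c:2174 appears to, faults at address 0, which matches the "segfault at 0" lines in the journal above.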
Proposed patch: http://www.redhat.com/archives/libvir-list/2015-February/msg00063.html
commit fdb80ed4f6563928b9942a0d1450e0c725aa6c06
Author: Peter Krempa <pkrempa>
Date:   Tue Feb 3 18:03:41 2015 +0100

    util: storage: Fix parsing of nbd:// URI without path

    If a storage file would be backed with a NBD device without path
    (nbd://localhost) libvirt would crash when parsing the backing path
    for the disk as the URI structure's path element is NULL in such
    case but the NBD parser would access it shamelessly.

v1.2.12-74-gfdb80ed
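To make the commit message concrete, here is a self-contained sketch of the broken pattern and the guard. DemoURI, DemoSource and demo_strdup are hypothetical stand-ins for libvirt's internal types and VIR_STRDUP, not the real API, and the snippet paraphrases the crash site visible in the backtrace rather than quoting the upstream diff:

~~~~~~~
/* Sketch of the crash pattern and its fix. All names here are stand-ins
 * for libvirt internals (virURI / virStorageSource / VIR_STRDUP); the
 * real change is commit fdb80ed above. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct { char *path; } DemoURI;     /* ~ virURI           */
typedef struct { char *path; } DemoSource;  /* ~ virStorageSource */

/* ~ VIR_STRDUP: copy src into *dst, tolerating a NULL source */
static int demo_strdup(char **dst, const char *src)
{
    *dst = src ? strdup(src) : NULL;
    return (src && !*dst) ? -1 : 0;
}

static int parse_backing(DemoSource *s, const DemoURI *uri)
{
#if 0
    /* Broken: "nbd://localhost" has no path component, so uri->path is
     * NULL and '*uri->path' dereferences address 0 -- exactly the
     * "segfault at 0" from the journal. */
    if (demo_strdup(&s->path,
                    *uri->path == '/' ? uri->path + 1 : uri->path) < 0)
        return -1;
#endif
    /* Fixed: touch uri->path only when the URI actually carries one. */
    if (uri->path &&
        demo_strdup(&s->path,
                    *uri->path == '/' ? uri->path + 1 : uri->path) < 0)
        return -1;
    return 0;
}

int main(void)
{
    DemoURI nopath = { NULL };   /* parse result of "nbd://localhost" */
    DemoSource src = { NULL };

    if (parse_backing(&src, &nopath) == 0)
        printf("parsed ok, path=%s\n", src.path ? src.path : "(NULL)");
    return 0;
}
~~~~~~~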
libvirt-1.2.9.2-1.fc21 has been submitted as an update for Fedora 21.
https://admin.fedoraproject.org/updates/libvirt-1.2.9.2-1.fc21
Package libvirt-1.2.9.2-1.fc21:
* should fix your issue,
* was pushed to the Fedora 21 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing libvirt-1.2.9.2-1.fc21'
as soon as you are able to.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2015-1892/libvirt-1.2.9.2-1.fc21
then log in and leave karma (feedback).
libvirt-1.2.9.2-1.fc21 has been pushed to the Fedora 21 stable repository. If problems still persist, please make note of it in this bug report.