Summary: | libvirtd segfault when reloading while starting up | ||
---|---|---|---|
Product: | Red Hat Enterprise Linux 7 | Reporter: | Hao Liu <hliu> |
Component: | libvirt | Assignee: | Pavel Hrdina <phrdina> |
Status: | CLOSED ERRATA | QA Contact: | Virtualization Bugs <virt-bugs> |
Severity: | medium | Docs Contact: | |
Priority: | medium | ||
Version: | 7.1 | CC: | dyuan, hliu, lhuang, mzhan, rbalakri |
Target Milestone: | rc | Keywords: | Upstream |
Target Release: | 7.2 | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | libvirt-1.2.13-1.el7 | Doc Type: | Bug Fix |
Doc Text: | Story Points: | --- | |
Clone Of: | Environment: | ||
Last Closed: | 2015-11-19 06:07:22 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: |
Description
Hao Liu
2015-01-08 01:03:14 UTC
Tested with the following command on other versions of libvirt.

1. On the newest RHEL 6 it works fine for at least several minutes:

    # while (( 1 )); do service libvirtd restart; service libvirtd reload; virsh list; done

2. On RHEL 7.0 with libvirt-1.1.1-29.el7.x86_64:

    # while ((1)); do systemctl reset-failed; systemctl restart libvirtd; systemctl reload libvirtd; virsh list; done

Most of the time it is fine, with the following line logged:

    journal: internal error: qemu state driver is not active

But it also fails occasionally with:

    Thread 8 (Thread 0x7f0e57909880 (LWP 15422)):
    #0  0x00007f0e5403cac0 in _int_realloc () from /lib64/libc.so.6
    #1  0x00007f0e5403d702 in realloc () from /lib64/libc.so.6
    #2  0x00007f0e55e35f91 in xmlParseComment () from /lib64/libxml2.so.2
    #3  0x00007f0e55e3f4f3 in xmlParseContent () from /lib64/libxml2.so.2
    #4  0x00007f0e55e3fd33 in xmlParseElement () from /lib64/libxml2.so.2
    #5  0x00007f0e55e404aa in xmlParseDocument () from /lib64/libxml2.so.2
    #6  0x00007f0e55e40787 in xmlDoRead () from /lib64/libxml2.so.2
    #7  0x00007f0e55eefbce in xmlRelaxNGParse () from /lib64/libxml2.so.2
    #8  0x00007f0e415afc36 in rng_parse () from /lib64/libnetcf.so.1
    #9  0x00007f0e415ae787 in ncf_init () from /lib64/libnetcf.so.1
    #10 0x00007f0e417c26ba in netcfStateReload () at interface/interface_backend_netcf.c:130
    #11 0x00007f0e56f57068 in virStateReload () at libvirt.c:902
    #12 0x00007f0e57955492 in daemonReloadHandler (srv=srv@entry=0x7f0e59a40e10, sig=sig@entry=0x7fff4af18180, opaque=opaque@entry=0x0) at libvirtd.c:798
    #13 0x00007f0e56fbd60a in virNetServerSignalEvent (watch=watch@entry=2, fd=<optimized out>, events=events@entry=1, opaque=opaque@entry=0x7f0e59a40e10) at rpc/virnetserver.c:881
    #14 0x00007f0e56eb4a0d in virEventPollDispatchHandles (fds=<optimized out>, nfds=<optimized out>) at util/vireventpoll.c:498
    #15 virEventPollRunOnce () at util/vireventpoll.c:645
    #16 0x00007f0e56eb316d in virEventRunDefaultImpl () at util/virevent.c:273
    #17 0x00007f0e56fbf12d in virNetServerRun (srv=0x7f0e59a40e10) at rpc/virnetserver.c:1117
    #18 0x00007f0e579549af in main (argc=<optimized out>, argv=<optimized out>) at libvirtd.c:1517
    ...
    Thread 1 (Thread 0x7f0e3ef03700 (LWP 15434)):
    #0  0x00007f0e54005a94 in vfprintf () from /lib64/libc.so.6
    #1  0x00007f0e540ca495 in __vasprintf_chk () from /lib64/libc.so.6
    #2  0x00007f0e415af573 in xasprintf () from /lib64/libnetcf.so.1
    #3  0x00007f0e415af9b7 in parse_stylesheet () from /lib64/libnetcf.so.1
    #4  0x00007f0e415b38a9 in drv_init () from /lib64/libnetcf.so.1
    #5  0x00007f0e417c49ea in netcfStateInitialize (privileged=<optimized out>, callback=<optimized out>, opaque=<optimized out>) at interface/interface_backend_netcf.c:89
    #6  0x00007f0e56f56e8a in virStateInitialize (privileged=true, callback=callback@entry=0x7f0e57955260 <daemonInhibitCallback>, opaque=opaque@entry=0x7f0e59a40e10) at libvirt.c:848
    #7  0x00007f0e579552bb in daemonRunStateInit (opaque=opaque@entry=0x7f0e59a40e10) at libvirtd.c:908
    #8  0x00007f0e56ee1f4e in virThreadHelper (data=<optimized out>) at util/virthreadpthread.c:194
    #9  0x00007f0e5478cdf5 in start_thread () from /lib64/libpthread.so.0
    #10 0x00007f0e540b31ad in clone () from /lib64/libc.so.6

Could it be a regression?

Patch proposed upstream:
https://www.redhat.com/archives/libvir-list/2015-February/msg00643.html

Fixed upstream:

    commit 5c756e580f0ad4fd19f801e770d54167d1159162
    Author: Pavel Hrdina <phrdina>
    Date:   Wed Feb 18 16:10:58 2015 +0100

        daemon: Fix segfault by reloading daemon right after start

I can reproduce this bug with build libvirt-1.2.8-11.el7.x86_64.

1. Execute a reload of libvirtd while it is starting up; libvirtd crashes:

    # while ((1)); do systemctl reset-failed; systemctl restart libvirtd; systemctl reload libvirtd; virsh list; done
    ...
     Id    Name                           State
    ----------------------------------------------------
    error: failed to connect to the hypervisor
    error: no valid connection
    error: Cannot recv data: Connection reset by peer
    ...
2. Check the core dump info from abrt:

    # abrt-cli list | head
    The Autoreporting feature is disabled. Please consider enabling it by issuing
    'abrt-auto-reporting enabled' as a user with root privileges
    id e4059b9e7bceb686adcbf0a69ec06f112caeb00e
    reason:         libvirtd killed by SIGSEGV
    time:           Fri 26 Jun 2015 02:33:24 PM CST
    cmdline:        /usr/sbin/libvirtd
    package:        libvirt-daemon-1.2.8-11.el7
    uid:            0 (root)
    count:          1
    Directory:      /var/tmp/abrt/ccpp-2015-06-26-14:33:24-22682
    Run 'abrt-cli report /var/tmp/abrt/ccpp-2015-06-26-14:33:24-22682' for creating a case in Red Hat Customer Portal

3. Check the backtrace using gdb:

    # cd /var/tmp/abrt/ccpp-2015-06-26-14:33:24-22682
    # gdb -c coredump
    ...

Verified this bug with build libvirt-1.2.16-1.el7.x86_64 by executing a reload of libvirtd while starting it up, for 10-20 minutes:

    # while ((1)); do systemctl reset-failed; systemctl restart libvirtd; systemctl reload libvirtd; virsh list; done
     Id    Name                           State
    ----------------------------------------------------
     18    vm1                            running
    (the same output repeats for every iteration)
    ...

No libvirtd crash happened again, so this bug moves to VERIFIED.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2202.html