Bug 2027400 - virtqemud crashed when start->reload->restart virtqemud
Summary: virtqemud crashed when start->reload->restart virtqemud
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Michal Privoznik
QA Contact: yafu
URL:
Whiteboard:
Depends On:
Blocks: 2035714
 
Reported: 2021-11-29 14:37 UTC by yafu
Modified: 2022-05-17 13:06 UTC
CC List: 7 users

Fixed In Version: libvirt-8.0.0-0rc1.1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2035714
Environment:
Last Closed: 2022-05-17 12:45:52 UTC
Type: Bug
Target Upstream Version: 8.0.0
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-104161 0 None None None 2021-11-29 14:42:30 UTC
Red Hat Product Errata RHBA-2022:2390 0 None None None 2022-05-17 12:46:20 UTC

Description yafu 2021-11-29 14:37:34 UTC
Description of problem:
virtqemud crashed when start->reload->restart virtqemud

Version-Release number of selected component (if applicable):
libvirt-7.9.0-1.el9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Make sure virtqemud is inactive:
# systemctl status virtqemud
○ virtqemud.service - Virtualization qemu daemon
     Loaded: loaded (/usr/lib/systemd/system/virtqemud.service; enabled; vendor preset: enabled)
     Active: inactive (dead) since Mon 2021-11-29 09:33:32 EST; 27s ago

2. Do start->reload->restart virtqemud:
# systemctl start virtqemud; systemctl reload virtqemud; systemctl restart virtqemud

3. Check the coredump info:
# coredumpctl list
TIME                           PID UID GID SIG     COREFILE EXE                   SIZE
Mon 2021-11-29 09:28:48 EST 276109   0   0 SIGSEGV present  /usr/sbin/virtqemud 590.0K

4. Check the backtrace:
Core was generated by `/usr/sbin/virtqemud --timeout 120'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  ___pthread_mutex_lock (mutex=mutex@entry=0x48) at pthread_mutex_lock.c:76
Missing separate debuginfos, use: dnf debuginfo-install libpsl-0.21.1-5.el9.x86_64 p11-kit-0.24.0-4.el9.x86_64 sssd-client-2.5.2-5.el9.x86_64 yajl-2.1.0-20.el9.x86_64
76	  unsigned int type = PTHREAD_MUTEX_TYPE_ELISION (mutex);
[Current thread is 1 (Thread 0x7f7988be6ac0 (LWP 276109))]
(gdb) t a a bt

Thread 13 (Thread 0x7f798890f640 (LWP 276110) (Exiting)):
#0  futex_wait (private=0, expected=2, futex_word=0x7f798a9e8048 <_rtld_local+4168>) at ../sysdeps/nptl/futex-internal.h:146
#1  __GI___lll_lock_wait_private (futex=futex@entry=0x7f798a9e8048 <_rtld_local+4168>) at lowlevellock.c:35
#2  0x00007f798a001b6c in __GI___nptl_deallocate_stack (pd=pd@entry=0x7f798890f640) at nptl-stack.c:113
#3  0x00007f798a001d7d in __GI___nptl_free_tcb (pd=0x7f798890f640) at nptl_free_tcb.c:42
#4  0x00007f798a004ac1 in start_thread (arg=<optimized out>) at pthread_create.c:566
#5  0x00007f798a089850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 12 (Thread 0x7f795bfff640 (LWP 276126)):
#0  0x00007f798a07c99f in __GI___poll (fds=0x7f7940006340, nfds=1, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007f798a4a85ac in g_main_context_poll (priority=<optimized out>, n_fds=1, fds=0x7f7940006340, timeout=<optimized out>, context=0x7f794000c300) at ../glib/gmain.c:4434
#2  g_main_context_iterate.constprop.0 (context=context@entry=0x7f794000c300, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4126
#3  0x00007f798a4516d3 in g_main_context_iteration (context=0x7f794000c300, may_block=may_block@entry=1) at ../glib/gmain.c:4196
#4  0x00007f798a451721 in glib_worker_main (data=<optimized out>) at ../glib/gmain.c:6089
#5  0x00007f798a482662 in g_thread_proxy (data=0x7f7940009400) at ../glib/gthread.c:826
#6  0x00007f798a004af7 in start_thread (arg=<optimized out>) at pthread_create.c:435
#7  0x00007f798a089850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 11 (Thread 0x7f797ce0a640 (LWP 276125)):
#0  0x00007f798a08235b in mprotect () at ../sysdeps/unix/syscall-template.S:117
#1  0x00007f798a012c27 in grow_heap (diff=4096, h=0x7f7940000000) at /usr/src/debug/glibc-2.34-8.el9.x86_64/malloc/arena.c:535
#2  sysmalloc (nb=nb@entry=1040, av=0x7f7940000020) at malloc.c:2535
#3  0x00007f798a013dcb in _int_malloc (av=av@entry=0x7f7940000020, bytes=bytes@entry=1024) at malloc.c:4295
#4  0x00007f798a01536e in __libc_calloc (n=n@entry=1, elem_size=elem_size@entry=1024) at malloc.c:3567
#5  0x00007f798a45c931 in g_malloc0 (n_bytes=n_bytes@entry=1024) at ../glib/gmem.c:136
#6  0x00007f798a62eac7 in virHostCPUReadSignature (arch=VIR_ARCH_X86_64, cpuinfo=0x7f794001e020, signature=0x7f7940080ca0) at ../src/util/virhostcpu.c:1426
#7  0x00007f798a633b02 in virHostCPUGetSignature (signature=signature@entry=0x7f7940080ca0) at ../src/util/virhostcpu.c:1522
#8  0x00007f797f689a66 in virQEMUCapsCacheNew (libDir=0x7f794002a2a0 "/var/lib/libvirt/qemu", cacheDir=<optimized out>, runUid=107, runGid=107) at ../src/qemu/qemu_capabilities.c:5563
#9  0x00007f797f6d2542 in qemuStateInitialize (privileged=true, root=<optimized out>, callback=<optimized out>, opaque=<optimized out>) at ../src/qemu/qemu_driver.c:822
#10 0x00007f798a8167cf in virStateInitialize (opaque=0x5593cf817820, callback=0x5593cdef3010 <daemonInhibitCallback>, root=0x0, mandatory=<optimized out>, privileged=true) at ../src/libvirt.c:656
#11 virStateInitialize (privileged=<optimized out>, mandatory=true, root=0x0, callback=0x5593cdef3010 <daemonInhibitCallback>, opaque=0x5593cf817820) at ../src/libvirt.c:638
#12 0x00005593cdef3277 in daemonRunStateInit (opaque=0x5593cf817820) at ../src/remote/remote_daemon.c:609
#13 0x00007f798a673af9 in virThreadHelper (data=<optimized out>) at ../src/util/virthread.c:241
#14 0x00007f798a004af7 in start_thread (arg=<optimized out>) at pthread_create.c:435
#15 0x00007f798a089850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 10 (Thread 0x7f795b7fe640 (LWP 276128)):
#0  0x00007f798a07c99f in __GI___poll (fds=0x7f7940020970, nfds=2, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007f798a4a85ac in g_main_context_poll (priority=<optimized out>, n_fds=2, fds=0x7f7940020970, timeout=<optimized out>, context=0x7f794001d9b0) at ../glib/gmain.c:4434
#2  g_main_context_iterate.constprop.0 (context=0x7f794001d9b0, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4126
#3  0x00007f798a453563 in g_main_loop_run (loop=0x7f794001daa0) at ../glib/gmain.c:4329
#4  0x00007f798a2df5ea in gdbus_shared_thread_func (user_data=0x7f794001d980) at ../gio/gdbusprivate.c:280
#5  0x00007f798a482662 in g_thread_proxy (data=0x7f7940018800) at ../glib/gthread.c:826
#6  0x00007f798a004af7 in start_thread (arg=<optimized out>) at pthread_create.c:435
#7  0x00007f798a089850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 9 (Thread 0x7f7984907640 (LWP 276119) (Exiting)):
#0  futex_wait (private=0, expected=2, futex_word=0x7f798a9e8048 <_rtld_local+4168>) at ../sysdeps/nptl/futex-internal.h:146
#1  __GI___lll_lock_wait_private (futex=futex@entry=0x7f798a9e8048 <_rtld_local+4168>) at lowlevellock.c:35
#2  0x00007f798a001b6c in __GI___nptl_deallocate_stack (pd=pd@entry=0x7f7984907640) at nptl-stack.c:113
#3  0x00007f798a001d7d in __GI___nptl_free_tcb (pd=0x7f7984907640) at nptl_free_tcb.c:42
#4  0x00007f798a004ac1 in start_thread (arg=<optimized out>) at pthread_create.c:566
#5  0x00007f798a089850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 8 (Thread 0x7f7985108640 (LWP 276118) (Exiting)):
#0  futex_wait (private=0, expected=2, futex_word=0x7f798a9e8048 <_rtld_local+4168>) at ../sysdeps/nptl/futex-internal.h:146
#1  __GI___lll_lock_wait_private (futex=futex@entry=0x7f798a9e8048 <_rtld_local+4168>) at lowlevellock.c:35
#2  0x00007f798a001b6c in __GI___nptl_deallocate_stack (pd=pd@entry=0x7f7985108640) at nptl-stack.c:113
#3  0x00007f798a001d7d in __GI___nptl_free_tcb (pd=0x7f7985108640) at nptl_free_tcb.c:42
#4  0x00007f798a004ac1 in start_thread (arg=<optimized out>) at pthread_create.c:566
#5  0x00007f798a089850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 7 (Thread 0x7f7985909640 (LWP 276117) (Exiting)):
#0  futex_wait (private=0, expected=2, futex_word=0x7f798a9e8048 <_rtld_local+4168>) at ../sysdeps/nptl/futex-internal.h:146
#1  __GI___lll_lock_wait_private (futex=futex@entry=0x7f798a9e8048 <_rtld_local+4168>) at lowlevellock.c:35
#2  0x00007f798a001b6c in __GI___nptl_deallocate_stack (pd=pd@entry=0x7f7985909640) at nptl-stack.c:113
#3  0x00007f798a001d7d in __GI___nptl_free_tcb (pd=0x7f7985909640) at nptl_free_tcb.c:42
#4  0x00007f798a004ac1 in start_thread (arg=<optimized out>) at pthread_create.c:566
#5  0x00007f798a089850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 6 (Thread 0x7f798690b640 (LWP 276115) (Exiting)):
#0  futex_wait (private=0, expected=2, futex_word=0x7f798a9e8048 <_rtld_local+4168>) at ../sysdeps/nptl/futex-internal.h:146
#1  __GI___lll_lock_wait_private (futex=futex@entry=0x7f798a9e8048 <_rtld_local+4168>) at lowlevellock.c:35
#2  0x00007f798a001b6c in __GI___nptl_deallocate_stack (pd=pd@entry=0x7f798690b640) at nptl-stack.c:113
#3  0x00007f798a001d7d in __GI___nptl_free_tcb (pd=0x7f798690b640) at nptl_free_tcb.c:42
#4  0x00007f798a004ac1 in start_thread (arg=<optimized out>) at pthread_create.c:566
#5  0x00007f798a089850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 5 (Thread 0x7f798790d640 (LWP 276112) (Exiting)):
#0  futex_wait (private=0, expected=2, futex_word=0x7f798a9e8048 <_rtld_local+4168>) at ../sysdeps/nptl/futex-internal.h:146
#1  __GI___lll_lock_wait_private (futex=futex@entry=0x7f798a9e8048 <_rtld_local+4168>) at lowlevellock.c:35
#2  0x00007f798a001b6c in __GI___nptl_deallocate_stack (pd=pd@entry=0x7f798790d640) at nptl-stack.c:113
#3  0x00007f798a001d7d in __GI___nptl_free_tcb (pd=0x7f798790d640) at nptl_free_tcb.c:42
#4  0x00007f798a004ac1 in start_thread (arg=<optimized out>) at pthread_create.c:566
#5  0x00007f798a089850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 4 (Thread 0x7f797ffff640 (LWP 276114) (Exiting)):
#0  futex_wait (private=0, expected=2, futex_word=0x7f798a9e8048 <_rtld_local+4168>) at ../sysdeps/nptl/futex-internal.h:146
#1  __GI___lll_lock_wait_private (futex=futex@entry=0x7f798a9e8048 <_rtld_local+4168>) at lowlevellock.c:35
#2  0x00007f798a001b6c in __GI___nptl_deallocate_stack (pd=pd@entry=0x7f797ffff640) at nptl-stack.c:113
#3  0x00007f798a001d7d in __GI___nptl_free_tcb (pd=0x7f797ffff640) at nptl_free_tcb.c:42
#4  0x00007f798a004ac1 in start_thread (arg=<optimized out>) at pthread_create.c:566
#5  0x00007f798a089850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 3 (Thread 0x7f798710c640 (LWP 276113) (Exiting)):
#0  futex_wait (private=0, expected=2, futex_word=0x7f798a9e8048 <_rtld_local+4168>) at ../sysdeps/nptl/futex-internal.h:146
#1  __GI___lll_lock_wait_private (futex=futex@entry=0x7f798a9e8048 <_rtld_local+4168>) at lowlevellock.c:35
#2  0x00007f798a001b6c in __GI___nptl_deallocate_stack (pd=pd@entry=0x7f798710c640) at nptl-stack.c:113
#3  0x00007f798a001d7d in __GI___nptl_free_tcb (pd=0x7f798710c640) at nptl_free_tcb.c:42
#4  0x00007f798a004ac1 in start_thread (arg=<optimized out>) at pthread_create.c:566
#5  0x00007f798a089850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 2 (Thread 0x7f798810e640 (LWP 276111) (Exiting)):
#0  futex_wait (private=0, expected=2, futex_word=0x7f798a9e8048 <_rtld_local+4168>) at ../sysdeps/nptl/futex-internal.h:146
#1  __GI___lll_lock_wait_private (futex=futex@entry=0x7f798a9e8048 <_rtld_local+4168>) at lowlevellock.c:35
#2  0x00007f798a001b6c in __GI___nptl_deallocate_stack (pd=pd@entry=0x7f798810e640) at nptl-stack.c:113
#3  0x00007f798a001d7d in __GI___nptl_free_tcb (pd=0x7f798810e640) at nptl_free_tcb.c:42
#4  0x00007f798a004ac1 in start_thread (arg=<optimized out>) at pthread_create.c:566
#5  0x00007f798a089850 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 1 (Thread 0x7f7988be6ac0 (LWP 276109)):
#0  ___pthread_mutex_lock (mutex=mutex@entry=0x48) at pthread_mutex_lock.c:76
#1  0x00007f798a66caa9 in virMutexLock (m=m@entry=0x48) at ../src/util/virthread.c:91
#2  0x00007f798a66d0c9 in virThreadPoolStop (pool=0x0) at ../src/util/virthreadpool.c:503
#3  0x00007f797f6cd2dd in qemuStateShutdownPrepare () at ../src/qemu/qemu_driver.c:1039
#4  0x00007f798a80d9b0 in virStateShutdownPrepare () at ../src/libvirt.c:691
#5  0x00007f798a73597d in virNetDaemonRun (dmn=0x5593cf817820) at ../src/rpc/virnetdaemon.c:865
#6  0x00005593cdef233d in main (argc=<optimized out>, argv=<optimized out>) at ../src/remote/remote_daemon.c:1213
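
The faulting address is the key detail here: virThreadPoolStop() is entered with pool=0x0 (frame #2), so virMutexLock(m=0x48) is locking a member of a NULL struct pointer, and 0x48 is simply the offset of that mutex within the pool struct. Meanwhile Thread 11 is still inside qemuStateInitialize(), i.e. shutdown is being prepared against a driver whose worker pool has not been created yet. A minimal standalone sketch of this failure mode (the struct layout below is hypothetical, not libvirt's actual layout):

/* Sketch only, not libvirt code: locking a member of a NULL struct
 * pointer faults at that member's offset within the struct. */
#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

struct thread_pool {
    char fields_before[0x48];   /* stand-in for whatever precedes the mutex */
    pthread_mutex_t mutex;      /* offsetof(struct thread_pool, mutex) == 0x48 */
};

static void pool_stop(struct thread_pool *pool)
{
    /* With pool == NULL, &pool->mutex evaluates to (void *)0x48, so this
     * faults exactly like frame #0: ___pthread_mutex_lock (mutex=0x48). */
    pthread_mutex_lock(&pool->mutex);
}

int main(void)
{
    printf("mutex offset: %#zx\n", offsetof(struct thread_pool, mutex));
    pool_stop(NULL);            /* mirrors virThreadPoolStop(pool=0x0) */
    return 0;
}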


Actual results:


Expected results:


Additional info:

Comment 1 Peter Krempa 2021-11-29 15:10:47 UTC
This will be the same root cause as the issue reported earlier upstream in https://gitlab.com/libvirt/libvirt/-/issues/218, where the root cause is also partially analyzed.

Comment 2 Michal Privoznik 2021-12-09 14:44:55 UTC
Patches posted on the list:

https://listman.redhat.com/archives/libvir-list/2021-December/msg00352.html

Comment 5 Michal Privoznik 2021-12-10 12:55:10 UTC
Merged upstream as:

3179220e4f Revert "qemu: Avoid crash in qemuStateShutdownPrepare() and qemuStateShutdownWait()"
05e518f47a remote_daemon: Set shutdown callbacks only after init is done

v7.10.0-143-g3179220e4f
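
The second commit addresses the ordering problem visible in the backtrace: virNetDaemonRun() could invoke the shutdown-prepare callback while the daemonRunStateInit() thread was still initializing drivers, so qemuStateShutdownPrepare() could observe a NULL worker pool. Below is a hedged sketch of that fix pattern; all names are hypothetical and this is not the actual patch:

/* Sketch: register the shutdown-prepare callback only after driver
 * initialization finishes, so it can never see a half-built driver. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

typedef struct { pthread_mutex_t mutex; } worker_pool;

static worker_pool *driver_pool;            /* NULL until init finishes */
static void (*shutdown_prepare_cb)(void);   /* registered late, on purpose */

static void shutdown_prepare(void)
{
    pthread_mutex_lock(&driver_pool->mutex);   /* safe: pool exists by now */
    pthread_mutex_unlock(&driver_pool->mutex);
    puts("worker pool stopped");
}

static void *state_init(void *arg)
{
    (void)arg;
    sleep(1);                                /* simulate slow driver init */
    driver_pool = calloc(1, sizeof(*driver_pool));
    pthread_mutex_init(&driver_pool->mutex, NULL);
    /* The fix: set the callback only after init is done.  Before the fix
     * it was set up front, so an early restart could invoke it while
     * driver_pool was still NULL - the crash in this bug. */
    shutdown_prepare_cb = shutdown_prepare;
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, state_init, NULL);
    /* Early shutdown request: an unregistered callback is skipped. */
    if (shutdown_prepare_cb)
        shutdown_prepare_cb();
    else
        puts("init not done yet; nothing to prepare");
    pthread_join(t, NULL);
    if (shutdown_prepare_cb)
        shutdown_prepare_cb();               /* now safe */
    return 0;
}

(A real implementation would also synchronize access to the callback pointer; the point of the sketch is only the registration ordering.)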

Comment 6 yafu 2021-12-23 07:57:22 UTC
Tested with libvirt-8.0.0-1.

Comment 9 yafu 2022-01-18 02:57:39 UTC
Verified with libvirt-8.0.0-1.el9.x86_64.

Test steps:
1. for i in {1..100}; do systemctl stop virtqemud; sleep 5; systemctl start virtqemud; systemctl reload virtqemud; systemctl restart virtqemud; done

2.Check coredump:
# coredumpctl list
No coredumps found.

Comment 11 errata-xmlrpc 2022-05-17 12:45:52 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (new packages: libvirt), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:2390

