Bug 2178866 - virt-admin coredumped when executing 'virt-admin srv-threadpool-info virtqemud'
Summary: virt-admin coredumped when executing 'virt-admin srv-threadpool-info virtqemud'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: libvirt
Version: 9.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Ján Tomko
QA Contact: Lili Zhu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-03-16 01:30 UTC by yafu
Modified: 2023-11-07 09:39 UTC
CC List: 7 users

Fixed In Version: libvirt-9.2.0-1.el9
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-07 08:31:00 UTC
Type: Bug
Target Upstream Version: 9.2.0
Embargoed:


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-152009 0 None None None 2023-03-16 01:30:56 UTC
Red Hat Product Errata RHSA-2023:6409 0 None None None 2023-11-07 08:31:30 UTC

Description yafu 2023-03-16 01:30:11 UTC
Description of problem:
virt-admin coredumped when executing 'virt-admin srv-threadpool-info virtqemud'

Version-Release number of selected component (if applicable):
libvirt-9.1.0-1.el9.x86_64

How reproducible:
100%

Steps to Reproduce:
1. # virt-admin server-threadpool-info virtqemud
2023-03-16 01:25:00.789+0000: 43672: info : libvirt version: 9.1.0, package: 1.el9 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2023-03-13-09:31:29, )
2023-03-16 01:25:00.789+0000: 43672: info : hostname: ibm-x3250m6-04.lab.eng.pek2.redhat.com
2023-03-16 01:25:00.789+0000: 43672: warning : virObjectGetLockableObj:411 : Object 0x7ffc29ec7f78 ((null)) is not a virObjectLockable instance
Segmentation fault (core dumped)


Actual results:
virt-admin crashed when executing 'virt-admin srv-threadpool-info virtqemud'

Expected results:
virt-admin works as expected without a core dump.

Additional info:
1.The backtrace:
(gdb) t a a bt

Thread 2 (Thread 0x7f6c91dff640 (LWP 43673)):
#0  0x00007f6c9374296f in __GI___poll (fds=0x7f6c8c004fa0, nfds=2, timeout=-1) at ../sysdeps/unix/sysv/linux/poll.c:29
#1  0x00007f6c93f8849c in g_main_context_poll (priority=<optimized out>, n_fds=2, fds=0x7f6c8c004fa0, timeout=<optimized out>, context=0x7f6c8c001ca0) at ../glib/gmain.c:4434
#2  g_main_context_iterate.constprop.0 (context=context@entry=0x7f6c8c001ca0, block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>) at ../glib/gmain.c:4126
#3  0x00007f6c93f315f3 in g_main_context_iteration (context=0x7f6c8c001ca0, context@entry=0x0, may_block=may_block@entry=1) at ../glib/gmain.c:4196
#4  0x00007f6c93ad7df4 in virEventGLibRunOnce () at ../src/util/vireventglib.c:515
#5  0x000055ea2973e9b5 in vshEventLoop ()
#6  0x00007f6c93b35e69 in virThreadHelper (data=<optimized out>) at ../src/util/virthread.c:256
#7  0x00007f6c9369f832 in start_thread (arg=<optimized out>) at pthread_create.c:443
#8  0x00007f6c9363f450 in clone3 () at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81

Thread 1 (Thread 0x7f6c92127f80 (LWP 43672)):
#0  ___pthread_mutex_lock (mutex=mutex@entry=0x18) at pthread_mutex_lock.c:81
#1  0x00007f6c93b2cff9 in virMutexLock (m=m@entry=0x18) at ../src/util/virthread.c:91
#2  0x00007f6c93b2d01e in virLockGuardLock (m=0x18) at ../src/util/virthread.c:102
#3  0x00007f6c94021d06 in remoteAdminConnectLookupServer (flags=0, name=0x55ea297b32a0 "virtqemud", conn=0x55ea297e5820) at src/admin/admin_client.h:111
#4  virAdmConnectLookupServer (conn=0x55ea297e5820, name=0x55ea297b32a0 "virtqemud", flags=0) at ../src/admin/libvirt-admin.c:782
#5  0x000055ea2973ed5d in cmdSrvThreadpoolInfo ()
#6  0x000055ea2973eb1f in vshCommandRun ()
#7  0x000055ea2973d0cd in main ()
(gdb) 


Comment 1 Han Han 2023-03-17 03:01:24 UTC
Bisection shows the first bad commit is:
commit 778c3004609ede0a9df4cf3e01c031047530efb7
Author: Daniel P. Berrangé <berrange>
Date:   Thu Dec 22 10:28:50 2022 -0500

    rpc: use VIR_LOCK_GUARD in remote client code
    
    Using VIR_LOCK_GUARD helps to simplify the control flow
    logic.
    
    Reviewed-by: Ján Tomko <jtomko>
    Signed-off-by: Daniel P. Berrangé <berrange>

Comment 2 Ján Tomko 2023-03-17 15:01:45 UTC
Upstream patch:
https://listman.redhat.com/archives/libvir-list/2023-March/238847.html

Comment 3 Ján Tomko 2023-03-17 15:44:07 UTC
Pushed upstream as:
commit 50f0e8e7aa32c307e976a09387bccd5e40a285d9
Author:     Ján Tomko <jtomko>
CommitDate: 2023-03-17 16:42:55 +0100

    rpc: fix typo in admin code generation
    
    An extra '&' introduced a crash.
    
    https://bugzilla.redhat.com/show_bug.cgi?id=2178866
    
    Fixes: 778c3004609ede0a9df4cf3e01c031047530efb7
    Signed-off-by: Ján Tomko <jtomko>
    Reviewed-by: Peter Krempa <pkrempa>

git describe: v9.1.0-247-g50f0e8e7aa

Comment 4 Han Han 2023-03-20 04:10:17 UTC
Test passes on v9.1.0-248-g27d8bcc337
+ /root/libvirt/build/tools/virt-admin server-threadpool-info virtqemud
error: Server not found: No server named 'virtqemud'

Comment 5 Lili Zhu 2023-04-06 03:21:26 UTC
Tested with:
libvirt-9.2.0-1.el9.x86_64

1. Check the threadpool info of each daemon:
# for drv in qemu interface network nodedev nwfilter secret storage ; do virt-admin server-threadpool-info virt${drv}d; done
error: Server not found: No server named 'virtqemud'

error: Server not found: No server named 'virtinterfaced'

error: Server not found: No server named 'virtnetworkd'

error: Server not found: No server named 'virtnodedevd'

error: Server not found: No server named 'virtnwfilterd'

error: Server not found: No server named 'virtsecretd'

error: Server not found: No server named 'virtstoraged'

# virt-admin server-threadpool-info virtproxyd
minWorkers     : 5
maxWorkers     : 20
nWorkers       : 5
freeWorkers    : 5
prioWorkers    : 5
jobQueueDepth  : 0


2. Check the client list of each daemon, connecting to the daemon first:
# for drv in qemu interface network nodedev nwfilter secret storage ; do virt-admin -c virt${drv}d:///system client-list virt${drv}d; done
 Id   Transport   Connected since
-----------------------------------

 Id   Transport   Connected since
-----------------------------------

 Id   Transport   Connected since
-----------------------------------

 Id   Transport   Connected since
-----------------------------------

 Id   Transport   Connected since
-----------------------------------

 Id   Transport   Connected since
-----------------------------------

 Id   Transport   Connected since
-----------------------------------


# virt-admin client-list virtproxyd
 Id   Transport   Connected since
-----------------------------------

(Not coredumped)

Comment 9 Lili Zhu 2023-05-19 08:09:27 UTC
Tested with:
libvirt-9.3.0-1.el9.x86_64

The testing steps are the same as those in Comment #5.

Marking the bug as verified.

Comment 11 errata-xmlrpc 2023-11-07 08:31:00 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: libvirt security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6409

