Bug 1452106 - libvirtd crash sometimes while doing 'virsh qemu-attach'
Summary: libvirtd crash sometimes while doing 'virsh qemu-attach'
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.4
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Peter Krempa
QA Contact: yafu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-05-18 11:03 UTC by yafu
Modified: 2017-08-02 01:34 UTC (History)
9 users

Fixed In Version: libvirt-3.2.0-6.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-08-02 01:34:35 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:1846 0 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2017-08-01 18:02:50 UTC

Description yafu 2017-05-18 11:03:55 UTC
Description of problem:
libvirtd sometimes crashes when running 'virsh qemu-attach'.

Version-Release number of selected component (if applicable):
libvirt-3.2.0-5.virtcov.el7.x86_64
qemu-kvm-rhev-2.9.0-5.el7.x86_64


How reproducible:
50%

Steps to Reproduce:
1. Start a guest directly with a qemu command:
# /usr/libexec/qemu-kvm -hdb /var/lib/libvirt/images/test3.img -monitor unix:/tmp/demo,server,nowait -name test3 &
[7] 12387

2. Check the qemu process:
#ps aux | grep qemu-kvm
root     12387 95.8  0.4 916476 32364 pts/2    Sl   13:12   0:04 /usr/libexec/qemu-kvm -hdb /var/lib/libvirt/images/test3.img -monitor unix:/tmp/demo,server,nowait -name test3

3. Run 'virsh qemu-attach':
#virsh qemu-attach 12387
error: Disconnected from qemu:///system due to I/O error
error: Failed to attach to pid 12387
error: End of file while reading data: Input/output error
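The three steps above can be automated; the following is a minimal sketch, assuming a root shell on a host with qemu-kvm, virsh, and a scratch image at the path used in the report (the environment guards make it a no-op elsewhere). It compares the libvirtd pid before and after the attach to decide whether the daemon survived.

```shell
#!/usr/bin/env bash
# Hedged repro sketch for the crash in this report. Assumptions: run as
# root, /usr/libexec/qemu-kvm and virsh installed, and a scratch image
# at $IMG. All of these are checked below, so the script does nothing
# on a machine that does not match.

IMG=/var/lib/libvirt/images/test3.img

# Succeed only when both pids are non-empty and identical; used to
# decide whether libvirtd kept its original pid across the attach.
same_pid() { [ -n "$1" ] && [ "$1" = "$2" ]; }

if [ "$(id -u)" = 0 ] && [ -x /usr/libexec/qemu-kvm ] \
   && command -v virsh >/dev/null && [ -f "$IMG" ]; then
    before=$(pgrep -o libvirtd)

    # Step 1: start a bare qemu process with a unix monitor socket.
    /usr/libexec/qemu-kvm -hdb "$IMG" \
        -monitor unix:/tmp/demo,server,nowait -name test3 &
    qemu_pid=$!
    sleep 2

    # Step 3: attach; per the report the crash fires roughly 50% of
    # the time, so several runs may be needed.
    virsh qemu-attach "$qemu_pid"

    after=$(pgrep -o libvirtd)
    if same_pid "$before" "$after"; then
        echo "libvirtd survived (pid $before)"
    else
        echo "libvirtd crashed or restarted ($before -> $after)"
    fi
    kill "$qemu_pid" 2>/dev/null
fi
```

Because the failure is intermittent, wrapping the attach in a loop until the pid check fails is the practical way to confirm a fix.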

Actual results:
libvirtd sometimes crashes while running 'virsh qemu-attach'.

Expected results:
libvirtd should not crash while running 'virsh qemu-attach'.

Additional info:
1. This is a regression; a similar crash was fixed in RHEL 7.3:
 https://bugzilla.redhat.com/show_bug.cgi?id=1350688

2. Stack trace of the crashed libvirtd process:
(gdb) t a a bt

Thread 16 (Thread 0x7f8bd415c700 (LWP 7655)):
#0  0x00007f8be062cfd8 in __strchr_sse42 () from /lib64/libc.so.6
#1  0x00007f8b9db3e084 in qemuMonitorTextQueryCPUs (mon=mon@entry=0x7f8bc400db80, entries=entries@entry=0x7f8bd415b7e0, nentries=nentries@entry=0x7f8bd415b7e8) at qemu/qemu_monitor_text.c:569
#2  0x00007f8b9db2f54d in qemuMonitorGetCPUInfo (mon=0x7f8bc400db80, vcpus=vcpus@entry=0x7f8bd415b850, maxvcpus=maxvcpus@entry=1, hotplug=<optimized out>, hotplug@entry=true) at qemu/qemu_monitor.c:1970
#3  0x00007f8b9dadb1eb in qemuDomainRefreshVcpuInfo (driver=driver@entry=0x7f8b74179ed0, vm=vm@entry=0x7f8bc4001530, asyncJob=asyncJob@entry=0, state=state@entry=false) at qemu/qemu_domain.c:6711
#4  0x00007f8b9db0c6a6 in qemuProcessAttach (conn=conn@entry=0x7f8bc4000b30, driver=driver@entry=0x7f8b74179ed0, vm=0x7f8bc4001530, pid=pid@entry=8389, pidfile=<optimized out>, monConfig=0x0, monJSON=false)
    at qemu/qemu_process.c:6630
#5  0x00007f8b9db6f99f in qemuDomainQemuAttach (conn=0x7f8bc4000b30, pid_value=8389, flags=<optimized out>) at qemu/qemu_driver.c:15751
#6  0x00007f8be3c36d3c in virDomainQemuAttach (conn=0x7f8bc4000b30, pid_value=8389, flags=0) at libvirt-qemu.c:154
#7  0x000055779a408e18 in qemuDispatchDomainAttach (server=0x55779b99db80, msg=0x55779b9bd4e0, ret=0x7f8bc4001730, args=0x7f8bc4001670, rerr=0x7f8bd415bc00, client=0x55779b9bce30) at qemu_dispatch.h:168
#8  qemuDispatchDomainAttachHelper (server=0x55779b99db80, client=0x55779b9bce30, msg=0x55779b9bd4e0, rerr=0x7f8bd415bc00, args=0x7f8bc4001670, ret=0x7f8bc4001730) at qemu_dispatch.h:146
#9  0x00007f8be36fa3cc in virNetServerProgramDispatchCall (msg=0x55779b9bd4e0, client=0x55779b9bce30, server=0x55779b99db80, prog=0x55779b9b69b0) at rpc/virnetserverprogram.c:437
#10 virNetServerProgramDispatch (prog=0x55779b9b69b0, server=server@entry=0x55779b99db80, client=client@entry=0x55779b9bce30, msg=msg@entry=0x55779b9bd4e0) at rpc/virnetserverprogram.c:307
#11 0x000055779a420d3a in virNetServerProcessMsg (srv=srv@entry=0x55779b99db80, client=0x55779b9bce30, prog=<optimized out>, msg=0x55779b9bd4e0) at rpc/virnetserver.c:148
#12 0x000055779a421138 in virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x55779b99db80) at rpc/virnetserver.c:169
#13 0x00007f8be3572b11 in virThreadPoolWorker (opaque=opaque@entry=0x55779b99d710) at util/virthreadpool.c:167
#14 0x00007f8be35718e0 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#15 0x00007f8be08c2e25 in start_thread () from /lib64/libpthread.so.0
#16 0x00007f8be05f034d in clone () from /lib64/libc.so.6

Thread 15 (Thread 0x7f8bd395b700 (LWP 7656)):
#0  0x00007f8be08c6945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f8be3571cce in virCondWait (c=c@entry=0x55779b99de08, m=m@entry=0x55779b99dde0) at util/virthread.c:154
#2  0x00007f8be3572c73 in virThreadPoolWorker (opaque=opaque@entry=0x55779b99d490) at util/virthreadpool.c:124
#3  0x00007f8be35718e0 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f8be08c2e25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f8be05f034d in clone () from /lib64/libc.so.6

Thread 14 (Thread 0x7f8bd315a700 (LWP 7657)):
#0  0x00007f8be08c6945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f8be3571cce in virCondWait (c=c@entry=0x55779b99de08, m=m@entry=0x55779b99dde0) at util/virthread.c:154
#2  0x00007f8be3572c73 in virThreadPoolWorker (opaque=opaque@entry=0x55779b99d920) at util/virthreadpool.c:124
#3  0x00007f8be35718e0 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f8be08c2e25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f8be05f034d in clone () from /lib64/libc.so.6

Thread 13 (Thread 0x7f8bd2959700 (LWP 7658)):
#0  0x00007f8be08c6945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f8be3571cce in virCondWait (c=c@entry=0x55779b99de08, m=m@entry=0x55779b99dde0) at util/virthread.c:154
#2  0x00007f8be3572c73 in virThreadPoolWorker (opaque=opaque@entry=0x55779b98b490) at util/virthreadpool.c:124
#3  0x00007f8be35718e0 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f8be08c2e25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f8be05f034d in clone () from /lib64/libc.so.6

Thread 12 (Thread 0x7f8bd2158700 (LWP 7659)):
#0  0x00007f8be08c6945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f8be3571cce in virCondWait (c=c@entry=0x55779b99de08, m=m@entry=0x55779b99dde0) at util/virthread.c:154
#2  0x00007f8be3572c73 in virThreadPoolWorker (opaque=opaque@entry=0x55779b99d920) at util/virthreadpool.c:124
#3  0x00007f8be35718e0 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f8be08c2e25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f8be05f034d in clone () from /lib64/libc.so.6

Thread 11 (Thread 0x7f8bd1957700 (LWP 7660)):
#0  0x00007f8be08c6945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f8be3571cce in virCondWait (c=c@entry=0x55779b99dea8, m=m@entry=0x55779b99dde0) at util/virthread.c:154
#2  0x00007f8be3572c00 in virThreadPoolWorker (opaque=opaque@entry=0x55779b99d710) at util/virthreadpool.c:124
#3  0x00007f8be35718e0 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f8be08c2e25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f8be05f034d in clone () from /lib64/libc.so.6

Thread 10 (Thread 0x7f8bd1156700 (LWP 7661)):
#0  0x00007f8be08c6945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f8be3571cce in virCondWait (c=c@entry=0x55779b99dea8, m=m@entry=0x55779b99dde0) at util/virthread.c:154
#2  0x00007f8be3572c00 in virThreadPoolWorker (opaque=opaque@entry=0x55779b99d490) at util/virthreadpool.c:124
#3  0x00007f8be35718e0 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f8be08c2e25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f8be05f034d in clone () from /lib64/libc.so.6

Thread 9 (Thread 0x7f8bd0955700 (LWP 7662)):
#0  0x00007f8be08c6945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f8be3571cce in virCondWait (c=c@entry=0x55779b99dea8, m=m@entry=0x55779b99dde0) at util/virthread.c:154
#2  0x00007f8be3572c00 in virThreadPoolWorker (opaque=opaque@entry=0x55779b99d710) at util/virthreadpool.c:124
#3  0x00007f8be35718e0 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f8be08c2e25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f8be05f034d in clone () from /lib64/libc.so.6

Thread 8 (Thread 0x7f8bb7fff700 (LWP 7663)):
#0  0x00007f8be08c6945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f8be3571cce in virCondWait (c=c@entry=0x55779b99dea8, m=m@entry=0x55779b99dde0) at util/virthread.c:154
#2  0x00007f8be3572c00 in virThreadPoolWorker (opaque=opaque@entry=0x55779b98b490) at util/virthreadpool.c:124
#3  0x00007f8be35718e0 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f8be08c2e25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f8be05f034d in clone () from /lib64/libc.so.6

Thread 7 (Thread 0x7f8bb77fe700 (LWP 7664)):
#0  0x00007f8be08c6945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f8be3571cce in virCondWait (c=c@entry=0x55779b99dea8, m=m@entry=0x55779b99dde0) at util/virthread.c:154
#2  0x00007f8be3572c00 in virThreadPoolWorker (opaque=opaque@entry=0x55779b99d490) at util/virthreadpool.c:124
#3  0x00007f8be35718e0 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f8be08c2e25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f8be05f034d in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7f8b9d577700 (LWP 7665)):
#0  0x00007f8be08c6945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f8be3571cce in virCondWait (c=c@entry=0x55779b9b6d98, m=m@entry=0x55779b9b6d70) at util/virthread.c:154
#2  0x00007f8be3572c73 in virThreadPoolWorker (opaque=opaque@entry=0x55779b9b5a90) at util/virthreadpool.c:124
#3  0x00007f8be35718e0 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f8be08c2e25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f8be05f034d in clone () from /lib64/libc.so.6

Thread 5 (Thread 0x7f8b9cd76700 (LWP 7666)):
#0  0x00007f8be08c6945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f8be3571cce in virCondWait (c=c@entry=0x55779b9b6d98, m=m@entry=0x55779b9b6d70) at util/virthread.c:154
#2  0x00007f8be3572c73 in virThreadPoolWorker (opaque=opaque@entry=0x55779b9906b0) at util/virthreadpool.c:124
#3  0x00007f8be35718e0 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f8be08c2e25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f8be05f034d in clone () from /lib64/libc.so.6

Thread 4 (Thread 0x7f8b9c575700 (LWP 7667)):
#0  0x00007f8be08c6945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f8be3571cce in virCondWait (c=c@entry=0x55779b9b6d98, m=m@entry=0x55779b9b6d70) at util/virthread.c:154
#2  0x00007f8be3572c73 in virThreadPoolWorker (opaque=opaque@entry=0x55779b9b6bb0) at util/virthreadpool.c:124
#3  0x00007f8be35718e0 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f8be08c2e25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f8be05f034d in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7f8b8bd74700 (LWP 7668)):
#0  0x00007f8be08c6945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f8be3571cce in virCondWait (c=c@entry=0x55779b9b6d98, m=m@entry=0x55779b9b6d70) at util/virthread.c:154
#2  0x00007f8be3572c73 in virThreadPoolWorker (opaque=opaque@entry=0x55779b9905f0) at util/virthreadpool.c:124
#3  0x00007f8be35718e0 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f8be08c2e25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f8be05f034d in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7f8b9bd74700 (LWP 7669)):
#0  0x00007f8be08c6945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f8be3571cce in virCondWait (c=c@entry=0x55779b9b6d98, m=m@entry=0x55779b9b6d70) at util/virthread.c:154
#2  0x00007f8be3572c73 in virThreadPoolWorker (opaque=opaque@entry=0x55779b990530) at util/virthreadpool.c:124
#3  0x00007f8be35718e0 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00007f8be08c2e25 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f8be05f034d in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f8be4436880 (LWP 7654)):
#0  0x00007f8be05e5a3d in poll () from /lib64/libc.so.6
#1  0x00007f8be34fbabe in poll (__timeout=4835, __nfds=13, __fds=<optimized out>) at /usr/include/bits/poll2.h:46
#2  virEventPollRunOnce () at util/vireventpoll.c:641
#3  0x00007f8be34f9cfa in virEventRunDefaultImpl () at util/virevent.c:314
#4  0x00007f8be36f18b5 in virNetDaemonRun (dmn=dmn@entry=0x55779b99f9c0) at rpc/virnetdaemon.c:818
#5  0x000055779a3cdcdb in main (argc=<optimized out>, argv=<optimized out>) at libvirtd.c:1541

Comment 3 Peter Krempa 2017-05-19 07:42:17 UTC
Fixed upstream:

commit 6ff99e95771bb33531ea6733a823bc6a30158256
Author: Peter Krempa <pkrempa>
Date:   Thu May 18 13:27:24 2017 +0200

    qemu: monitor: Don't bother extracting vCPU halted state in text monitor
    
    The code causes the 'offset' variable to be overwritten (possibly with
    NULL if neither of the vCPUs is halted) which causes a crash since the
    variable is still used after that part.
    
    Additionally there's a bug: since strstr() would look up the '(halted)'
    string in the whole buffer rather than just the currently processed line,
    the returned data is completely bogus.
    
    Rather than switching to single line parsing let's remove the code
    altogether since it has a commonly used JSON monitor alternative and
    the data itself is not very useful to report.
    
    The code was introduced in commit cc5e695bde
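The backtrace and the commit message describe a classic parsing pitfall: a shared `offset` pointer was overwritten by `strstr(buf, "(halted)")`, which returns NULL when no vCPU line contains the marker, and the NULL was then dereferenced (the `__strchr_sse42` frame at the top of thread 16). The sketch below is illustrative only, not the actual libvirt code: hypothetical `parse_cpu_line` parses a single line of `info cpus` text output, confines the lookup to that line, and never reuses a possibly-NULL pointer.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Illustrative sketch (not libvirt's code): parse one line of
 * "info cpus" text-monitor output, e.g.
 *   "* CPU #0: pc=0x0000000000000000 (halted) thread_id=12387"
 * The crash pattern in this bug: strstr() over the whole buffer both
 * returned bogus matches from other lines and yielded NULL when no
 * vCPU was halted, and the pointer was still used afterwards. Here
 * each lookup is scoped to the current line and checked before use. */
static bool
parse_cpu_line(const char *line, int *cpuid, bool *halted)
{
    const char *offset = strstr(line, "CPU #");
    if (!offset)
        return false;               /* not a vCPU line */

    if (sscanf(offset, "CPU #%d", cpuid) != 1)
        return false;               /* malformed index */

    /* Tolerate an absent "(halted)" marker instead of carrying a
     * possibly-NULL pointer into later parsing steps. */
    *halted = strstr(line, "(halted)") != NULL;
    return true;
}
```

The upstream fix went further than this and simply removed the halted-state extraction from the text monitor, since the JSON monitor already provides the data reliably.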

Comment 6 yafu 2017-05-27 05:17:54 UTC
Verified as passing with libvirt-3.2.0-6.virtcov.el7.x86_64.

Test steps:
1. Check the pid of libvirtd:
#pgrep libvirtd
27092

2. Create 10 guests with qemu commands:
#for i in {1..10} ; do /usr/libexec/qemu-kvm -hdb /var/lib/libvirt/images/test$i.img -monitor unix:/tmp/demo$i,server,nowait -name test$i -device qxl-vga & done
VNC server running on ::1:5900
VNC server running on ::1:5901
VNC server running on ::1:5902
VNC server running on ::1:5903
VNC server running on ::1:5904
VNC server running on ::1:5905
VNC server running on ::1:5906
VNC server running on ::1:5907
VNC server running on ::1:5908
VNC server running on ::1:5909

3. Execute 'virsh qemu-attach' in parallel:
#for pid in `pgrep qemu-kvm` ; do virsh qemu-attach $pid & done
Domain test8 attached to pid 29275

Domain test2 attached to pid 29269

Domain test1 attached to pid 29268

Domain test4 attached to pid 29271

Domain test5 attached to pid 29272

Domain test7 attached to pid 29274

Domain test9 attached to pid 29276

Domain test3 attached to pid 29270

Domain test6 attached to pid 29273

Domain test10 attached to pid 29277

4. Check the guests with 'virsh list':
# virsh list
 Id    Name                           State
----------------------------------------------------
 10    test8                          running
 11    test2                          running
 12    test1                          running
 13    test4                          running
 14    test5                          running
 15    test7                          running
 16    test3                          running
 17    test6                          running
 18    test9                          running
 19    test10                         running
 

5. Reattach a qemu process that is already attached:
# virsh qemu-attach 29277
error: Failed to attach to pid 29277
error: operation failed: domain 'test10' already exists with uuid 1d730a93-ecd4-46d8-aa7c-5f519870c40b

6. Attach a non-existent pid:
# virsh qemu-attach 101010
error: Failed to attach to pid 101010
error: Failed to open file '/proc/101010/cmdline': No such file or directory

7. Attach an invalid pid:
# virsh qemu-attach 0
error: Failed to attach to pid 0
error: pid_value in virDomainQemuAttach must be greater than zero

8. Attach the pid of a non-qemu process:
[root@yafu-laptop scripts]# virsh qemu-attach 1
error: Failed to attach to pid 1
error: internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /usr/lib/systemd/systemd -help) unexpected exit status 1: 2017-05-27 03:23:49.111+0000: 30384: debug : virFileClose:110 : Closed fd 29
2017-05-27 03:23:49.111+0000: 30384: debug : virFileClose:110 : Closed fd 31
2017-05-27 03:23:49.111+0000: 30384: debug : virFileClose:110 : Closed fd 26
2017-05-27 03:23:49.112+0000: 30384: debug : virExec:736 : Setting child uid:gid to 107:107 with caps 0

9. After all the steps, libvirtd has not crashed (its pid is unchanged):
#pgrep libvirtd
27092


Based on the test results above, moving the bug to VERIFIED.

Comment 7 errata-xmlrpc 2017-08-02 01:34:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:1846

