Bug 1379218 - libvirtd crashes after qemu-attach in qemuDomainMachineIsPSeries
Summary: libvirtd crashes after qemu-attach in qemuDomainMachineIsPSeries
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: libvirt
Version: 7.3
Hardware: ppc64le
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: rc
Assignee: Andrea Bolognani
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks: 1401400
 
Reported: 2016-09-26 05:11 UTC by Dan Zheng
Modified: 2018-04-10 10:39 UTC (History)
8 users

Fixed In Version: libvirt-3.9.0-1.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-10 10:39:40 UTC
Target Upstream Version:


Attachments
log for the libvirtd crash (40.23 KB, text/plain)
2016-09-26 10:49 UTC, Dan Zheng
libvirtd_crash.gdb.log (12.77 KB, text/plain)
2017-11-06 08:06 UTC, Dan Zheng


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2018:0704 0 normal SHIPPED_LIVE libvirt bug fix and enhancement update 2018-04-10 15:57:35 UTC

Description Dan Zheng 2016-09-26 05:11:17 UTC
Description of problem:
This bug is cloned from bug 1350688 which has already been fixed on X86_64.

The command fails and the libvirtd service crashes after `virsh qemu-attach <qemu_PID>`.

Version-Release number of selected component (if applicable):
libvirt-2.0.0-10.el7.ppc64le

How reproducible:
100%

Steps to Reproduce:

1. Get the libvirtd pid
# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2016-09-26 13:59:32 JST; 28s ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 5092 (libvirtd)


2. Start a qemu process in the background.
# /usr/libexec/qemu-kvm -m 2048 -drive file=/var/lib/libvirt/images/RHEL-7.3-ppc64le-latest.qcow2,index=0 -monitor unix:/tmp/ss,server,nowait -name test -uuid 1fdf7c78-866a-4dcf-b017-5a9299682e1f &
[1] 5239
# VNC server running on '::1:5900'

# ps -ef|grep qemu
root       5239   4545 67 14:05 pts/0    00:00:20 /usr/libexec/qemu-kvm -m 2048 -drive file=/var/lib/libvirt/images/RHEL-7.3-ppc64le-latest.qcow2,index=0 -monitor unix:/tmp/ss,server,nowait -name test -uuid 1fdf7c78-866a-4dcf-b017-5a9299682e1f

3. Attach to the qemu process; the command fails.
# virsh qemu-attach  5239
error: Disconnected from qemu:///system due to I/O error
error: Failed to attach to pid 5239
error: End of file while reading data: Input/output error


4. Check libvirtd: it has crashed and been restarted.
# systemctl status libvirtd
● libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2016-09-26 14:06:34 JST; 36s ago
     Docs: man:libvirtd(8)
           http://libvirt.org
 Main PID: 5276 (libvirtd)



Actual results:
The command fails, and libvirtd crashes and is restarted.

Expected results:
The process can be attached successfully, and the libvirtd service should not crash.

Additional info:

Thread 16 (Thread 0x3fff7512f080 (LWP 5277)):
#0  0x00003fff7cf0dd60 in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libpthread.so.0
#1  0x00003fff7d8f7b8c in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2  0x00003fff7d8f8cc8 in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:124
#3  0x00003fff7d8f759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00003fff7cf08728 in start_thread () from /lib64/libpthread.so.0
#5  0x00003fff7ce3d210 in clone () from /lib64/libc.so.6
Thread 15 (Thread 0x3fff7492f080 (LWP 5278)):
#0  0x00003fff7cf0dd60 in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libpthread.so.0
#1  0x00003fff7d8f7b8c in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2  0x00003fff7d8f8cc8 in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:124
#3  0x00003fff7d8f759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00003fff7cf08728 in start_thread () from /lib64/libpthread.so.0
#5  0x00003fff7ce3d210 in clone () from /lib64/libc.so.6
Thread 14 (Thread 0x3fff7412f080 (LWP 5279)):
#0  0x00003fff7cf0dd60 in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libpthread.so.0
#1  0x00003fff7d8f7b8c in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2  0x00003fff7d8f8cc8 in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:124
#3  0x00003fff7d8f759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00003fff7cf08728 in start_thread () from /lib64/libpthread.so.0
#5  0x00003fff7ce3d210 in clone () from /lib64/libc.so.6
Thread 13 (Thread 0x3fff7392f080 (LWP 5280)):
#0  0x00003fff7cf0dd60 in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libpthread.so.0
#1  0x00003fff7d8f7b8c in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2  0x00003fff7d8f8cc8 in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:124
#3  0x00003fff7d8f759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00003fff7cf08728 in start_thread () from /lib64/libpthread.so.0
#5  0x00003fff7ce3d210 in clone () from /lib64/libc.so.6
Thread 12 (Thread 0x3fff7312f080 (LWP 5281)):
#0  0x00003fff7cf0dd60 in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libpthread.so.0
#1  0x00003fff7d8f7b8c in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2  0x00003fff7d8f8cc8 in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:124
#3  0x00003fff7d8f759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00003fff7cf08728 in start_thread () from /lib64/libpthread.so.0
#5  0x00003fff7ce3d210 in clone () from /lib64/libc.so.6
Thread 11 (Thread 0x3fff7292f080 (LWP 5282)):
#0  0x00003fff7cf0dd60 in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libpthread.so.0
#1  0x00003fff7d8f7b8c in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2  0x00003fff7d8f8bf0 in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:124
#3  0x00003fff7d8f759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00003fff7cf08728 in start_thread () from /lib64/libpthread.so.0
#5  0x00003fff7ce3d210 in clone () from /lib64/libc.so.6
Thread 10 (Thread 0x3fff7212f080 (LWP 5283)):
#0  0x00003fff7cf0dd60 in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libpthread.so.0
#1  0x00003fff7d8f7b8c in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2  0x00003fff7d8f8bf0 in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:124
#3  0x00003fff7d8f759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00003fff7cf08728 in start_thread () from /lib64/libpthread.so.0
#5  0x00003fff7ce3d210 in clone () from /lib64/libc.so.6
Thread 9 (Thread 0x3fff7192f080 (LWP 5284)):
#0  0x00003fff7cf0dd60 in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libpthread.so.0
#1  0x00003fff7d8f7b8c in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2  0x00003fff7d8f8bf0 in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:124
#3  0x00003fff7d8f759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00003fff7cf08728 in start_thread () from /lib64/libpthread.so.0
#5  0x00003fff7ce3d210 in clone () from /lib64/libc.so.6
Thread 8 (Thread 0x3fff7112f080 (LWP 5285)):
#0  0x00003fff7cf0dd60 in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libpthread.so.0
#1  0x00003fff7d8f7b8c in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2  0x00003fff7d8f8bf0 in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:124
#3  0x00003fff7d8f759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00003fff7cf08728 in start_thread () from /lib64/libpthread.so.0
#5  0x00003fff7ce3d210 in clone () from /lib64/libc.so.6
Thread 7 (Thread 0x3fff7092f080 (LWP 5286)):
#0  0x00003fff7cf0dd60 in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libpthread.so.0
#1  0x00003fff7d8f7b8c in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2  0x00003fff7d8f8bf0 in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:124
#3  0x00003fff7d8f759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00003fff7cf08728 in start_thread () from /lib64/libpthread.so.0
#5  0x00003fff7ce3d210 in clone () from /lib64/libc.so.6
Thread 6 (Thread 0x3fff6b8ff080 (LWP 5287)):
#0  0x00003fff7cf0dd60 in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libpthread.so.0
#1  0x00003fff7d8f7b8c in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2  0x00003fff7d8f8cc8 in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:124
#3  0x00003fff7d8f759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00003fff7cf08728 in start_thread () from /lib64/libpthread.so.0
#5  0x00003fff7ce3d210 in clone () from /lib64/libc.so.6
Thread 5 (Thread 0x3fff6b0ff080 (LWP 5288)):
#0  0x00003fff7cf0dd60 in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libpthread.so.0
#1  0x00003fff7d8f7b8c in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2  0x00003fff7d8f8cc8 in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:124
#3  0x00003fff7d8f759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00003fff7cf08728 in start_thread () from /lib64/libpthread.so.0
#5  0x00003fff7ce3d210 in clone () from /lib64/libc.so.6
Thread 4 (Thread 0x3fff6a8ff080 (LWP 5289)):
#0  0x00003fff7cf0dd60 in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libpthread.so.0
#1  0x00003fff7d8f7b8c in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2  0x00003fff7d8f8cc8 in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:124
#3  0x00003fff7d8f759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00003fff7cf08728 in start_thread () from /lib64/libpthread.so.0
#5  0x00003fff7ce3d210 in clone () from /lib64/libc.so.6
Thread 3 (Thread 0x3fff6a0ff080 (LWP 5290)):
#0  0x00003fff7cf0dd60 in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libpthread.so.0
#1  0x00003fff7d8f7b8c in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2  0x00003fff7d8f8cc8 in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:124
#3  0x00003fff7d8f759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00003fff7cf08728 in start_thread () from /lib64/libpthread.so.0
#5  0x00003fff7ce3d210 in clone () from /lib64/libc.so.6
Thread 2 (Thread 0x3fff698ff080 (LWP 5291)):
#0  0x00003fff7cf0dd60 in pthread_cond_wait@@GLIBC_2.17 () from /lib64/libpthread.so.0
#1  0x00003fff7d8f7b8c in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2  0x00003fff7d8f8cc8 in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:124
#3  0x00003fff7d8f759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#4  0x00003fff7cf08728 in start_thread () from /lib64/libpthread.so.0
#5  0x00003fff7ce3d210 in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x3fff7c53d3e0 (LWP 5276)):
#0  0x00003fff7ce2d5d8 in poll () from /lib64/libc.so.6
#1  0x00003fff7d896358 in poll (__timeout=-1, __nfds=9, __fds=0x10008e000a0) at /usr/include/bits/poll2.h:46
#2  virEventPollRunOnce () at util/vireventpoll.c:641
#3  0x00003fff7d894764 in virEventRunDefaultImpl () at util/virevent.c:314
#4  0x00003fff7da52120 in virNetDaemonRun (dmn=0x10008df2ab0) at rpc/virnetdaemon.c:818
#5  0x0000000031b59ab4 in main (argc=<optimized out>, argv=<optimized out>) at libvirtd.c:1617

Comment 2 Peter Krempa 2016-09-26 06:49:22 UTC
The backtrace you've posted does not contain any hints of a crash. Please attach a proper one.

Comment 3 Dan Zheng 2016-09-26 10:49:26 UTC
Created attachment 1204777 [details]
log for the libvirtd crash

Comment 4 Peter Krempa 2016-09-26 10:53:36 UTC
A less useless backtrace than the one posted in the bug summary would help as well. (The above one does not contain any threads that would execute anything even remotely linked to the crash.)

Comment 5 Dan Zheng 2016-09-27 08:16:21 UTC
Program received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x3ffface8f080 (LWP 68610)]
__strcmp_power8 () at ../sysdeps/powerpc/powerpc64/power8/strcmp.S:49
49              ld      r8,0(r3)
(gdb) set logging on
Copying output to gdb.txt.
(gdb) t a a bt
 
Thread 16 (Thread 0x3ffface8f080 (LWP 68610)):
#0  __strcmp_power8 () at ../sysdeps/powerpc/powerpc64/power8/strcmp.S:49
#1  0x00003fff9b6e8d84 in qemuDomainMachineIsPSeries (def=<optimized out>) at qemu/qemu_domain.c:5246
#2  0x00003fff9b6d8804 in qemuParseCommandLineDisk (old_style_ceph_args=<optimized out>, nvirtiodisk=0, dom=<optimized out>,
    val=0x3fffa40015e0 "file=/var/lib/libvirt/images/RHEL-7.3-ppc64le-latest.qcow2,index=0", xmlopt=<optimized out>)
    at qemu/qemu_parse_command.c:657
#3  qemuParseCommandLine (caps=0x3fff601cd780, xmlopt=0x3fff601d01c0, progenv=0x3fffa4003d70, progargv=0x3fffa4000aa0,
    pidfile=0x3ffface8e1c0, monConfig=0x3ffface8e1b8, monJSON=0x3ffface8e1af) at qemu/qemu_parse_command.c:2279
#4  0x00003fff9b6dbfa8 in qemuParseCommandLinePid (caps=0x3fff601cd780, xmlopt=0x3fff601d01c0, pid=<optimized out>,
    pidfile=0x3ffface8e1c0, monConfig=0x3ffface8e1b8, monJSON=0x3ffface8e1af) at qemu/qemu_parse_command.c:2747
#5  0x00003fff9b75f754 in qemuDomainQemuAttach (conn=0x3fffa4000b10, pid_value=<optimized out>, flags=<optimized out>)
    at qemu/qemu_driver.c:15684
#6  0x00003fffb59a1334 in virDomainQemuAttach (conn=0x3fffa4000b10, pid_value=<optimized out>, flags=<optimized out>) at libvirt-qemu.c:154
#7  0x0000000021cddd30 in qemuDispatchDomainAttach (server=0x1001e060c70, msg=<optimized out>, ret=0x3fffa4001520, args=0x3fffa4001460,
    rerr=0x3ffface8e4a0, client=0x1001e0704b0) at qemu_dispatch.h:168
#8  qemuDispatchDomainAttachHelper (server=0x1001e060c70, client=0x1001e0704b0, msg=<optimized out>, rerr=0x3ffface8e4a0,
    args=0x3fffa4001460, ret=0x3fffa4001520) at qemu_dispatch.h:146
#9  0x00003fffb57baabc in virNetServerProgramDispatchCall (msg=0x1001e06f3b0, client=0x1001e0704b0, server=0x1001e060c70,
    prog=0x1001e069b10) at rpc/virnetserverprogram.c:437
#10 virNetServerProgramDispatch (prog=0x1001e069b10, server=0x1001e060c70, client=0x1001e0704b0, msg=0x1001e06f3b0)
    at rpc/virnetserverprogram.c:307
#11 0x0000000021cfa370 in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>, srv=0x1001e060c70)
    at rpc/virnetserver.c:148
#12 virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x1001e060c70) at rpc/virnetserver.c:169
#13 0x00003fffb5658b9c in virThreadPoolWorker (opaque=<optimized out>) at util/virthreadpool.c:167
#14 0x00003fffb565759c in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#15 0x00003fffb4c68728 in start_thread (arg=0x3ffface8f080) at pthread_create.c:310
#16 0x00003fffb4b9d210 in clone () at ../sysdeps/unix/sysv/linux/powerpc/powerpc64/clone.S:109

Comment 6 Peter Krempa 2016-09-27 08:22:22 UTC
Thanks for the backtrace. This crashes in qemuDomainMachineIsPSeries since def->os.machine is not initialized and the function does not check it.

Comment 7 Andrea Bolognani 2017-10-10 14:29:28 UTC
Patch posted upstream.

  https://www.redhat.com/archives/libvir-list/2017-October/msg00406.html

Comment 8 Andrea Bolognani 2017-10-10 16:02:46 UTC
v2 patch posted upstream.

  https://www.redhat.com/archives/libvir-list/2017-October/msg00413.html

Comment 9 Andrea Bolognani 2017-10-11 06:46:00 UTC
Fix merged upstream.

commit 0e0e328dc1acc6a871910d17446013140a966080
Author: Andrea Bolognani <abologna>
Date:   Tue Oct 10 15:53:53 2017 +0200

    qemu: Don't crash when parsing command line lacking -M
    
    Parse the -M (or -machine) command line option before starting
    processing in earnest and have a fallback ready in case it's not
    present, so that while parsing other options we can rely on
    def->os.machine being initialized.
    
    Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1379218
    
    Signed-off-by: Andrea Bolognani <abologna>
    Reviewed-by: Daniel P. Berrange <berrange>

v3.8.0-69-g0e0e328dc

Comment 11 Dan Zheng 2017-11-06 08:05:43 UTC
Packages used:
qemu-kvm-rhev-2.10.0-4.el7.ppc64le
libvirt-3.9.0-1.el7.ppc64le
kernel-3.10.0-771.el7.ppc64le


Since the qemu command line fails when 'file=' is used without 'if=none', I used the command below:

 /usr/libexec/qemu-kvm -m 2048 -drive file=/var/lib/avocado/data/avocado-vt/images/jeos-25-64.qcow2,if=none,index=0 -monitor unix:/tmp/ss,server,nowait -name test -uuid 1fdf7c78-866a-4dcf-b017-5a9299682e1f

[1] 18948
# VNC server running on ::1:5900

# virsh qemu-attach 18948

But libvirtd still crashed.

See attachment libvirtd_crash.gdb.log.

Comment 12 Dan Zheng 2017-11-06 08:06:18 UTC
Created attachment 1348452 [details]
libvirtd_crash.gdb.log

Comment 13 Andrea Bolognani 2017-11-06 08:17:23 UTC
(In reply to Dan Zheng from comment #11)
> Packages used:
> qemu-kvm-rhev-2.10.0-4.el7.ppc64le
> libvirt-3.9.0-1.el7.ppc64le
> kernel-3.10.0-771.el7.ppc64le
> 
> 
> As qemu command line will fail when missing if=none with 'file=', I used
> below command
> 
>  /usr/libexec/qemu-kvm -m 2048 -drive
> file=/var/lib/avocado/data/avocado-vt/images/jeos-25-64.qcow2,if=none,
> index=0 -monitor unix:/tmp/ss,server,nowait -name test -uuid
> 1fdf7c78-866a-4dcf-b017-5a9299682e1f
> 
> [1] 18948
> # VNC server running on ::1:5900
> 
> # virsh qemu-attach 18948
> 
> But still libvirtd crashed.
> 
> See attachment libvirtd_crash.gdb.log.

This is a separate bug, one that looks like it would reproduce
on x86_64 as well. Can you please verify whether that's the case
and file a bug accordingly?

Comment 14 Dan Zheng 2017-11-07 08:29:16 UTC
Andrea,
Yes, you are right.
This is a new bug that also occurs on x86. I will file a new one.

Comment 15 Dan Zheng 2017-11-08 08:52:06 UTC
Filed bug 1510781 for the new issue.
Marking this one verified.

Comment 19 errata-xmlrpc 2018-04-10 10:39:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0704

