Bug 1192775 - Qemu and libvirtd crash while hot-plugging a guest agent into a guest configured with a virtio console
Summary: Qemu and Libvirtd crash while do hot-plug guest agent with guest configured w...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.1
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
: ---
Assignee: Amit Shah
QA Contact: Virtualization Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-02-15 09:09 UTC by vivian zhang
Modified: 2015-12-04 16:27 UTC
12 users

Fixed In Version: qemu-2.3
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-04 16:27:26 UTC
Target Upstream Version:


Attachments
libvirtd.log (431.14 KB, application/x-gzip)
2015-02-15 09:11 UTC, vivian zhang
guest qemu log (13.54 KB, text/plain)
2015-02-15 09:13 UTC, vivian zhang
guest xml (3.50 KB, text/plain)
2015-02-15 09:14 UTC, vivian zhang


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2015:2546 normal SHIPPED_LIVE qemu-kvm-rhev bug fix and enhancement update 2015-12-04 21:11:56 UTC

Description vivian zhang 2015-02-15 09:09:38 UTC
Description:
When a guest is configured with a virtio console device, hot-plugging a guest agent device into that guest causes both qemu and libvirtd to crash.

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.1.2-23.el7.x86_64
3.10.0-229.el7.x86_64
libvirt-1.2.8-16.el7.x86_64
qemu-guest-agent-2.1.0-4.el7.x86_64


How reproducible:
100%

Steps:
1. Prepare a healthy guest and start it with a virtio console, but without a guest agent channel in its XML:
# virsh dumpxml rl
...
 <console type='pty'>
      <source path='/dev/pts/4'/>
      <target type='virtio' port='1'/>
      <alias name='console1'/>
    </console>

...

# virsh list
 Id    Name                           State
----------------------------------------------------
 2     rl                             running


2. Check the libvirtd process ID:
# ps aux |grep libvirtd
root      8606  0.1  0.3 1140540 25296 ?       Ssl  16:13   0:00 /usr/sbin/libvirtd --listen
root     10389  0.0  0.0 112644   964 pts/0    S+   16:24   0:00 grep --color=auto libvirtd

3. Prepare a guest agent channel XML:
# cat /root/agent.xml
<channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/rl.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='2'/>
    </channel>

4. Hot-plug the guest agent device into this guest; the operation fails and both libvirtd and qemu crash:
# virsh attach-device rl /root/agent.xml
error: Failed to attach device from /root/agent.xml
error: End of file while reading data: Input/output error
error: Failed to reconnect to the hypervisor

5. Re-check the libvirtd process ID; it has already changed (libvirtd was restarted after crashing):
# ps aux |grep libvirtd
root     12463  1.6  0.2 483096 18356 ?        tsl  16:29   0:01 /usr/sbin/libvirtd --listen
root     14146  0.0  0.0 112640   964 pts/2    S+   16:30   0:00 grep --color=auto libvirtd

6. Hot-plugging the guest agent into a guest without a virtio console succeeds.

7. The libvirtd crash is tracked separately in Bug 1186765.

8. Useful information captured when qemu and libvirtd crashed:

# gdb attach `pidof qemu-kvm`
(gdb) c
Continuing.

Program received signal SIGSEGV, Segmentation fault.
0x00007fead3f962e6 in __strcmp_ssse3 () from /lib64/libc.so.6
(gdb) t a a bt

Thread 4 (Thread 0x7feaca393700 (LWP 17421)):
#0  0x00007fead3f46257 in ioctl () from /lib64/libc.so.6
#1  0x00007feadb58ff25 in kvm_vcpu_ioctl (cpu=cpu@entry=0x7feade0afcf0, type=type@entry=44672) at /usr/src/debug/qemu-2.1.2/kvm-all.c:1853
#2  0x00007feadb58ffdc in kvm_cpu_exec (cpu=cpu@entry=0x7feade0afcf0) at /usr/src/debug/qemu-2.1.2/kvm-all.c:1722
#3  0x00007feadb57f2d2 in qemu_kvm_cpu_thread_fn (arg=0x7feade0afcf0) at /usr/src/debug/qemu-2.1.2/cpus.c:883
#4  0x00007feada09ddf5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007fead3f4f1ad in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7feac93ff700 (LWP 17449)):
#0  0x00007fead3f44b7d in poll () from /lib64/libc.so.6
#1  0x00007fead5146d37 in red_worker_main () from /lib64/libspice-server.so.1
#2  0x00007feada09ddf5 in start_thread () from /lib64/libpthread.so.0
#3  0x00007fead3f4f1ad in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7feacad95700 (LWP 17539)):
#0  0x00007feada0a38a0 in sem_timedwait () from /lib64/libpthread.so.0
#1  0x00007feadb7bdd07 in qemu_sem_timedwait (sem=sem@entry=0x7feaddf4c778, ms=ms@entry=10000) at util/qemu-thread-posix.c:257
#2  0x00007feadb768bdc in worker_thread (opaque=0x7feaddf4c6e0) at thread-pool.c:96
#3  0x00007feada09ddf5 in start_thread () from /lib64/libpthread.so.0
#4  0x00007fead3f4f1ad in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7feadb488a40 (LWP 17418)):
#0  0x00007fead3f962e6 in __strcmp_ssse3 () from /lib64/libc.so.6
#1  0x00007feadb5a0c50 in find_port_by_name (name=0x7feade3507e0 "org.qemu.guest_agent.0") at /usr/src/debug/qemu-2.1.2/hw/char/virtio-serial-bus.c:67
#2  virtser_port_device_realize (dev=0x7feade344050, errp=0x7fff6df2e870) at /usr/src/debug/qemu-2.1.2/hw/char/virtio-serial-bus.c:874
#3  0x00007feadb6beef8 in device_set_realized (obj=<optimized out>, value=<optimized out>, errp=0x7fff6df2e998) at hw/core/qdev.c:834
#4  0x00007feadb73b67e in property_set_bool (obj=0x7feade344050, v=<optimized out>, opaque=0x7feade350200, name=<optimized out>, errp=0x7fff6df2e998) at qom/object.c:1473
#5  0x00007feadb73de27 in object_property_set_qobject (obj=0x7feade344050, value=<optimized out>, name=0x7feadb7fd610 "realized", errp=0x7fff6df2e998) at qom/qom-qobject.c:24
#6  0x00007feadb73ca40 in object_property_set_bool (obj=obj@entry=0x7feade344050, value=value@entry=true, name=name@entry=0x7feadb7fd610 "realized", errp=errp@entry=0x7fff6df2e998) at qom/object.c:888
#7  0x00007feadb64b8cf in qdev_device_add (opts=opts@entry=0x7feade342930) at qdev-monitor.c:554
#8  0x00007feadb64bcaa in do_device_add (mon=<optimized out>, qdict=<optimized out>, ret_data=<optimized out>) at qdev-monitor.c:677
#9  0x00007feadb583847 in qmp_call_cmd (cmd=<optimized out>, params=0x7feade342a20, mon=0x7feaddf48360) at /usr/src/debug/qemu-2.1.2/monitor.c:5038
#10 handle_qmp_command (parser=<optimized out>, tokens=<optimized out>) at /usr/src/debug/qemu-2.1.2/monitor.c:5104
#11 0x00007feadb7ba2a2 in json_message_process_token (lexer=0x7feaddf48420, token=0x7feade340580, type=JSON_OPERATOR, x=189, y=94) at qobject/json-streamer.c:87
#12 0x00007feadb7cc05f in json_lexer_feed_char (lexer=lexer@entry=0x7feaddf48420, ch=<optimized out>, flush=flush@entry=false) at qobject/json-lexer.c:303
#13 0x00007feadb7cc12e in json_lexer_feed (lexer=0x7feaddf48420, buffer=<optimized out>, size=<optimized out>) at qobject/json-lexer.c:356
#14 0x00007feadb7ba439 in json_message_parser_feed (parser=<optimized out>, buffer=<optimized out>, size=<optimized out>) at qobject/json-streamer.c:110
#15 0x00007feadb5817df in monitor_control_read (opaque=<optimized out>, buf=<optimized out>, size=<optimized out>) at /usr/src/debug/qemu-2.1.2/monitor.c:5125
#16 0x00007feadb656de0 in qemu_chr_be_write (len=<optimized out>, buf=0x7fff6df2ebf0 "}\r", s=0x7feaddefb610) at qemu-char.c:213
#17 tcp_chr_read (chan=<optimized out>, cond=<optimized out>, opaque=0x7feaddefb610) at qemu-char.c:2729
#18 0x00007fead99a69ba in g_main_context_dispatch () from /lib64/libglib-2.0.so.0
#19 0x00007feadb775628 in glib_pollfds_poll () at main-loop.c:190
#20 os_host_main_loop_wait (timeout=<optimized out>) at main-loop.c:235
#21 main_loop_wait (nonblocking=<optimized out>) at main-loop.c:484
#22 0x00007feadb55909e in main_loop () at vl.c:2017
#23 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4606

(gdb) c
Continuing.
[Thread 0x7feaca393700 (LWP 17421) exited]
[Thread 0x7feac93ff700 (LWP 17449) exited]
[Thread 0x7feacad95700 (LWP 17539) exited]

Program terminated with signal SIGSEGV, Segmentation fault.
The program no longer exists.




gdb libvirtd `pidof libvirtd`
(gdb) c
Continuing.
Detaching after fork from child process 14152.
Detaching after fork from child process 14156.

Program received signal SIGABRT, Aborted.
[Switching to Thread 0x7f359a3c0700 (LWP 12464)]
0x00007f35a5dac5d7 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install sssd-client-1.12.2-58.el7.x86_64
(gdb) t a a bt

Thread 11 (Thread 0x7f359a3c0700 (LWP 12464)):
#0  0x00007f35a5dac5d7 in raise () from /lib64/libc.so.6
#1  0x00007f35a5dadcc8 in abort () from /lib64/libc.so.6
#2  0x00007f35a5dece07 in __libc_message () from /lib64/libc.so.6
#3  0x00007f35a5df41fd in _int_free () from /lib64/libc.so.6
#4  0x00007f35a8c5ceba in virFree (ptrptr=ptrptr@entry=0x7f359a3bf9a8) at util/viralloc.c:582
#5  0x00007f35a8cd0818 in virDomainChrDefFree (def=0x7f3584000b10) at conf/domain_conf.c:1659
#6  0x00007f35a8cdeca9 in virDomainDeviceDefFree (def=def@entry=0x7f3584000d20) at conf/domain_conf.c:1942
#7  0x00007f35923d07fb in qemuDomainAttachDeviceFlags (dom=<optimized out>, xml=<optimized out>, flags=<optimized out>) at qemu/qemu_driver.c:7646
#8  0x00007f35a8d53c96 in virDomainAttachDevice (domain=domain@entry=0x7f3584000c80,
    xml=0x7f3584000930 "<channel type='unix'>\n      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/rl.org.qemu.guest_agent.0'/>\n      <target type='virtio' name='org.qemu.guest_agent.0'/>\n      <address type="...) at libvirt.c:10385
#9  0x00007f35a97e5a90 in remoteDispatchDomainAttachDevice (server=<optimized out>, msg=<optimized out>, args=0x7f3584000c50, rerr=0x7f359a3bfc80, client=<optimized out>) at remote_dispatch.h:2485
#10 remoteDispatchDomainAttachDeviceHelper (server=<optimized out>, client=<optimized out>, msg=<optimized out>, rerr=0x7f359a3bfc80, args=0x7f3584000c50, ret=<optimized out>) at remote_dispatch.h:2463
#11 0x00007f35a8db2242 in virNetServerProgramDispatchCall (msg=0x7f35a9e3e610, client=0x7f35a9e3d420, server=0x7f35a9e25ab0, prog=0x7f35a9e300f0) at rpc/virnetserverprogram.c:437
#12 virNetServerProgramDispatch (prog=0x7f35a9e300f0, server=server@entry=0x7f35a9e25ab0, client=0x7f35a9e3d420, msg=0x7f35a9e3e610) at rpc/virnetserverprogram.c:307
#13 0x00007f35a97f33ed in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>, srv=0x7f35a9e25ab0) at rpc/virnetserver.c:172
#14 virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x7f35a9e25ab0) at rpc/virnetserver.c:193
#15 0x00007f35a8cb5e65 in virThreadPoolWorker (opaque=opaque@entry=0x7f35a9e1a780) at util/virthreadpool.c:145
#16 0x00007f35a8cb57fe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#17 0x00007f35a6546df5 in start_thread () from /lib64/libpthread.so.0
#18 0x00007f35a5e6d1ad in clone () from /lib64/libc.so.6

Thread 10 (Thread 0x7f3599bbf700 (LWP 12465)):
#0  0x00007f35a654a705 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f35a8cb5a46 in virCondWait (c=c@entry=0x7f35a9e25c30, m=m@entry=0x7f35a9e25c08) at util/virthread.c:153
#2  0x00007f35a8cb5efb in virThreadPoolWorker (opaque=opaque@entry=0x7f35a9e1a620) at util/virthreadpool.c:104
#3  0x00007f35a8cb57fe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f35a6546df5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f35a5e6d1ad in clone () from /lib64/libc.so.6

Thread 9 (Thread 0x7f35993be700 (LWP 12466)):
#0  0x00007f35a654a705 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f35a8cb5a46 in virCondWait (c=c@entry=0x7f35a9e25c30, m=m@entry=0x7f35a9e25c08) at util/virthread.c:153
#2  0x00007f35a8cb5efb in virThreadPoolWorker (opaque=opaque@entry=0x7f35a9e1a780) at util/virthreadpool.c:104
#3  0x00007f35a8cb57fe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f35a6546df5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f35a5e6d1ad in clone () from /lib64/libc.so.6

Thread 8 (Thread 0x7f3598bbd700 (LWP 12467)):
#0  0x00007f35a654a705 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f35a8cb5a46 in virCondWait (c=c@entry=0x7f35a9e25c30, m=m@entry=0x7f35a9e25c08) at util/virthread.c:153
#2  0x00007f35a8cb5efb in virThreadPoolWorker (opaque=opaque@entry=0x7f35a9e1a620) at util/virthreadpool.c:104
#3  0x00007f35a8cb57fe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f35a6546df5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f35a5e6d1ad in clone () from /lib64/libc.so.6

Thread 7 (Thread 0x7f35983bc700 (LWP 12468)):
#0  0x00007f35a654a705 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f35a8cb5a46 in virCondWait (c=c@entry=0x7f35a9e25c30, m=m@entry=0x7f35a9e25c08) at util/virthread.c:153
#2  0x00007f35a8cb5efb in virThreadPoolWorker (opaque=opaque@entry=0x7f35a9e1a780) at util/virthreadpool.c:104
#3  0x00007f35a8cb57fe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f35a6546df5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f35a5e6d1ad in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7f3597bbb700 (LWP 12469)):
#0  0x00007f35a654a705 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f35a8cb5a46 in virCondWait (c=c@entry=0x7f35a9e25cc8, m=m@entry=0x7f35a9e25c08) at util/virthread.c:153
#2  0x00007f35a8cb5f1b in virThreadPoolWorker (opaque=opaque@entry=0x7f35a9e1a620) at util/virthreadpool.c:104
#3  0x00007f35a8cb57fe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f35a6546df5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f35a5e6d1ad in clone () from /lib64/libc.so.6

Thread 5 (Thread 0x7f35973ba700 (LWP 12470)):
#0  0x00007f35a654a705 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f35a8cb5a46 in virCondWait (c=c@entry=0x7f35a9e25cc8, m=m@entry=0x7f35a9e25c08) at util/virthread.c:153
#2  0x00007f35a8cb5f1b in virThreadPoolWorker (opaque=opaque@entry=0x7f35a9e1a780) at util/virthreadpool.c:104
#3  0x00007f35a8cb57fe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f35a6546df5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f35a5e6d1ad in clone () from /lib64/libc.so.6

Thread 4 (Thread 0x7f3596bb9700 (LWP 12471)):
#0  0x00007f35a654a705 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f35a8cb5a46 in virCondWait (c=c@entry=0x7f35a9e25cc8, m=m@entry=0x7f35a9e25c08) at util/virthread.c:153
#2  0x00007f35a8cb5f1b in virThreadPoolWorker (opaque=opaque@entry=0x7f35a9e1a620) at util/virthreadpool.c:104
#3  0x00007f35a8cb57fe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f35a6546df5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f35a5e6d1ad in clone () from /lib64/libc.so.6

Thread 3 (Thread 0x7f35963b8700 (LWP 12472)):
#0  0x00007f35a654a705 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f35a8cb5a46 in virCondWait (c=c@entry=0x7f35a9e25cc8, m=m@entry=0x7f35a9e25c08) at util/virthread.c:153
#2  0x00007f35a8cb5f1b in virThreadPoolWorker (opaque=opaque@entry=0x7f35a9e1a620) at util/virthreadpool.c:104
#3  0x00007f35a8cb57fe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f35a6546df5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f35a5e6d1ad in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7f3595bb7700 (LWP 12473)):
#0  0x00007f35a654a705 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1  0x00007f35a8cb5a46 in virCondWait (c=c@entry=0x7f35a9e25cc8, m=m@entry=0x7f35a9e25c08) at util/virthread.c:153
#2  0x00007f35a8cb5f1b in virThreadPoolWorker (opaque=opaque@entry=0x7f35a9e1a780) at util/virthreadpool.c:104
#3  0x00007f35a8cb57fe in virThreadHelper (data=<optimized out>) at util/virthread.c:197
#4  0x00007f35a6546df5 in start_thread () from /lib64/libpthread.so.0
#5  0x00007f35a5e6d1ad in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7f35a976f880 (LWP 12463)):
#0  0x00007f35a5e62b7d in poll () from /lib64/libc.so.6
#1  0x00007f35a8c79c71 in poll (__timeout=2544, __nfds=11, __fds=<optimized out>) at /usr/include/bits/poll2.h:46
#2  virEventPollRunOnce () at util/vireventpoll.c:643
#3  0x00007f35a8c78762 in virEventRunDefaultImpl () at util/virevent.c:308
#4  0x00007f35a97f489d in virNetServerRun (srv=0x7f35a9e25ab0) at rpc/virnetserver.c:1139
#5  0x00007f35a97c15b8 in main (argc=<optimized out>, argv=<optimized out>) at libvirtd.c:1507



Actual result:
Qemu and libvirtd crash while hot-plugging a guest agent into a guest configured with a virtio console.

Expected result:
The guest agent channel is hot-plugged successfully; neither qemu nor libvirtd crashes.

Comment 1 vivian zhang 2015-02-15 09:11:55 UTC
Created attachment 991870 [details]
libvirtd.log

Comment 2 vivian zhang 2015-02-15 09:13:35 UTC
Created attachment 991871 [details]
guest qemu log

Comment 3 vivian zhang 2015-02-15 09:14:28 UTC
Created attachment 991872 [details]
guest xml

Comment 5 vivian zhang 2015-03-03 02:32:35 UTC
hi, Amit
I found a similar issue with the same steps described in this bug comment 0
Qemu could also crash when do hot-plug below pty channel device with guest configured virtio console
... 
<channel type='pty'>
<target type='virtio' name='arbitrary.virtio.serial.port.name'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
...

I attach the core dump for your review. Could you please check whether this is the same root cause or a new qemu crash?
(gdb) c
Continuing.
[New Thread 0x7fafa17fe700 (LWP 3102)]
[New Thread 0x7fafa0ffd700 (LWP 3105)]
[New Thread 0x7faf4bfff700 (LWP 3106)]
[New Thread 0x7faf4b7fe700 (LWP 3107)]
[New Thread 0x7faf4affd700 (LWP 3108)]
[New Thread 0x7faf4a7fc700 (LWP 3109)]
[New Thread 0x7faf49ffb700 (LWP 3110)]
[New Thread 0x7faf497fa700 (LWP 3111)]
[New Thread 0x7faf48ff9700 (LWP 3112)]
[New Thread 0x7faf2ffff700 (LWP 3113)]
[New Thread 0x7faf2f7fe700 (LWP 3114)]
[Thread 0x7fafa17fe700 (LWP 3102) exited]
[Thread 0x7faf4bfff700 (LWP 3106) exited]
[Thread 0x7fafa8bc5700 (LWP 2992) exited]
[Thread 0x7faf49ffb700 (LWP 3110) exited]
[Thread 0x7faf2ffff700 (LWP 3113) exited]
[Thread 0x7faf2f7fe700 (LWP 3114) exited]
[Thread 0x7faf4affd700 (LWP 3108) exited]
[Thread 0x7faf497fa700 (LWP 3111) exited]
[Thread 0x7faf48ff9700 (LWP 3112) exited]
[Thread 0x7faf4b7fe700 (LWP 3107) exited]
[Thread 0x7faf4a7fc700 (LWP 3109) exited]
[Thread 0x7fafa0ffd700 (LWP 3105) exited]

Program received signal SIGSEGV, Segmentation fault.
0x00007fafb1bba2e6 in __strcmp_ssse3 () from /lib64/libc.so.6
(gdb) t a a bt

Thread 3 (Thread 0x7fafa3dfe700 (LWP 2995)):
#0 0x00007fafb1b6a257 in ioctl () from /lib64/libc.so.6
#1 0x00007fafb91b3f25 in kvm_vcpu_ioctl (cpu=cpu@entry=0x7fafba86cb60, type=type@entry=44672)
at /usr/src/debug/qemu-2.1.2/kvm-all.c:1853
#2 0x00007fafb91b3fdc in kvm_cpu_exec (cpu=cpu@entry=0x7fafba86cb60) at /usr/src/debug/qemu-2.1.2/kvm-all.c:1722
#3 0x00007fafb91a32d2 in qemu_kvm_cpu_thread_fn (arg=0x7fafba86cb60) at /usr/src/debug/qemu-2.1.2/cpus.c:883
#4 0x00007fafb7cc1df5 in start_thread () from /lib64/libpthread.so.0
#5 0x00007fafb1b731ad in clone () from /lib64/libc.so.6

Thread 2 (Thread 0x7fafa2dff700 (LWP 3023)):
#0 0x00007fafb1b68b7d in poll () from /lib64/libc.so.6
#1 0x00007fafb2d6ad37 in red_worker_main () from /lib64/libspice-server.so.1
#2 0x00007fafb7cc1df5 in start_thread () from /lib64/libpthread.so.0
#3 0x00007fafb1b731ad in clone () from /lib64/libc.so.6

Thread 1 (Thread 0x7fafb90aca40 (LWP 2982)):
#0 0x00007fafb1bba2e6 in __strcmp_ssse3 () from /lib64/libc.so.6
#1 0x00007fafb91c4c50 in find_port_by_name (name=0x7fafbb410b60 "arbitrary.virtio.serial.port.name")
at /usr/src/debug/qemu-2.1.2/hw/char/virtio-serial-bus.c:67
#2 virtser_port_device_realize (dev=0x7fafbab7fe60, errp=0x7fff363c47b0) at /usr/src/debug/qemu-2.1.2/hw/char/virtio-serial-bus.c:874
#3 0x00007fafb92e2ef8 in device_set_realized (obj=<optimized out>, value=<optimized out>, errp=0x7fff363c48d8) at hw/core/qdev.c:834
#4 0x00007fafb935f67e in property_set_bool (obj=0x7fafbab7fe60, v=<optimized out>, opaque=0x7fafbb410ab0, name=<optimized out>,
errp=0x7fff363c48d8) at qom/object.c:1473
#5 0x00007fafb9361e27 in object_property_set_qobject (obj=0x7fafbab7fe60, value=<optimized out>, name=0x7fafb9421610 "realized",
errp=0x7fff363c48d8) at qom/qom-qobject.c:24
#6 0x00007fafb9360a40 in object_property_set_bool (obj=obj@entry=0x7fafbab7fe60, value=value@entry=true,
name=name@entry=0x7fafb9421610 "realized", errp=errp@entry=0x7fff363c48d8) at qom/object.c:888
#7 0x00007fafb926f8cf in qdev_device_add (opts=opts@entry=0x7fafba9c59c0) at qdev-monitor.c:554
#8 0x00007fafb926fcaa in do_device_add (mon=<optimized out>, qdict=<optimized out>, ret_data=<optimized out>) at qdev-monitor.c:677
#9 0x00007fafb91a7847 in qmp_call_cmd (cmd=<optimized out>, params=0x7fafbb1ede00, mon=0x7fafba57ec60)
at /usr/src/debug/qemu-2.1.2/monitor.c:5038
#10 handle_qmp_command (parser=<optimized out>, tokens=<optimized out>) at /usr/src/debug/qemu-2.1.2/monitor.c:5104
#11 0x00007fafb93de2a2 in json_message_process_token (lexer=0x7fafba703150, token=0x7fafba9c56c0, type=JSON_OPERATOR, x=167, y=99)
at qobject/json-streamer.c:87
#12 0x00007fafb93f005f in json_lexer_feed_char (lexer=lexer@entry=0x7fafba703150, ch=<optimized out>, flush=flush@entry=false)
at qobject/json-lexer.c:303
#13 0x00007fafb93f012e in json_lexer_feed (lexer=0x7fafba703150, buffer=<optimized out>, size=<optimized out>)
at qobject/json-lexer.c:356
#14 0x00007fafb93de439 in json_message_parser_feed (parser=<optimized out>, buffer=<optimized out>, size=<optimized out>)
at qobject/json-streamer.c:110
#15 0x00007fafb91a57df in monitor_control_read (opaque=<optimized out>, buf=<optimized out>, size=<optimized out>)
at /usr/src/debug/qemu-2.1.2/monitor.c:5125
#16 0x00007fafb927ade0 in qemu_chr_be_write (len=<optimized out>, buf=0x7fff363c4b30 "}w㱯\177", s=0x7fafba551b80) at qemu-char.c:213
#17 tcp_chr_read (chan=<optimized out>, cond=<optimized out>, opaque=0x7fafba551b80) at qemu-char.c:2729
#18 0x00007fafb75ca9ba in g_main_context_dispatch () from /lib64/libglib-2.0.so.0
#19 0x00007fafb9399628 in glib_pollfds_poll () at main-loop.c:190
#20 os_host_main_loop_wait (timeout=<optimized out>) at main-loop.c:235
#21 main_loop_wait (nonblocking=<optimized out>) at main-loop.c:484
#22 0x00007fafb917d09e in main_loop () at vl.c:2017
#23 main (argc=<optimized out>, argv=<optimized out>, envp=<optimized out>) at vl.c:4606
(gdb)

Comment 6 Amit Shah 2015-03-04 09:04:49 UTC
It's the same issue; patch posted upstream.

Comment 7 Amit Shah 2015-03-16 06:54:19 UTC
Fix is upstream b18a755c4266a340a25ab4118525bd57c3dfc3fa

Comment 8 Gu Nini 2015-06-19 06:07:00 UTC
Tested (hot-plugging a virtio serial chardev device while the virtio console device is already present) with both the qemu command line and libvirt's virsh command; the bug no longer occurs. The detailed software versions are as follows:

Host kernel: 3.10.0-254.el7.x86_64
Qemu-kvm-rhev: qemu-kvm-rhev-2.3.0-2.el7.x86_64
Libvirt: libvirt-1.2.8-16.el7.x86_64

So the bug if fixed and verified well.

Comment 10 Gu Nini 2015-06-19 06:51:47 UTC
(In reply to Gu Nini from comment #8)
> Have done test(hot plug a virtio serial chardev device while there is
> already the virtio console device) with both qemu cmd and libvirt virsh cmd,
> the bug did not occur any more, the detialed software versions are as
> follows:
> 
> Host kernel: 3.10.0-254.el7.x86_64
> Qemu-kvm-rhev: qemu-kvm-rhev-2.3.0-2.el7.x86_64
> Libvirt: libvirt-1.2.8-16.el7.x86_64
> 
> So the bug if fixed and verified well.

So the bug **is** fixed and verified well. Sorry for the typo error.

Comment 11 huiqingding 2015-06-24 03:40:17 UTC
Based on comment 8, setting this bug to VERIFIED.

Best regards,
Huiqing

Comment 14 errata-xmlrpc 2015-12-04 16:27:26 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-2546.html

