Bug 1394140 - qemu gets SIGSEGV when hot-plug a vhostuser network
Summary: qemu gets SIGSEGV when hot-plug a vhostuser network
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.3
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 7.4
Assignee: Marc-Andre Lureau
QA Contact: Pei Zhang
URL:
Whiteboard:
Keywords: ZStream
Depends On:
Blocks: 1411879 1366108 1395265 1410200
Reported: 2016-11-11 07:48 UTC by chhu
Modified: 2017-08-02 03:35 UTC (History)
CC: 18 users

Fixed In Version: QEMU 2.8
Doc Type: Bug Fix
Doc Text:
Hot plugging a vhostuser network device into a guest virtual machine caused the QEMU emulator to terminate unexpectedly because it accessed an uninitialized chardev structure. With this update, vhostuser handling no longer accesses the structure when it is not initialized. As a result, the vhostuser network device can be hot plugged successfully.
Story Points: ---
Clone Of:
Clones: 1410200
Environment:
Last Closed: 2017-08-01 23:39:45 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
all thread backtrace (17.21 KB, text/plain)
2016-11-11 07:48 UTC, chhu


External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:2392 normal SHIPPED_LIVE Important: qemu-kvm-rhev security, bug fix, and enhancement update 2017-08-01 20:04:36 UTC

Description chhu 2016-11-11 07:48:05 UTC
Created attachment 1219648 [details]
all thread backtrace

Description of problem:
qemu gets a SIGSEGV when hot-plugging a vhostuser network device.


Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.6.0-27.el7.x86_64
kernel: 3.10.0-514.el7.x86_64
libvirt: latest upstream build

How reproducible:
100%

Steps to Reproduce:

1. Start a guest with vhost-user in libvirt.
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 2     r7.3-1                         running
# virsh dumpxml r7.3-1 | grep interface -A 5
    <interface type='vhostuser'>
      <mac address='52:54:00:93:51:dd'/>
      <source type='unix' path='/var/run/openvswitch/vhost-user1' mode='client'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>


2. Try to attach a vhostuser network device to the guest; the attach fails and qemu-kvm-rhev hits a SIGSEGV.
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 2     r7.3-1                         running

# virsh attach-device r7.3-1 vhost-user.xml
error: Failed to attach device from vhost-user.xml
error: Unable to read from monitor: Connection reset by peer
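
The vhost-user.xml file passed to attach-device is not attached to this report; based on the chardev path in the libvirtd.log excerpt (/var/run/openvswitch/vhost-user2), a device XML along these lines would reproduce the attach. The exact file contents are an assumption, not taken from the report (libvirt generates a MAC address when none is given):

```xml
<!-- hypothetical vhost-user.xml reconstructed from the logged chardev path -->
<interface type='vhostuser'>
  <source type='unix' path='/var/run/openvswitch/vhost-user2' mode='client'/>
  <model type='virtio'/>
</interface>
```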

Expected results:
In step 2, the vhostuser network device is attached successfully.


Additional info:

1. The qemu-kvm-rhev SIGSEGV as reported by abrt:
# abrt-cli ls
id 39a02ebdb9677bb75fab9f5128a0bb47817b8f3f
reason:         qemu-kvm killed by SIGSEGV
time:           Thu 10 Nov 2016 12:41:52 AM EST
cmdline:        /usr/libexec/qemu-kvm -name guest=r7.3-1,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-r7.3-1/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu Haswell-noTSX -m 1024 -realtime mlock=off -smp 2,sockets=2,cores=1,threads=1 -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=yes,size=1073741824 -numa node,nodeid=0,cpus=0-1,memdev=ram-node0 -uuid ff2bd6fd-f0a2-429f-99b0-2fcaf8b7a23a -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-r7.3-1/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x5.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x5 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x5.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x5.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/var/lib/libvirt/images/r7.3-console-1.img,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -chardev socket,id=charnet0,path=/var/run/openvswitch/vhost-user1 -netdev vhost-user,chardev=charnet0,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:93:51:dd,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-1-r7.3-1/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 
127.0.0.1:0 -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on

package:        qemu-kvm-rhev-2.6.0-27.el7
uid:            107 (qemu)
count:          2
Directory:      /var/spool/abrt/ccpp-2016-11-10-00:41:52-3642
Run 'abrt-cli report /var/spool/abrt/ccpp-2016-11-10-00:41:52-3642' for creating a case in Red Hat Customer Portal

2. QMP commands sent to qemu before the crash, from libvirtd.log:
17071:2016-11-10 05:41:52.086+0000: 3569: debug : virJSONValueToString:1841 : result={"execute":"chardev-add","arguments":{"id":"charnet1","backend":{"type":"socket","data":{"addr":{"type":"unix","data":{"path":"/var/run/openvswitch/vhost-user2"}},"wait":false,"server":false}}},"id":"libvirt-14"}
17072:2016-11-10 05:41:52.086+0000: 3569: debug : qemuMonitorJSONCommandWithFd:296 : Send command '{"execute":"chardev-add","arguments":{"id":"charnet1","backend":{"type":"socket","data":{"addr":{"type":"unix","data":{"path":"/var/run/openvswitch/vhost-user2"}},"wait":false,"server":false}}},"id":"libvirt-14"}' for write with FD -1
17073:2016-11-10 05:41:52.086+0000: 3569: info : qemuMonitorSend:1009 : QEMU_MONITOR_SEND_MSG: mon=0x7f69f8004500 msg={"execute":"chardev-add","arguments":{"id":"charnet1","backend":{"type":"socket","data":{"addr":{"type":"unix","data":{"path":"/var/run/openvswitch/vhost-user2"}},"wait":false,"server":false}}},"id":"libvirt-14"}
17076:2016-11-10 05:41:52.086+0000: 3566: info : qemuMonitorIOWrite:534 : QEMU_MONITOR_IO_WRITE: mon=0x7f69f8004500 buf={"execute":"chardev-add","arguments":{"id":"charnet1","backend":{"type":"socket","data":{"addr":{"type":"unix","data":{"path":"/var/run/openvswitch/vhost-user2"}},"wait":false,"server":false}}},"id":"libvirt-14"}
17106:2016-11-10 05:41:52.089+0000: 3569: debug : virJSONValueToString:1841 : result={"execute":"netdev_add","arguments":{"type":"vhost-user","chardev":"charnet1","id":"hostnet1"},"id":"libvirt-15"}
17107:2016-11-10 05:41:52.089+0000: 3569: debug : qemuMonitorJSONCommandWithFd:296 : Send command '{"execute":"netdev_add","arguments":{"type":"vhost-user","chardev":"charnet1","id":"hostnet1"},"id":"libvirt-15"}' for write with FD -1
17108:2016-11-10 05:41:52.089+0000: 3569: info : qemuMonitorSend:1009 : QEMU_MONITOR_SEND_MSG: mon=0x7f69f8004500 msg={"execute":"netdev_add","arguments":{"type":"vhost-user","chardev":"charnet1","id":"hostnet1"},"id":"libvirt-15"}
17111:2016-11-10 05:41:52.089+0000: 3566: info : qemuMonitorIOWrite:534 : QEMU_MONITOR_IO_WRITE: mon=0x7f69f8004500 buf={"execute":"netdev_add","arguments":{"type":"vhost-user","chardev":"charnet1","id":"hostnet1"},"id":"libvirt-15"}
17144:2016-11-10 05:41:52.382+0000: 3569: debug : virJSONValueToString:1841 : result={"execute":"chardev-remove","arguments":{"id":"charnet1"},"id":"libvirt-16"}
17145:2016-11-10 05:41:52.382+0000: 3569: debug : qemuMonitorJSONCommandWithFd:296 : Send command '{"execute":"chardev-remove","arguments":{"id":"charnet1"},"id":"libvirt-16"}' for write with FD -1

3. Attached files: qemu-kvm-rhev.btrace

Comment 8 Pei Zhang 2017-06-06 02:06:01 UTC
==Verification==

Note: This bug is verified using the same method as bug 1410200, Comment 5.

Versions:
3.10.0-675.el7.x86_64
qemu-kvm-rhev-2.9.0-7.el7.x86_64

Steps:
1. Boot the guest:
/usr/libexec/qemu-kvm \
-name guest=7.4 \
-machine pc \
-cpu SandyBridge \
-m 1024 \
-smp 2,sockets=2,cores=1,threads=1 \
-object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/dev/hugepages/libvirt/qemu,share=yes,size=1073741824 \
-numa node,nodeid=0,cpus=0-1,memdev=ram-node0 \
-drive file=/home/images_nfv-virt-rt-kvm/rhel7.4_nonrt.qcow2,format=qcow2,if=none,id=drive-virtio-disk0 \
-device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 \
-monitor stdio \
-qmp tcp:0:5555,server,nowait \
-vnc :2


2. Hot plug the vhostuser device via QMP; all commands succeed.
{"execute":"qmp_capabilities"}
{"return": {}}

{"execute": "chardev-add","arguments": {"id": "charnet2","backend": {"type": "socket","data": {"addr": {"type": "unix","data": {"path": "/var/run/openvswitch/vhost-user1"}},"wait": false,"server": false}}},"id": "libvirt-19"}
{"return": {}, "id": "libvirt-19"}

{"execute": "netdev_add","arguments": {"type": "vhost-user","chardev": "charnet2","id": "hostnet2"},"id": "libvirt-20"}
{"return": {}, "id": "libvirt-20"}

{"execute": "device_add", "arguments": { "driver": "virtio-net-pci", "netdev": "hostnet2", "id": "net2" } ,"id": "libvirt-21"}
{"return": {}, "id": "libvirt-21"}

In the guest, the added NIC shows up:
# lspci | grep "Virtio network"
00:04.0 Ethernet controller: Red Hat, Inc Virtio network device

3. Hot unplug the device; the NIC is removed and the guest keeps working.
{"execute": "device_del", "arguments": { "id": "net2" } ,"id": "libvirt-21"}
{"return": {}, "id": "libvirt-21"}

{"execute": "netdev_del","arguments": {"id": "hostnet2"},"id": "libvirt-20"}
{"return": {}, "id": "libvirt-20"}

In the guest, the NIC is removed:
# lspci | grep "Virtio network"
(No output)
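
The QMP exchanges in steps 2 and 3 can be scripted against the guest's QMP socket (-qmp tcp:0:5555,server,nowait from the command line above). The following is a minimal sketch, not a robust QMP client: it assumes the first line read after each command is the response, whereas a production client would also filter out asynchronous events such as DEVICE_DELETED. The helper names and default IDs mirror the transcript above.

```python
import json
import socket

QMP_ADDR = ("127.0.0.1", 5555)  # matches -qmp tcp:0:5555,server,nowait


def hotplug_cmds(chardev_id="charnet2", netdev_id="hostnet2", dev_id="net2",
                 sock_path="/var/run/openvswitch/vhost-user1"):
    """Build the three QMP commands of the hot-plug sequence (step 2)."""
    return [
        {"execute": "chardev-add",
         "arguments": {"id": chardev_id,
                       "backend": {"type": "socket",
                                   "data": {"addr": {"type": "unix",
                                                     "data": {"path": sock_path}},
                                            "wait": False, "server": False}}}},
        {"execute": "netdev_add",
         "arguments": {"type": "vhost-user", "chardev": chardev_id,
                       "id": netdev_id}},
        {"execute": "device_add",
         "arguments": {"driver": "virtio-net-pci", "netdev": netdev_id,
                       "id": dev_id}},
    ]


def unplug_cmds(netdev_id="hostnet2", dev_id="net2"):
    """Build the two QMP commands of the hot-unplug sequence (step 3)."""
    return [
        {"execute": "device_del", "arguments": {"id": dev_id}},
        {"execute": "netdev_del", "arguments": {"id": netdev_id}},
    ]


def run(cmds):
    """Connect to QMP, negotiate capabilities, then send each command."""
    with socket.create_connection(QMP_ADDR) as s:
        f = s.makefile("rw")
        f.readline()  # discard the QMP greeting banner
        for cmd in [{"execute": "qmp_capabilities"}] + cmds:
            f.write(json.dumps(cmd) + "\n")
            f.flush()
            print(f.readline().strip())  # expect {"return": {}} on success


if __name__ == "__main__":
    run(hotplug_cmds())
    run(unplug_cmds())
```

On an unfixed qemu-kvm-rhev-2.6.0 build, the netdev_add step is where the monitor connection drops; on the fixed build all five commands return {"return": {}}.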


So this bug has been fixed. Thanks.

Moving the status of this bug to 'VERIFIED'.

Comment 10 errata-xmlrpc 2017-08-01 23:39:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:2392

Comments 11-15 errata-xmlrpc 2017-08-02: (duplicates of Comment 10)

