Bug 1657077 - QEMU crashes after hot-unplugging virtio-serial device during transfer data - Fast Train [NEEDINFO]
Summary: QEMU crashes after hot-unplugging virtio-serial device during transfer data -...
Keywords:
Status: NEW
Alias: None
Product: Red Hat Enterprise Linux Advanced Virtualization
Classification: Red Hat
Component: qemu-kvm
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: 8.0
Assignee: Julia Suvorova
QA Contact: liunana
URL:
Whiteboard:
Depends On:
Blocks: 1744438 1771318
 
Reported: 2018-12-07 03:07 UTC by xiagao
Modified: 2021-03-19 06:17 UTC
CC: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-03-15 07:32:16 UTC
Type: Bug
Target Upstream Version:
nanliu: needinfo? (jusual)



Description xiagao 2018-12-07 03:07:14 UTC
Description of problem:
Same as the summary: QEMU crashes after hot-unplugging a virtio-serial device while transferring data.

Version-Release number of selected component (if applicable):
qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64
kernel-4.18.0-40.el8.x86_64
seabios-bin-1.11.1-2.module+el8+2179+85112f94.noarch

How reproducible:
4/5

Steps to Reproduce:
1. Boot a win2019 guest with a virtio-serial (vioser) device and the driver installed.

2. Transfer a big file via serial:
host: # telnet localhost 2222
guest: > copy bigfile.txt \\.\Global\com.redhat.rhevm.vdsm1

3. During step 2, hot-unplug the virtio-serial devices via QMP:

{ 'execute': 'device_del', 'arguments': {'id': 'port1' }}{"timestamp": {"seconds": 1544149607, "microseconds": 568491}, "event": "VSERPORT_CHANGE", "data": {"open": true, "id": "port1"}}

{"timestamp": {"seconds": 1544149608, "microseconds": 182996}, "event": "DEVICE_DELETED", "data": {"device": "port1", "path": "/machine/peripheral/port1"}}
{"return": {}}
{"timestamp": {"seconds": 1544149608, "microseconds": 146471}, "event": "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
{ 'execute': 'device_del', 'arguments': {'id': 'port2' }}
{"timestamp": {"seconds": 1544149614, "microseconds": 779675}, "event": "DEVICE_DELETED", "data": {"device": "port2", "path": "/machine/peripheral/port2"}}
{"return": {}}

{'execute':'device_del','arguments':{'id':'virtio-serial1'}}
{"return": {}}
{"timestamp": {"seconds": 1544149628, "microseconds": 883057}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-serial1/virtio-backend"}}
Connection closed by foreign host.
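The device_del sequence in step 3 can also be scripted from the host. A minimal sketch follows; `qmp_device_del` is a hypothetical helper (not part of this report), and the TCP port 1234 matches the -qmp option in the command line under Additional info. The helper only builds the JSON command; the pipeline against a live guest is shown as a comment.

```shell
# Build a QMP device_del command for a given device id.
qmp_device_del() {
    printf '{"execute":"device_del","arguments":{"id":"%s"}}\n' "$1"
}

# Against a running guest (not executed here), feed the capabilities
# handshake and the unplug commands into the QMP socket on port 1234:
#   { echo '{"execute":"qmp_capabilities"}'; sleep 1;
#     qmp_device_del port1;          sleep 1;
#     qmp_device_del port2;          sleep 1;
#     qmp_device_del virtio-serial1; sleep 5; } | nc localhost 1234

qmp_device_del port1
```

The sleeps give QEMU time to emit the DEVICE_DELETED events between commands, mirroring the manual transcript above.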


Actual results:
QEMU crashes.

Expected results:
QEMU keeps running normally.

Additional info:
1.qemu cmd line
/usr/libexec/qemu-kvm -name win2019 -enable-kvm -m 3G -smp 4,maxcpus=8,sockets=8,cores=1,threads=1 -nodefconfig -nodefaults -cpu 'SandyBridge',+kvm_pv_unhalt,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time -rtc base=localtime,driftfix=none -boot order=cd,menu=on -monitor stdio -qmp tcp:0:1234,server,nowait -M q35 -vga std -vnc :10 \
-object secret,id=sec0,data=xiagao -drive file=win2019.luks,key-secret=sec0,format=luks,if=none,id=drive-ide0-0-0,cache=none -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
-netdev tap,script=/etc/qemu-ifup,downscript=no,id=hostnet0,vhost=on,queues=4 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:52:3b:36:86:09,mq=on,vectors=10 \
-device piix3-usb-uhci,id=usb -device usb-tablet,id=input0 \
-drive file=/home/kvm_autotest_root/iso/ISO/Win2019/en_windows_server_2019_x64_dvd_4cb967d8.iso,if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,drive=drive-ide0-1-0,id=ide0-1-0 \
-cdrom /home/kvm_autotest_root/iso/windows/virtio-win.iso.el8 \


-device pcie-root-port,id=pcie-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0  \
-device virtio-serial-pci,id=virtio-serial1,max_ports=511,bus=pcie-root-port-4 \
-chardev socket,id=channel1,host=127.0.0.1,port=2222,server,nowait  -device virtserialport,bus=virtio-serial1.0,chardev=channel1,name=com.redhat.rhevm.vdsm1,id=port1,nr=1 \
-chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait  -device virtserialport,bus=virtio-serial1.0,chardev=channel2,name=com.redhat.rhevm.vdsm2,id=port2,nr=30 \

2.Check the coredump file with gdb

Core was generated by `/usr/libexec/qemu-kvm -name win2019.luks -enable-kvm -m 3G -smp 4,maxcpus=8,soc'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  virtio_pci_common_write (opaque=0x560742bc0850, addr=24, val=128, size=<optimized out>)
    at hw/virtio/virtio-pci.c:1302
1302	        proxy->vqs[vdev->queue_sel].num = val;
[Current thread is 1 (Thread 0x7f647f7fe700 (LWP 13342))]
(gdb) bt
#0  virtio_pci_common_write (opaque=0x560742bc0850, addr=24, val=128, size=<optimized out>)
    at hw/virtio/virtio-pci.c:1302
#1  0x0000560740740176 in memory_region_write_accessor (mr=<optimized out>, addr=<optimized out>, 
    value=<optimized out>, size=<optimized out>, shift=<optimized out>, mask=<optimized out>, 
    attrs=...) at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/memory.c:527
#2  0x000056074073e3d6 in access_with_adjusted_size (addr=addr@entry=24, 
    value=value@entry=0x7f647f7fd5b8, size=size@entry=2, access_size_min=<optimized out>, 
    access_size_max=<optimized out>, 
    access_fn=access_fn@entry=0x560740740130 <memory_region_write_accessor>, mr=0x560742bc1220, 
    attrs=...) at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/memory.c:594
#3  0x00005607407422d8 in memory_region_dispatch_write (mr=0x560742bc1220, addr=24, 
    data=<optimized out>, size=2, attrs=...)
    at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/memory.c:1473
#4  0x00005607406ecfc3 in flatview_write_continue (fv=0x7f64680159c0, addr=4263510040, attrs=..., 
    buf=0x7f6499fe6028 <error: Cannot access memory at address 0x7f6499fe6028>, len=2, 
    addr1=<optimized out>, l=<optimized out>, mr=0x560742bc1220)
    at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/exec.c:3255
#5  0x00005607406ed1e9 in flatview_write (fv=0x7f64680159c0, addr=4263510040, attrs=..., 
    buf=0x7f6499fe6028 <error: Cannot access memory at address 0x7f6499fe6028>, len=2)
    at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/exec.c:3294
#6  0x00005607406f0ca3 in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., 
    buf=<optimized out>, len=<optimized out>)
    at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/exec.c:3384
#7  0x0000560740753468 in kvm_cpu_exec (cpu=<optimized out>)
    at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/accel/kvm/kvm-all.c:1991
#8  0x000056074072d3fe in qemu_kvm_cpu_thread_fn (arg=0x5607418646c0)
    at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/cpus.c:1215
#9  0x00007f64953ed2de in start_thread (arg=<optimized out>) at pthread_create.c:486
#10 0x00007f649511d9f3 in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95

Comment 1 xiagao 2018-12-07 03:18:08 UTC
Core dump file is in the following link.
http://fileshare.englab.nay.redhat.com/pub/section2/coredump/bz1657077/

Comment 2 FuXiangChun 2018-12-11 01:47:40 UTC
xiagao,

Please fill in the relevant fields of the bug (e.g. "QA Whiteboard").

Comment 3 pagupta 2018-12-11 03:15:18 UTC
Does this issue occur only with a Windows guest, or do we face this crash with a Linux guest as well?

Thanks,
Pankaj

Comment 4 xiagao 2018-12-11 04:40:37 UTC
xiaohli, could you test the Linux guest case for comment 3?
Also, this bug is found neither in the virtio-win Polarion test plan nor in automation; could you check whether your test plan covers it? Please fill in the relevant fields.

Comment 5 Li Xiaohui 2018-12-11 06:13:52 UTC
(In reply to xiagao from comment #4)
> xiaohli, could you handle linux guest for comment 3?
> And this bug is found neither in virtio-win polarion test plan nor in
> automation, could you check if your test plan cover it? pls fill the
> relevant fields.

ok, I'm checking, will update the result later

Comment 6 Li Xiaohui 2018-12-11 07:46:36 UTC
(In reply to xiagao from comment #4)
> xiaohli, could you handle linux guest for comment 3?
> And this bug is found neither in virtio-win polarion test plan nor in
> automation, could you check if your test plan cover it? pls fill the
> relevant fields.

Hi, all
I have tested this issue, and didn't reproduce this bug.

Version-Release number of selected component (if applicable):
host info: 
kernel-4.18.0-46.el8.x86_64 & qemu-img-3.1.0-0.el8.rc5.x86_64
guest info:
kernel-4.18.0-46.el8.x86_64


Steps to Reproduce
1.boot guest with commands:
/usr/libexec/qemu-kvm -M q35 \
-cpu SandyBridge \
-enable-kvm \
-m 4G \
-smp 4 \
-nodefaults \
-rtc base=utc,clock=host,driftfix=slew \
-device pcie-root-port,id=pcie.0-root-port-2,slot=2,chassis=2,addr=0x2,bus=pcie.0 \
-device pcie-root-port,id=pcie.0-root-port-3,slot=3,chassis=3,addr=0x3,bus=pcie.0 \
-device pcie-root-port,id=pcie.0-root-port-4,slot=4,chassis=4,addr=0x4,bus=pcie.0 \
-device pcie-root-port,id=pcie.0-root-port-5,slot=5,chassis=5,addr=0x5,bus=pcie.0 \
-device virtio-scsi-pci,id=scsi0,bus=pcie.0-root-port-2,addr=0x0 \
-blockdev driver=qcow2,cache.direct=off,cache.no-flush=on,file.filename=/mnt/rhel-image/rhel8-q35-virtio-scsi.qcow2,node-name=my_disk,file.driver=file \
-device scsi-hd,bus=scsi0.0,drive=my_disk \
-device virtio-net-pci,mac=24:be:05:0c:1e:1c,id=netdev1,vectors=4,netdev=net1,bus=pcie.0-root-port-3 -netdev tap,id=net1,vhost=on \
-device nec-usb-xhci,id=controller,bus=pcie.0-root-port-4 \
-device usb-tablet,id=input0,bus=controller.0 \
-device virtio-serial-pci,id=virtio-serial1,max_ports=511,bus=pcie.0-root-port-5 \
-chardev socket,id=channel1,host=127.0.0.1,port=2222,server,nowait \
-device virtserialport,bus=virtio-serial1.0,chardev=channel1,name=com.redhat.rhevm.vdsm1,id=port1,nr=1 \
-chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait \
-device virtserialport,bus=virtio-serial1.0,chardev=channel2,name=com.redhat.rhevm.vdsm2,id=port2,nr=30 \
-vnc :3 \
-qmp tcp:0:4440,server,nowait \
-monitor stdio \
-device VGA \
-boot menu=on \

2. After the guest boots, start transferring data through the serial ports:
(1)transfer data via port1:
(host)[root@localhost qemu-sh]# nc 127.0.0.1 2222 
(guest)[root@dhcp-65-198 ~]# cat transfer1.sh 
while [ 1 ]
do
    echo hello,port1 > /dev/vport2p1
done
(guest)[root@dhcp-65-198 ~]# ./transfer1.sh

(2)transfer data via port2
(host)[root@localhost qemu-sh]# nc -U /tmp/helloworld2
(guest)[root@dhcp-65-198 ~]# cat transfer2.sh 
while [ 1 ]
do
    echo hello,port30 > /dev/vport2p30
done
(guest)[root@dhcp-65-198 ~]# ./transfer2.sh

3. While transferring data, delete port1, port2, and the virtio-serial-pci device via QMP commands:
{"execute":"device_del","arguments":{"id":"port1"}}{"timestamp": {"seconds": 1544511209, "microseconds": 385245}, "event": "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
{"timestamp": {"seconds": 1544511209, "microseconds": 455114}, "event": "VSERPORT_CHANGE", "data": {"open": false, "id": "port2"}}

{"timestamp": {"seconds": 1544511209, "microseconds": 687609}, "event": "DEVICE_DELETED", "data": {"device": "port1", "path": "/machine/peripheral/port1"}}
{"return": {}}

{"execute":"device_del","arguments":{"id":"port2"}}{"timestamp": {"seconds": 1544511458, "microseconds": 678557}, "event": "VSERPORT_CHANGE", "data": {"open": false, "id": "port2"}}

{"timestamp": {"seconds": 1544511459, "microseconds": 134882}, "event": "DEVICE_DELETED", "data": {"device": "port2", "path": "/machine/peripheral/port2"}}
{"return": {}}

{"execute":"device_del","arguments":{"id":"virtio-serial1"}}
{"return": {}}
{"timestamp": {"seconds": 1544511663, "microseconds": 279361}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-serial1/virtio-backend"}}
{"timestamp": {"seconds": 1544511663, "microseconds": 337736}, "event": "DEVICE_DELETED", "data": {"device": "virtio-serial1", "path": "/machine/peripheral/virtio-serial1"}}

4. After step 3, check QEMU and the guest: everything works well, no crash happens.

Comment 7 Li Xiaohui 2018-12-11 09:03:48 UTC
Hi xiagao,
Since I couldn't reproduce it in a rhel8 guest with the latest qemu version, could you test again with the latest qemu version and check whether you can still reproduce this bug?

Additional info:
Tried again using "telnet localhost 2222" on the host to receive data; didn't hit the issue.

Comment 8 xiagao 2018-12-12 05:47:49 UTC
I can still reproduce it with the latest qemu version.

kernel: kernel-4.18.0-50.el8.x86_64
qemu: qemu-kvm-3.1.0-0.module+el8+2266+616cf026.next.candidate.x86_64

Comment 12 pagupta 2018-12-14 07:49:19 UTC
(In reply to xiagao from comment #1)
> Core dump file is in the following link.
> http://fileshare.englab.nay.redhat.com/pub/section2/coredump/bz1657077/

It looks like I don't have permissions to download the files in above path. Can you please grant required permissions?

Thanks,
Pankaj

Comment 13 xiagao 2018-12-14 08:02:06 UTC
(In reply to pagupta from comment #12)
> (In reply to xiagao from comment #1)
> > Core dump file is in the following link.
> > http://fileshare.englab.nay.redhat.com/pub/section2/coredump/bz1657077/
> 
> It looks like I don't have permissions to download the files in above path.
> Can you please grant required permissions?
> 
> Thanks,
> Pankaj

You can download now. :)

Comment 14 pagupta 2018-12-17 17:06:02 UTC

I took a look at the core. It appears qdev is NULL when the guest tries to write to the PCI region: the PCI device (e.g. virtio-serial) has been unplugged, but the guest is still writing to it.
As this issue only occurs with a Windows guest, it may be a bug in the guest driver.

static void virtio_pci_common_write(void *opaque, hwaddr addr,
                                    uint64_t val, unsigned size)
{
    VirtIOPCIProxy *proxy = opaque;
    VirtIODevice *vdev = virtio_bus_get_device(&proxy->bus);

...
...
    case VIRTIO_PCI_COMMON_Q_SIZE:
        proxy->vqs[vdev->queue_sel].num = val;
        break;

}

static inline VirtIODevice *virtio_bus_get_device(VirtioBusState *bus)
{
    BusState *qbus = &bus->parent_obj;
    BusChild *kid = QTAILQ_FIRST(&qbus->children);   /* <== kid is NULL here */
    DeviceState *qdev = kid ? kid->child : NULL;     /* ... so qdev is NULL */

    /* This is used on the data path, the cast is guaranteed
     * to succeed by the qdev machinery.
     */
    return (VirtIODevice *)qdev;
}

(gdb) p ((VirtIOPCIProxy *) 0x560742bc0850)->bus->parent_obj
$5 = {obj = {class = 0x5607417357c0, free = 0x0, properties = 0x560742b74800, ref = 0, parent = 0x0}, parent = 0x0, name = 0x560742bd1050 "", hotplug_handler = 0x0, max_index = 1, realized = false, children = {
    tqh_first = 0x0, tqh_last = 0x560742bc8990}, sibling = {le_next = 0x0, le_prev = 0x560742bc08b0}}

(gdb) p ((VirtIOPCIProxy *) 0x560742bc0850)->bus->parent_obj.children
$6 = {tqh_first = 0x0, tqh_last = 0x560742bc8990}

(gdb)  p ((VirtIOPCIProxy *) 0x560742bc0850)->bus->parent_obj.children.tqh_first
$13 = (struct BusChild *) 0x0                    ================> here
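To make the failure mode concrete, here is a standalone model of the situation above. This is NOT QEMU code: the `model_*` names and trimmed structs are mine, mimicking the excerpt. It only illustrates how a NULL check in the common-config write path would turn the stray guest MMIO write into a no-op instead of the SIGSEGV seen in frame #0.

```c
#include <stddef.h>

/* Model of the crash: after hot-unplug the bus has no children left, so
 * the device lookup returns NULL while the guest may still issue MMIO
 * writes to the virtio-pci common config region. */

typedef struct VirtIODevice { int queue_sel; } VirtIODevice;
typedef struct VirtIOPCIProxy { VirtIODevice *backend; } VirtIOPCIProxy;

/* Mimics virtio_bus_get_device(): NULL once the device is unplugged. */
static VirtIODevice *model_get_device(VirtIOPCIProxy *proxy)
{
    return proxy->backend;
}

/* Defensive write handler: bail out early when the backend is gone,
 * instead of dereferencing vdev as the crashing frame #0 does. */
static int model_common_write(VirtIOPCIProxy *proxy, unsigned long val)
{
    VirtIODevice *vdev = model_get_device(proxy);

    if (vdev == NULL) {
        return -1;  /* device already unplugged: ignore the access */
    }
    vdev->queue_sel = (int)val;  /* the line that faults in the real crash */
    return 0;
}
```

Any real fix would belong in QEMU's virtio_pci_common_write()/virtio_pci_common_read(); the guard above only sketches the shape of such a check.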


(gdb) bt
#0  virtio_pci_common_write (opaque=0x560742bc0850, addr=24, val=128, size=<optimized out>) at hw/virtio/virtio-pci.c:1302
#1  0x0000560740740176 in memory_region_write_accessor (mr=<optimized out>, addr=<optimized out>, value=<optimized out>, size=<optimized out>, shift=<optimized out>, mask=<optimized out>, attrs=...)
    at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/memory.c:527
#2  0x000056074073e3d6 in access_with_adjusted_size (addr=addr@entry=24, value=value@entry=0x7f647f7fd5b8, size=size@entry=2, access_size_min=<optimized out>, access_size_max=<optimized out>, 
    access_fn=access_fn@entry=0x560740740130 <memory_region_write_accessor>, mr=0x560742bc1220, attrs=...) at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/memory.c:594
#3  0x00005607407422d8 in memory_region_dispatch_write (mr=0x560742bc1220, addr=24, data=<optimized out>, size=2, attrs=...) at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/memory.c:1473
#4  0x00005607406ecfc3 in flatview_write_continue (fv=0x7f64680159c0, addr=4263510040, attrs=..., buf=0x7f6499fe6028 <error: Cannot access memory at address 0x7f6499fe6028>, len=2, addr1=<optimized out>, 
    l=<optimized out>, mr=0x560742bc1220) at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/exec.c:3255
#5  0x00005607406ed1e9 in flatview_write (fv=0x7f64680159c0, addr=4263510040, attrs=..., buf=0x7f6499fe6028 <error: Cannot access memory at address 0x7f6499fe6028>, len=2)
    at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/exec.c:3294
#6  0x00005607406f0ca3 in address_space_write (as=<optimized out>, addr=<optimized out>, attrs=..., buf=<optimized out>, len=<optimized out>)
    at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/exec.c:3384
#7  0x0000560740753468 in kvm_cpu_exec (cpu=<optimized out>) at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/accel/kvm/kvm-all.c:1991
#8  0x000056074072d3fe in qemu_kvm_cpu_thread_fn (arg=0x5607418646c0) at /usr/src/debug/qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64/cpus.c:1215
#9  0x00007f64953ed2de in start_thread () from /lib64/libpthread.so.0
#10 0x00007f649511d9f3 in readahead () from /lib64/libc.so.6

Thanks,
Pankaj

Comment 16 xiagao 2018-12-18 03:12:28 UTC
Xiaohli, could you handle comment 9?

Comment 18 Li Xiaohui 2018-12-25 07:31:26 UTC
(In reply to xiagao from comment #16)
> Xiaohli, could you handle comment 9?

Hi Xiaoling,
I tried almost 30 times to reproduce this bug, but couldn't reproduce it in a fast train rhel8 guest when booting with the parameter "max_ports=31".

So this issue may only happen in a Windows guest; you'd better try again on the slow train and boot the guest with "max_ports=31".


Thanks,
Li Xiaohui

Comment 19 xiagao 2018-12-26 06:05:27 UTC
Tested almost 30 times; didn't hit this issue in rhel8 fast train.
qemu-kvm version:
qemu-kvm-2.12.0-49.module+el8+2586+bf759444.x86_64

Comment 20 pagupta 2019-01-04 12:17:24 UTC
Hello Xiaoling,

Any update on comment 18 i.e testing results on windows guest and "max_ports=31"?

Thanks,
Pankaj

Comment 21 xiagao 2019-01-06 12:49:23 UTC
(In reply to pagupta from comment #20)
> Hello Xiaoling,
> 
> Any update on comment 18 i.e testing results on windows guest and
> "max_ports=31"?
> 
> Thanks,
> Pankaj

Replied in comment 19 with "max_ports=31".

Comment 22 pagupta 2019-01-07 12:16:30 UTC
(In reply to xiagao from comment #21)
> (In reply to pagupta from comment #20)
> > Hello Xiaoling,
> > 
> > Any update on comment 18 i.e testing results on windows guest and
> > "max_ports=31"?
> > 
> > Thanks,
> > Pankaj
> 
> Replied in comment 19 with "max_ports=31".

OK. That means both Windows and Linux guests work fine with hot-unplug when max_ports <= 31,
which is the maximum supported value for virtio serial ports. It looks like this issue is
not reproducible with <= 31 ports.

Thanks,
Pankaj

Comment 23 pagupta 2019-01-17 11:58:55 UTC
It looks to me like 'device pcie-root-port' by default has a limitation on slots & channels. When QEMU tries to access the PCIProxy at that index, it crashes on a NULL pointer.
I assume libvirt knows about this and takes proper action.

I am adding Andrea in CC to get his input on this from the libvirt side.

Thanks,
Pankaj

Comment 24 Andrea Bolognani 2019-01-17 12:19:25 UTC
(In reply to pagupta from comment #23)
> It looks to me 'device pcie-root-port' by default has limitation on slot &
> channels. When Qemu is trying to access PCIProxy at one index , its crashing
> with NULL.

Did you mean virtio-serial-pci here? From what I can gather from
the bug report, the issue shows up when we use

  -device virtio-serial-pci,max_ports=X

with X > 31, and even then only when the device is hot-unplugged
during a data transfer. Did I miss something?

> I assume libvirt knows about this and takes proper action.

libvirt doesn't currently perform any validation on the value of
max_ports (except for it being a number), but we can certainly
introduce a check if there are hardcoded limits in QEMU.
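For context, libvirt exposes this knob as the ports attribute on the virtio-serial controller, so any such validation would apply to domain XML along these lines (a sketch only; the socket path and channel name mirror the QEMU command line in this report):

```xml
<controller type='virtio-serial' index='1' ports='511'/>
<channel type='unix'>
  <source mode='bind' path='/tmp/helloworld2'/>
  <target type='virtio' name='com.redhat.rhevm.vdsm2'/>
</channel>
```

A check in libvirt would presumably reject ports values above whatever hard limit QEMU advertises, rather than passing them through to -device virtio-serial-pci,max_ports=X unchecked.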

Comment 25 pagupta 2019-01-17 13:32:51 UTC
I saw this was tested with windows2019, which is the latest version of Windows (GA in Oct 2018).

Can we test windows10, or maybe any previous version of Windows Server, just to confirm whether it was working before? With and without max_ports=511.

Thanks,
Pankaj

Comment 26 xiagao 2019-01-21 02:53:39 UTC
(In reply to pagupta from comment #25)
> I saw this is tested with windows2019 which is latest version of windows, GA
> in Oct 2018.
> 
> Can we test windows10 or maybe any previous version of windows server just
> to confirm if it was working before? with and without max_ports=511
> 
> Thanks,
> Pankaj

Tested a win8.1-64 guest with and without mx-ports=511 about 20 times in fast train; didn't hit a qemu crash.

BTW, I will correct comment 19: it was tested on the SLOW train, not the fast train. Sorry for confusing you.

Comment 27 pagupta 2019-01-21 06:35:42 UTC
(In reply to xiagao from comment #26)
> (In reply to pagupta from comment #25)
> > I saw this is tested with windows2019 which is latest version of windows, GA
> > in Oct 2018.
> > 
> > Can we test windows10 or maybe any previous version of windows server just
> > to confirm if it was working before? with and without max_ports=511
> > 
> > Thanks,
> > Pankaj
> 
> Test win8.1-64 guest with and without mx-ports=511 for about 20 times in
> fast train, didn't hit qemu crash. 
> 
> BTW, I will correct comment 19, it is tested in SLOW train, but not fast
> train. Sorry for confusing you.

Could you please summarize with a small table? This will help narrow down
the pieces that have the problem.

qemu-kvm version    OS(Windows/Linux)   version       max_ports

Thanks,
Pankaj

Comment 28 xiagao 2019-01-21 06:55:57 UTC
(In reply to pagupta from comment #27)
> (In reply to xiagao from comment #26)
> > (In reply to pagupta from comment #25)
> > > I saw this is tested with windows2019 which is latest version of windows, GA
> > > in Oct 2018.
> > > 
> > > Can we test windows10 or maybe any previous version of windows server just
> > > to confirm if it was working before? with and without max_ports=511
> > > 
> > > Thanks,
> > > Pankaj
> > 
> > Test win8.1-64 guest with and without mx-ports=511 for about 20 times in
Sorry for the typo; it should be max_ports=31.

> > fast train, didn't hit qemu crash. 
> > 
> > BTW, I will correct comment 19, it is tested in SLOW train, but not fast
> > train. Sorry for confusing you.
> 
> Could you please summarize with a small chart. This will help to narrow down
> the pieces which have problem.
> 
> qemu-kvm version    OS(Windows/Linux)   version       max_ports
> 
> Thanks,
> Pankaj


qemu-kvm version                                                  guest OS(Windows/Linux)      version     max_ports   qemu-crashed
qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64                   win2019                 10.0.17763        511          yes
qemu-kvm-3.1.0-0.module+el8+2266+616cf026.next.candidate.x86_64    win2019                 10.0.17763        511          yes
qemu-kvm-2.12.0-49.module+el8+2586+bf759444.x86_64                 win2019                 10.0.17763        31           no
qemu-kvm-3.1.0-2.module+el8+2606+2c716ad7.x86_64                   win8.1-64               6.3.9600         31/null       no


@xiaohli could you update your test results? I remember you hit the qemu crash on a Windows guest too.
Thanks.

Comment 29 Li Xiaohui 2019-01-21 07:32:57 UTC
(In reply to xiagao from comment #28)
> 
> qemu-kvm version                                                  guest
> OS(Windows/Linux)      version     max_ports   qemu-crashed
> qemu-kvm-3.0.0-2.module+el8+2208+e41b12e0.x86_64                   win2019  
> 10.0.17763        511          yes
> qemu-kvm-3.1.0-0.module+el8+2266+616cf026.next.candidate.x86_64    win2019  
> 10.0.17763        511          yes
> qemu-kvm-2.12.0-49.module+el8+2586+bf759444.x86_64                 win2019  
> 10.0.17763        31           no
> qemu-kvm-3.1.0-2.module+el8+2606+2c716ad7.x86_64                   win8.1-64
> 6.3.9600         31/null       no
> 
> 
> @xiaohli could you update your test results? I remember you hit qemu crashed
> on windows guest too.
> Thanks.

Hi xiaoling, Pankaj,
I previously tested the following two situations.

qemu-kvm version                   guest OS(Linux/Windows)      version                        max_ports      qemu-crashed
qemu-img-3.1.0-0.el8.rc5.x86_64         windows 2019            unknown                          511              yes
qemu-img-3.1.0-0.el8.rc5.x86_64         rhel8                   kernel-4.18.0-46.el8.x86_64      31               no

Best regards,
Li Xiaohui

Comment 30 pagupta 2019-01-23 08:59:46 UTC
Hi,

Can we have the output of the test below [1]? I want to confirm whether this was working before Windows2019,
or whether it was introduced in the latest version of Windows.

[1]
qemu-kvm version    OS(Windows/Linux)         version       max_ports
qemu-kvm-3.1          win8.1-64               6.3.9600         511  

Thanks,
Pankaj

Comment 31 xiagao 2019-01-24 08:44:49 UTC
Only tried twice; didn't hit the qemu crash. But hit a data-transfer failure after hot-plugging the port and virtio-serial-pci.

Steps:
1. Transfer data from guest to host in a loop.
QMP info:

{"timestamp": {"seconds": 1548318136, "microseconds": 882368}, "event": "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
{"timestamp": {"seconds": 1548318137, "microseconds": 884038}, "event": "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
{"timestamp": {"seconds": 1548318138, "microseconds": 883568}, "event": "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}

2. Hot-unplug the port and the virtio-serial-pci device:
{ 'execute': 'device_del', 'arguments': {'id': 'port1' }}{"timestamp": {"seconds": 1548318142, "microseconds": 884045}, "event": "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
{"timestamp": {"seconds": 1548318143, "microseconds": 131751}, "event": "DEVICE_DELETED", "data": {"device": "port1", "path": "/machine/peripheral/port1"}}
{"return": {}}
{"timestamp": {"seconds": 1548318143, "microseconds": 131057}, "event": "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
{"execute":"device_del","arguments":{"id":"virtio-serial1"}}
{"return": {}}
{"timestamp": {"seconds": 1548318152, "microseconds": 768430}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-serial1/virtio-backend"}}
{"timestamp": {"seconds": 1548318152, "microseconds": 820617}, "event": "DEVICE_DELETED", "data": {"device": "virtio-serial1", "path": "/machine/peripheral/virtio-serial1"}}

3. Hot-plug the virtio-serial-pci device and port; data transfer continues.
{"execute":"device_add","arguments":{"driver":"virtio-serial-pci","id":"virtio-serial1","max_ports":"511","bus":"pcie-root-port-4"}}
{"return": {}}
{"execute":"device_add","arguments":{"driver":"virtserialport","name":"com.redhat.rhevm.vdsm1","chardev":"channel1","bus":"virtio-serial1.0","id":"port1","nr":"1"}}
{"return": {}}
{"timestamp": {"seconds": 1548318184, "microseconds": 699360}, "event": "VSERPORT_CHANGE", "data": {"open": true, "id": "port1"}}
{"timestamp": {"seconds": 1548318185, "microseconds": 694012}, "event": "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
{"timestamp": {"seconds": 1548318186, "microseconds": 692520}, "event": "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
{"timestamp": {"seconds": 1548318187, "microseconds": 698726}, "event": "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
{"timestamp": {"seconds": 1548318188, "microseconds": 699419}, "event": "VSERPORT_CHANGE", "data": {"open": true, "id": "port1"}}
{"timestamp": {"seconds": 1548318189, "microseconds": 695961}, "event": "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}

4. Hot-unplug the port and the virtio-serial-pci device for the second time:
{ 'execute': 'device_del', 'arguments': {'id': 'port1' }}
{"timestamp": {"seconds": 1548318190, "microseconds": 679115}, "event": "DEVICE_DELETED", "data": {"device": "port1", "path": "/machine/peripheral/port1"}}
{"return": {}}
{"timestamp": {"seconds": 1548318190, "microseconds": 678314}, "event": "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
{"execute":"device_del","arguments":{"id":"virtio-serial1"}}
{"return": {}}
{"timestamp": {"seconds": 1548318199, "microseconds": 237078}, "event": "DEVICE_DELETED", "data": {"path": "/machine/peripheral/virtio-serial1/virtio-backend"}}
{"timestamp": {"seconds": 1548318199, "microseconds": 292759}, "event": "DEVICE_DELETED", "data": {"device": "virtio-serial1", "path": "/machine/peripheral/virtio-serial1"}}

5. Hot-plug the virtio-serial-pci device and port; data transfer fails.
{"execute":"device_add","arguments":{"driver":"virtio-serial-pci","id":"virtio-serial1","max_ports":"511","bus":"pcie-root-port-4"}}
{"return": {}}
{"execute":"device_add","arguments":{"driver":"virtserialport","name":"com.redhat.rhevm.vdsm1","chardev":"channel1","bus":"virtio-serial1.0","id":"port1","nr":"1"}}
{"return": {}}
=============There is no data-transfer QMP info

Comment 32 xiagao 2019-01-24 08:57:30 UTC
(In reply to xiagao from comment #31)
> Only try twice, didn't hit qemu crash. But hit data transfer failed issue
> after hotplug port and virtio-serial-pci.
> 
> Steps:
> 1.transfer data from guest to host in a loop.
> get qmp info.
> 
> {"timestamp": {"seconds": 1548318136, "microseconds": 882368}, "event":
> "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
> {"timestamp": {"seconds": 1548318137, "microseconds": 884038}, "event":
> "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
> {"timestamp": {"seconds": 1548318138, "microseconds": 883568}, "event":
> "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
> 
> 2.hotunplug port and virtio-serial-pci device
> { 'execute': 'device_del', 'arguments': {'id': 'port1' }}{"timestamp":
> {"seconds": 1548318142, "microseconds": 884045}, "event": "VSERPORT_CHANGE",
> "data": {"open": false, "id": "port1"}}
> {"timestamp": {"seconds": 1548318143, "microseconds": 131751}, "event":
> "DEVICE_DELETED", "data": {"device": "port1", "path":
> "/machine/peripheral/port1"}}
> {"return": {}}
> {"timestamp": {"seconds": 1548318143, "microseconds": 131057}, "event":
> "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
> {"execute":"device_del","arguments":{"id":"virtio-serial1"}}
> {"return": {}}
> {"timestamp": {"seconds": 1548318152, "microseconds": 768430}, "event":
> "DEVICE_DELETED", "data": {"path":
> "/machine/peripheral/virtio-serial1/virtio-backend"}}
> {"timestamp": {"seconds": 1548318152, "microseconds": 820617}, "event":
> "DEVICE_DELETED", "data": {"device": "virtio-serial1", "path":
> "/machine/peripheral/virtio-serial1"}}
> 
> 3.hotplug virtio-serial-pci device and port, data can continue transferring
> data.
> {"execute":"device_add","arguments":{"driver":"virtio-serial-pci","id":
> "virtio-serial1","max_ports":"511","bus":"pcie-root-port-4"}}
> {"return": {}}
> {"execute":"device_add","arguments":{"driver":"virtserialport","name":"com.
> redhat.rhevm.vdsm1","chardev":"channel1","bus":"virtio-serial1.0","id":
> "port1","nr":"1"}}
> {"return": {}}
> {"timestamp": {"seconds": 1548318184, "microseconds": 699360}, "event":
> "VSERPORT_CHANGE", "data": {"open": true, "id": "port1"}}
> {"timestamp": {"seconds": 1548318185, "microseconds": 694012}, "event":
> "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
> {"timestamp": {"seconds": 1548318186, "microseconds": 692520}, "event":
> "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
> {"timestamp": {"seconds": 1548318187, "microseconds": 698726}, "event":
> "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
> {"timestamp": {"seconds": 1548318188, "microseconds": 699419}, "event":
> "VSERPORT_CHANGE", "data": {"open": true, "id": "port1"}}
> {"timestamp": {"seconds": 1548318189, "microseconds": 695961}, "event":
> "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
> 
> 4. Hot-unplug the port and the virtio-serial-pci device a second time.
> { 'execute': 'device_del', 'arguments': {'id': 'port1' }}
> {"timestamp": {"seconds": 1548318190, "microseconds": 679115}, "event":
> "DEVICE_DELETED", "data": {"device": "port1", "path":
> "/machine/peripheral/port1"}}
> {"return": {}}
> {"timestamp": {"seconds": 1548318190, "microseconds": 678314}, "event":
> "VSERPORT_CHANGE", "data": {"open": false, "id": "port1"}}
> {"execute":"device_del","arguments":{"id":"virtio-serial1"}}
> {"return": {}}
> {"timestamp": {"seconds": 1548318199, "microseconds": 237078}, "event":
> "DEVICE_DELETED", "data": {"path":
> "/machine/peripheral/virtio-serial1/virtio-backend"}}
> {"timestamp": {"seconds": 1548318199, "microseconds": 292759}, "event":
> "DEVICE_DELETED", "data": {"device": "virtio-serial1", "path":
> "/machine/peripheral/virtio-serial1"}}
> 
> 5. Hot-plug the virtio-serial-pci device and port again; this time data transfer fails.
> {"execute":"device_add","arguments":{"driver":"virtio-serial-pci","id":
> "virtio-serial1","max_ports":"511","bus":"pcie-root-port-4"}}
> {"return": {}}
> {"execute":"device_add","arguments":{"driver":"virtserialport","name":"com.
> redhat.rhevm.vdsm1","chardev":"channel1","bus":"virtio-serial1.0","id":
> "port1","nr":"1"}}
> {"return": {}}
> ============= No data-transfer QMP events appear at this point.

qemu version:
qemu-kvm-3.1.0-4.module+el8+2676+33bd6e2b.x86_64

Comment 33 pagupta 2019-01-25 13:42:01 UTC
Thanks for testing.

It looks like this issue only happens with Windows 2019 and qemu-kvm >= 3.0.

As in comment 14, we can see that the value of "struct BusChild *" is zero after hot-unplugging
the virtio-serial device. I looked through the QEMU code for any change that could have
caused this.

AFAIU, after hot-unplugging the virtio-serial device we should not be getting any events from
the guest for the corresponding VIRTIO device. Since this works with Windows 8, I suspect the
issue is on the guest side.

We need assistance from the Windows virtio-serial driver side; adding Amnon to CC.

Thanks,
Pankaj

Comment 34 Gal Hammer 2019-02-12 14:10:22 UTC
(In reply to pagupta from comment #33)
> Thanks for testing.
> 
> It looks like this issue is only happening with Windows2019 and >=
> qemu-kvm-3.0.
> 
> As in comment 14, we can see value of "struct BusChild *)" is zero after
> hot-unplug virtio 
> serial device. I tried to look in Qemu code if there is any change which
> resulted this.
> 
> AFAIU after hot-unplugging virtio-serial device we should not be getting any
> events from guest
> for corresponding VIRTIO device. As this works with windows 8, I suspect the
> issue might be at
> guest side.
> 
> Need assistance from Windows virtio serial driver side, adding Amnon to CC. 
> 
> Thanks,
> Pankaj

There are three bugs (including this one) which basically are saying the same thing, something is wrong after re-plugging a virtio-serial *port*: bug 1657077 (this one), bug 1658144 (dmesg "Error allocating inbufs" after re-plugging) and bug 1669931 (Windows again).

Could it mean that the problem is within QEMU and not guest-related?

If you don't agree with me, feel free to contact me for some Windows' driver debugging ;-).

Thanks, Gal.

Comment 35 pagupta 2019-03-20 10:27:49 UTC
(In reply to Gal Hammer from comment #34)
> (In reply to pagupta from comment #33)
> > Thanks for testing.
> > 
> > It looks like this issue is only happening with Windows2019 and >=
> > qemu-kvm-3.0.
> > 
> > As in comment 14, we can see value of "struct BusChild *)" is zero after
> > hot-unplug virtio 
> > serial device. I tried to look in Qemu code if there is any change which
> > resulted this.
> > 
> > AFAIU after hot-unplugging virtio-serial device we should not be getting any
> > events from guest
> > for corresponding VIRTIO device. As this works with windows 8, I suspect the
> > issue might be at
> > guest side.
> > 
> > Need assistance from Windows virtio serial driver side, adding Amnon to CC. 
> > 
> > Thanks,
> > Pankaj
> 
> There are three bugs (including this one) which basically are saying the
> same thing, something is wrong after re-plugging a virtio-serial *port*: bug
> 1657077 (this one), bug 1658144 (dmesg "Error allocating inbufs" after
> re-plugging) and bug 1669931 (Windows again).

Bug 1658144 is confirmed as a regression in the Linux guest driver. The other two BZs
are with Windows guests.

> 
> Could it mean that the problem is within QEMU and not guest-related?

It doesn't look that way.

> 
> If you don't agree with me, feel free to contact me for some Windows' driver
> debugging ;-).

Yes.

Thanks,
Pankaj

> 
> Thanks, Gal.

Comment 38 Ademar Reis 2020-02-05 22:52:23 UTC
QEMU has been recently split into sub-components and, as a one-time operation to avoid breaking tools, we are setting the QEMU sub-component of this BZ to "General". Please review and change the sub-component if necessary the next time you review this BZ. Thanks.

Comment 39 Luiz Capitulino 2020-10-06 16:18:13 UTC
This is an old bug covering a corner-case scenario that was only reproduced in
a very specific configuration.

Since it's been more than a year without progress and since it seems
there's no customer case attached to it, I'm closing it as WONTFIX.

Please, re-open if the issue can still be reproduced.

Comment 40 liunana 2020-10-20 03:47:09 UTC
(In reply to Luiz Capitulino from comment #39)
> This is an old bug, corner case scenario that was only reproduced in
> a very specific configuration.
> 
> Since it's been more than a year without progress and since it seems
> there's no customer case attached to it, I'm closing it as WONTFIX.
> 
> Please, re-open if the issue can still be reproduced.


Hi, sorry for the late update. I can still reproduce this bug. Can you help check this and confirm when it will be fixed? Thanks.

Test environments
Host:
    kernel-4.18.0-240.el8.x86_64
    qemu-kvm-4.2.0-34.module+el8.3.0+7976+077be4ec.x86_64
    seabios-1.13.0-2.module+el8.3.0+7353+9de0a3cc.x86_64
Guest:
    Win2019



(gdb) bt
#0  0x000055e6c09cd098 in virtio_pci_common_write (opaque=0x55e6c47875e0, addr=52, val=2, size=<optimized out>) at hw/virtio/virtio-pci.c:1297
#1  0x000055e6c07f95a7 in memory_region_write_accessor
    (mr=<optimized out>, addr=<optimized out>, value=<optimized out>, size=<optimized out>, shift=<optimized out>, mask=<optimized out>, attrs=...)
    at /usr/src/debug/qemu-kvm-4.2.0-34.module+el8.3.0+7976+077be4ec.x86_64/memory.c:483
#2  0x000055e6c07f77de in access_with_adjusted_size
    (addr=addr@entry=52, value=value@entry=0x7fbbb7ffe508, size=size@entry=4, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=
    0x55e6c07f9530 <memory_region_write_accessor>, mr=0x55e6c4787fb0, attrs=...)
    at /usr/src/debug/qemu-kvm-4.2.0-34.module+el8.3.0+7976+077be4ec.x86_64/memory.c:544
#3  0x000055e6c07fb6bc in memory_region_dispatch_write (mr=0x55e6c4787fb0, addr=52, data=<optimized out>, op=<optimized out>, attrs=...)
    at /usr/src/debug/qemu-kvm-4.2.0-34.module+el8.3.0+7976+077be4ec.x86_64/memory.c:1475
#4  0x000055e6c07a8747 in flatview_write_continue
    (fv=0x7fbbb010a770, addr=4244635700, attrs=..., buf=0x7fbc90e5b028 <error: Cannot access memory at address 0x7fbc90e5b028>, len=4, addr1=<optimized out>, l=<optimized out>, mr=0x55e6c4787fb0) at /usr/src/debug/qemu-kvm-4.2.0-34.module+el8.3.0+7976+077be4ec.x86_64/include/qemu/host-utils.h:164
#5  0x000055e6c07a8966 in flatview_write
    (fv=0x7fbbb010a770, addr=4244635700, attrs=..., buf=0x7fbc90e5b028 <error: Cannot access memory at address 0x7fbc90e5b028>, len=4)
    at /usr/src/debug/qemu-kvm-4.2.0-34.module+el8.3.0+7976+077be4ec.x86_64/exec.c:3169
#6  0x000055e6c07ace7f in address_space_write () at /usr/src/debug/qemu-kvm-4.2.0-34.module+el8.3.0+7976+077be4ec.x86_64/exec.c:3259
#7  0x000055e6c080a6fa in kvm_cpu_exec (cpu=<optimized out>)
    at /usr/src/debug/qemu-kvm-4.2.0-34.module+el8.3.0+7976+077be4ec.x86_64/accel/kvm/kvm-all.c:2386
#8  0x000055e6c07ef40e in qemu_kvm_cpu_thread_fn (arg=0x55e6c3166f40) at /usr/src/debug/qemu-kvm-4.2.0-34.module+el8.3.0+7976+077be4ec.x86_64/cpus.c:1318
#9  0x000055e6c0b1a844 in qemu_thread_start (args=0x55e6c318e430) at util/qemu-thread-posix.c:519
#10 0x00007fbc9a6c614a in start_thread () at /lib64/libpthread.so.0
#11 0x00007fbc9a3f7f23 in clone () at /lib64/libc.so.6

Comment 43 John Ferlan 2020-10-20 16:02:14 UTC
The environment from Comment 40 says to me this is not a RHEL-AV bug... 

Have you tried with RHEL-AV 8.3.0? That's much closer to upstream. RHEL 8.3.0 is essentially RHEL-AV 8.2.1 (plus/minus a few things).

If it still occurs on RHEL-AV, then that's good to know since that's closer to upstream. If it's fixed there, then excavation for what resolved it at least gets narrowed.

Comment 44 liunana 2020-10-21 04:02:25 UTC
(In reply to John Ferlan from comment #43)
> The environment from Comment 40 says to me this is not a RHEL-AV bug... 
> 
> Have you tried with RHEL-AV 8.3.0?  That's much closer to upstream. The RHEL
> 8.3.0 is essentially RHEL-AV 8.2.1 (plus/minus a few things).
> 
> If it still occurs on RHEL-AV, then that's good to know since that's closer
> to upstream. If it's fixed there, then excavation for what resolved it at
> least gets narrowed.

I can also reproduce this bug with RHEL-AV 8.3.0, so it's still not fixed there.


Test environments:
Host:
   qemu-kvm-5.1.0-14.module+el8.3.0+8438+644aff69.x86_64
   kernel-4.18.0-240.el8.x86_64
Guest
   Win2019

Please help to check, thanks.


Best regards
Liu Nana




(gdb) bt
#0  0x0000560223ec1f00 in virtio_pci_common_write (opaque=0x56022803c030, addr=48, val=1983633728, size=<optimized out>)
    at /usr/src/debug/qemu-kvm-5.1.0-14.module+el8.3.0+8438+644aff69.x86_64/hw/virtio/virtio-pci.c:1316
#1  0x0000560223d6ebc8 in memory_region_write_accessor
    (mr=<optimized out>, addr=<optimized out>, value=<optimized out>, size=<optimized out>, shift=<optimized out>, mask=<optimized out>, attrs=...) at /usr/src/debug/qemu-kvm-5.1.0-14.module+el8.3.0+8438+644aff69.x86_64/softmmu/memory.c:483
#2  0x0000560223d6d30e in access_with_adjusted_size
    (addr=addr@entry=48, value=value@entry=0x7f9d61dfe508, size=size@entry=4, access_size_min=<optimized out>, access_size_max=<optimized out>, access_fn=0x560223d6eb70 <memory_region_write_accessor>, mr=0x56022803ca10, attrs=...)
    at /usr/src/debug/qemu-kvm-5.1.0-14.module+el8.3.0+8438+644aff69.x86_64/softmmu/memory.c:544
#3  0x0000560223d708bc in memory_region_dispatch_write (mr=0x56022803ca10, addr=48, data=<optimized out>, op=<optimized out>, attrs=...)
    at /usr/src/debug/qemu-kvm-5.1.0-14.module+el8.3.0+8438+644aff69.x86_64/softmmu/memory.c:1465
#4  0x0000560223c98c97 in flatview_write_continue
    (fv=0x7f9d441022a0, addr=4244635696, attrs=..., ptr=<optimized out>, len=4, addr1=<optimized out>, l=<optimized out>, mr=0x56022803ca10)
    at /usr/src/debug/qemu-kvm-5.1.0-14.module+el8.3.0+8438+644aff69.x86_64/include/qemu/host-utils.h:164
#5  0x0000560223c98eb6 in flatview_write (fv=0x7f9d441022a0, addr=4244635696, attrs=..., buf=0x7fa027fdf028, len=4)
    at /usr/src/debug/qemu-kvm-5.1.0-14.module+el8.3.0+8438+644aff69.x86_64/exec.c:3217
#6  0x0000560223c9d2cf in address_space_write () at /usr/src/debug/qemu-kvm-5.1.0-14.module+el8.3.0+8438+644aff69.x86_64/exec.c:3308
#7  0x0000560223cebb9a in kvm_cpu_exec (cpu=cpu@entry=0x560226a309d0)
    at /usr/src/debug/qemu-kvm-5.1.0-14.module+el8.3.0+8438+644aff69.x86_64/accel/kvm/kvm-all.c:2530
#8  0x0000560223d6aace in qemu_kvm_cpu_thread_fn (arg=0x560226a309d0)
    at /usr/src/debug/qemu-kvm-5.1.0-14.module+el8.3.0+8438+644aff69.x86_64/softmmu/cpus.c:1188
#9  0x0000560223d6aace in qemu_kvm_cpu_thread_fn (arg=0x560226a309d0)
    at /usr/src/debug/qemu-kvm-5.1.0-14.module+el8.3.0+8438+644aff69.x86_64/softmmu/cpus.c:1160
#10 0x000056022400ce44 in qemu_thread_start (args=0x560226a56cc0)
    at /usr/src/debug/qemu-kvm-5.1.0-14.module+el8.3.0+8438+644aff69.x86_64/util/qemu-thread-posix.c:521
#11 0x00007fa03bf5f14a in start_thread () at /lib64/libpthread.so.0
#12 0x00007fa03bc90f23 in clone () at /lib64/libc.so.6
(gdb) f
#0  virtio_pci_common_write (opaque=0x56022803c030, addr=48, val=1983633728, size=<optimized out>)
    at /usr/src/debug/qemu-kvm-5.1.0-14.module+el8.3.0+8438+644aff69.x86_64/hw/virtio/virtio-pci.c:1316
1316	        proxy->vqs[vdev->queue_sel].used[0] = val;

Comment 45 Luiz Capitulino 2020-10-21 12:20:46 UTC
Hi Liu,

Thanks for re-testing. I'm checking who could work on this BZ.

Comment 48 John Ferlan 2020-10-27 11:55:28 UTC
Amnon - it looks like this should stay in your team's queue rather than in general virt-maint. You could add the Triaged keyword and return it to virt-maint if desired, but that's your decision.

Comment 50 RHEL Program Management 2021-03-15 07:32:16 UTC
After evaluating this issue, there are no plans to address it further or fix it in an upcoming release.  Therefore, it is being closed.  If plans change such that this issue will be fixed in an upcoming release, then the bug can be reopened.

Comment 51 liunana 2021-03-19 06:14:26 UTC
I can still reproduce this bug.

Test Environments:
   kernel-4.18.0-298.el8.x86_64
   qemu-kvm-5.2.0-13.module+el8.4.0+10397+65cef07b.x86_64

Guest: 
   en_windows_10_business_editions_version_20h2_updated_jan_2021_x86_dvd_52e612bc.iso


Test cmdline:
    -device virtio-serial-pci,id=virtio-serial0,max_ports=511,bus=root4 \
    -chardev socket,id=channel1,path=/tmp/helloworld1,server,nowait \
    -device virtserialport,chardev=channel1,name=port1,bus=virtio-serial0.0,id=port1 \
    -chardev socket,id=channel2,path=/tmp/helloworld2,server,nowait \
    -device virtserialport,chardev=channel2,name=port2,bus=virtio-serial0.0,id=port2 \

Test steps:
{"execute":"qmp_capabilities"}
{"return": {}}
{ 'execute': 'device_del', 'arguments': {'id': 'port1' }}{"timestamp": {"seconds": 1616133908, "microseconds": 914172}, "event": "VSERPORT_CHANGE", "data": {"open": true, "id": "port1"}}
{"timestamp": {"seconds": 1616133908, "microseconds": 929580}, "event": "VSERPORT_CHANGE", "data": {"open": true, "id": "port1"}}

{"return": {}}
{"timestamp": {"seconds": 1616133912, "microseconds": 215214}, "event": "DEVICE_DELETED", "data": {"device": "port1", "path": "/machine/peripheral/port1"}}
{ 'execute': 'device_del', 'arguments': {'id': 'port2' }}
{"return": {}}
{"timestamp": {"seconds": 1616133928, "microseconds": 123076}, "event": "DEVICE_DELETED", "data": {"device": "port2", "path": "/machine/peripheral/port2"}}
{'execute':'device_del','arguments':{'id':'virtio-serial0'}}
{"return": {}}
Connection closed by foreign host.



dmesg log on host:
# [184874.492170] qemu-kvm[207933]: segfault at 92 ip 000055de2609dd60 sp 00007f4b849fe430 error 4 in qemu-kvm[55de25cf1000+b13000]
[184874.503564] Code: 1f 00 0f b7 8b 92 00 00 00 48 8d 04 cd 00 00 00 00 48 29 c8 89 94 85 fc 10 00 00 e9 c3 fd ff ff 66 2e 0f 1f 84 00 00 00 00 00 <0f> b7 8b 92 00 00 00 48 8d 04 cd 00 00 00 00 48 29 c8 89 94 85 00
[184879.909940] switch: port 3(tap0) entered disabled state


Hi Julia,


I get a similar error to bug 1716352, but it seems the fixes for that bug don't work for this one.
Could you help check? Thanks.


Best regards
Liu Nana

