Bug 1485867 - No recovery after vhost-user process restart
Summary: No recovery after vhost-user process restart
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: qemu-kvm-rhev
Version: 7.5
Hardware: Unspecified
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Jens Freimann
QA Contact: Pei Zhang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-08-28 10:21 UTC by tiama
Modified: 2017-10-06 08:28 UTC
CC: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-10-06 08:26:49 UTC
Target Upstream Version:
Embargoed:



Description tiama 2017-08-28 10:21:46 UTC
Description of problem:
 No recovery after vhost-user process restart in qemu-kvm-rhev-2.9.0-16.el7.x86_64

Version-Release number of selected component (if applicable):
qemu-kvm-rhev-2.9.0-16.el7.x86_64
3.10.0-693.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. Run a slirp/vlan network backend in a background process:
# /usr/libexec/qemu-kvm \
-net none \
-net socket,vlan=0,udp=localhost:4444,localaddr=localhost:5555 \
-net user,vlan=0

2. Start qemu with vhost-user as server
# /usr/libexec/qemu-kvm \
    -name 'RHEL7.4_vubr'  \
    -drive id=drive_image1,if=none,snapshot=off,aio=native,cache=none,format=qcow2,file=/home/tianqi/rhel7.4_test.qcow2 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,bus=pci.0,addr=0x3 \
    -device virtio-net-pci,netdev=mynet1,mac=54:52:00:1a:2c:01 \
    -chardev socket,id=char0,path=/tmp/vubr.sock,server \
    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc \
    -m 4096  \
    -smp 2  \
    -vnc :31 \
    -monitor stdio

3. Start vubr as client
# ./vhost-user-bridge  -c

4. Do a wget in the guest to check that the network works well:
# wget http://download.eng.bos.redhat.com/brewroot/packages/qemu-kvm-rhev/2.9.0/16.el7_4.5/x86_64/qemu-img-rhev-2.9.0-16.el7_4.5.x86_64.rpm

5. Kill vubr with Ctrl+c

6. Restart vubr
# ./vhost-user-bridge  -c

Actual results:
The network in the guest does not resume; wget cannot continue the download.

Expected results:
The network should resume after vubr is restarted.

Additional info:
This is a regression:
qemu-kvm-rhev-2.6.0-27.el7.x86_64    work
qemu-img-rhev-2.9.0-1.el7.x86_64     fail
qemu-img-rhev-2.9.0-4.el7.x86_64     fail
qemu-img-rhev-2.9.0-6.el7.x86_64     fail
qemu-img-rhev-2.9.0-7.el7.x86_64     fail

Comment 3 tiama 2017-08-29 02:35:47 UTC
qemu-2.10.0-rc4 [1]      fail

Sorry for comment 1:

qemu-kvm-rhev-2.9.0-1.el7.x86_64     fail
qemu-kvm-rhev-2.9.0-4.el7.x86_64     fail
qemu-kvm-rhev-2.9.0-6.el7.x86_64     fail
qemu-kvm-rhev-2.9.0-7.el7.x86_64     fail

[1]  http://download.qemu-project.org/qemu-2.10.0-rc4.tar.xz

Comment 4 Marc-Andre Lureau 2017-08-29 14:53:34 UTC
This is only a vhost-user-bridge regression from 2.8, when libvhost-user was introduced. Since it's only a manual test, I don't think we need to backport it. You can either use vhost-user-bridge from older qemu releases, or use the one from qemu upstream once this fix is applied: "[PATCH 0/2] vhost-user-bridge reconnect regression"

Comment 5 Marc-Andre Lureau 2017-08-29 14:54:27 UTC
(regression introduced in 2.9 actually)

Comment 6 Pei Zhang 2017-08-31 11:49:25 UTC
Hi Marc-Andre,

I agree that vhost-user-bridge is a manual test. If I understand correctly, we can probably test the reconnect issue over OVS with DPDK. 

Versions:
openvswitch-2.8.0-0.1.20170810git3631ed2.el7fdb.x86_64
qemu-kvm-rhev-2.9.0-16.el7.x86_64
3.10.0-693.el7.x86_64
dpdk-17.05-3.el7fdb.x86_64


Steps:
1. Boot OVS as the vhost-user client.

2. Boot the guest as the vhost-user server.

3. In the guest, start testpmd using the vhost-user vNICs.

4. On another host, start MoonGen as the packet generator. 

5. Check packets received/sent in the guest. The network works well.

6. Restart OVS to emulate a vhost-user client reconnect. 

7. Check packets received/sent in the guest. Packets are flowing, so the network can recover.

8. Repeat steps 6-7; the network always recovers.

Based on this scenario, the network can recover. The OVS side of such a setup is sketched below.
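
(For reference, a minimal sketch of the OVS side of this setup, assuming a userspace bridge named ovsbr0 and the vhost-user socket path /tmp/vhost-user1; the bridge name, port name, and path are illustrative, and the guest is started with a server-mode vhost-user chardev as in the Description:)

# ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
# ovs-vsctl add-port ovsbr0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhost-user1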

Comment 7 Pei Zhang 2017-08-31 11:57:15 UTC
Another scenario:
1. Boot OVS as the vhost-user client.

2. Boot the guest as the vhost-user server.

3. In the guest, set an IP address on the vhost-user vNIC.

4. On another host, ping this IP address. Ping receives replies; it works.

5. Restart OVS to emulate a vhost-user client reconnect. 

6. Check packets received/sent in the guest. Ping no longer receives replies, so the network with an IP address does not recover (see the sketch after this comment).


Based on this scenario, the network cannot recover. This is the same issue as in the Description of this bug.


So it seems that, from the vhost-user perspective, the vhost-user server can recover after the client reconnects. 

However, the network with an IP address cannot recover until the guest is rebooted.
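
(A minimal sketch of the check in this scenario; the interface name and addresses are illustrative:)

In the guest:
# ip addr add 192.168.100.2/24 dev eth0
# ip link set eth0 up

On the other host:
# ping 192.168.100.2              <- replies arrive before the OVS restart
# systemctl restart openvswitch   <- emulate the vhost-user client reconnect
# ping 192.168.100.2              <- no replies after the reconnect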

Comment 8 Amnon Ilan 2017-09-07 10:57:15 UTC
(In reply to Pei Zhang from comment #7)
> Another scenario:
> 1. Boot OVS as the vhost-user client.
> 

To isolate OVS issues from this, can you try the same but with the 
PVP setup? (i.e. with testpmd on the host)
http://dpdk.org/doc/guides/howto/pvp_reference_benchmark.html?highlight=pvp

Comment 9 Pei Zhang 2017-09-11 06:51:35 UTC
(In reply to Amnon Ilan from comment #8)
> (In reply to Pei Zhang from comment #7)
> > Another scenario:
> > 1. Boot OVS as the vhost-user client.
> > 
> 
> To isolate OVS issues from this, can you try the same but with the 
> PVP setup? (i.e. with testpmd on the host)
> http://dpdk.org/doc/guides/howto/pvp_reference_benchmark.html?highlight=pvp

PVP works well. This IP issue is probably caused by DPDK.


Versions:
qemu-kvm-rhev-2.9.0-16.el7.x86_64
libvirt-3.7.0-2.el7.x86_64
dpdk-16.11-4.el7fdp.x86_64


Steps:
1. Boot VM as vhost-user server

2. Boot testpmd as vhost-user client
# testpmd -l 19,17,15 --socket-mem=1024,1024 -n 4 \
--vdev 'net_vhost0,iface=/tmp/vhost-user1,client=1' -- \
--portmask=3 --disable-hw-vlan -i --rxq=1 --txq=1 \
--nb-cores=2 --forward-mode=io

testpmd> set portlist 0,1
testpmd> start

3. Set an IP in the guest, and start ping. Works well.

4. Quit testpmd on the host, then restart the vhost-user client (repeat step 2):
testpmd> quit 

5. Check ping in the guest: it works, so the network can recover.


The same steps with dpdk-17.05-3.el7fdb.x86_64: the IP network cannot recover.

Comment 10 Pei Zhang 2017-09-11 08:09:05 UTC
Another update:

For the scenario in Comment 7, when testing with older openvswitch versions, the IP network can recover.


(1) work (IP network can recover)
openvswitch-2.6.1-20.git20161206.el7fdp.x86_64
dpdk-16.11-4.el7fdp.x86_64


(2) work
openvswitch-2.7.2-7.git20170719.el7fdp.x86_64
(linked with dpdk-16.11.2.tar.xz)


(3) fail (IP network cannot recover)
openvswitch-2.8.0-0.1.20170810git3631ed2.el7fdb.x86_64
(linked with dpdk-17.05.1.tar.xz)


Note: Starting with OVS 2.7, OVS is statically linked with DPDK (16.11.1), so the dpdk package no longer needs to be installed on the host. A rough way to check this is sketched below.


So from either the PVP or the openvswitch perspective, the network can recover with dpdk-16.11 but cannot recover with dpdk-17.05.
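
(One rough way to confirm the bundling on a host, as a sketch: since OVS >= 2.7 carries its own statically linked DPDK, the openvswitch package should no longer pull in a separate dpdk package, so the package dependencies can serve as a heuristic check:)

# rpm -q --requires openvswitch | grep -i dpdk
(no output expected for OVS >= 2.7, which is statically linked with DPDK)
# rpm -qa 'dpdk*'
(a separate dpdk package should only be present for OVS 2.6)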

Comment 11 Pei Zhang 2017-09-15 07:56:41 UTC
1. In the Description, we are using the old vhost-user-bridge (which comes from qemu-kvm-rhev-2.6.0-27.el7.src.rpm). With this old version of the tool, vhost-user reconnect on qemu-kvm-rhev-2.9.0-16.el7.x86_64 cannot recover. 

2. We compiled the new vhost-user-bridge (which comes from qemu-kvm-rhev-2.9.0-16.el7.src.rpm). Here is the problem:

When the vhost-user-bridge is restarted, it panics:
# ./vhost-user-bridge -c
ud socket: /tmp/vubr.sock (client)
local:     127.0.0.1:4444
remote:    127.0.0.1:5555
Added sock 3 for watching. max_sock: 3
Added sock 4 for watching. max_sock: 4
Waiting for data from udp backend on 127.0.0.1:4444...
Added sock 5 for watching. max_sock: 5
Added sock 5 for watching. max_sock: 5


   ***   IN UDP RECEIVE CALLBACK    ***

    hdrlen = 12
PANIC: Guest moved used index from 0 to 643
Sock 3 removed from dispatcher watch.
Got UDP packet, but no available descriptors on RX virtq.

So we cannot test the reconnect issue with the latest vhost-user-bridge. This tool has a bug.

3. As Jens said in the mail, besides the reconnect issue, DPDK's testpmd still hits a segmentation fault (I hit the same issue too). So QE reported the two bugs below:
[1] Bug 1491898 - In PVP testing, dpdk's testpmd will "Segmentation fault" after booting VM
[2] Bug 1491909 - IP network can not recover after vhost-user reconnect from OVS side

4. A summary: 
(1) From the PVP and openvswitch layers, vhost-user reconnect works well with qemu-kvm-rhev-2.9.0-16.el7.x86_64. (But the latest versions of dpdk and openvswitch hit regression issues; these are not problems in qemu.)
(2) vhost-user-bridge has a bug, so we cannot test the reconnect issue with this tool.


Thanks, Jens.


Best Regards,
Pei

Comment 12 Pei Zhang 2017-09-15 08:04:35 UTC
The way we compiled the vhost-user-bridge tool:
1. Download qemu-kvm-rhev-2.9.0-16.el7.src.rpm

2. Compile vhost-user-bridge:
# rpm2cpio qemu-kvm-rhev-2.9.0-16.el7.src.rpm | cpio -div
# tar -xvf qemu-2.9.0.tar.xz
# cd qemu-2.9.0
# mkdir build
# cd build/
# ../configure
# make tests/vhost-user-bridge

Comment 13 Jens Freimann 2017-09-15 15:30:06 UTC
(In reply to Pei Zhang from comment #11)
> (2)vhost-user-bridge has a bug. So we can not test reconnect issue with this
> tool.

This should be fixed with the patches Marc-Andre mentioned in comment #4. Just to be sure: Have you tried with a vhost-user-bridge binary from upstream? Or an older one from v2.8.0?

Comment 14 Pei Zhang 2017-09-18 08:20:48 UTC
(In reply to Jens Freimann from comment #13)
> (In reply to Pei Zhang from comment #11)
> > (2)vhost-user-bridge has a bug. So we can not test reconnect issue with this
> > tool.
> 
> This should be fixed with the patches Marc-Andre mentioned in comment #4.

With this fix, the network still cannot recover most of the time.

> Just to be sure: Have you tried with a vhost-user-bridge binary from
> upstream? Or an older one from v2.8.0?

No, I tested the downstream vhost-user-bridge, compiled as in Comment 12. Since the qemu-kvm-rhev-2.8.0-x.el7 builds have all been deleted from brewweb, it seems we cannot download and test qemu-kvm-rhev-2.8 now.  


Best Regards,
Pei

Comment 15 Jens Freimann 2017-09-18 15:21:30 UTC
With upstream vhost-user-bridge from current upstream master (4f2058ded4feb2fa815b33b57b305c81d5016307) I see this when I start vhost-user-bridge to reconnect:

[  210.299955] virtio_net virtio1: output.0:id 0 is not a head!

Reconnect works with v2.8.0, so I ran bisect and found: 
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[e10e798c85c2331dab338b6a01835ebde81136e5] tests/vhost-user-bridge: use contrib/libvhost-user
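
(For reference, the bisect follows the usual pattern between the last known-good release and master; a sketch, with the reconnect test rerun manually at each step:)

# git bisect start
# git bisect bad master
# git bisect good v2.8.0
(rebuild tests/vhost-user-bridge, rerun the reconnect test, then mark each step)
# git bisect good    (or: git bisect bad)
# git bisect reset   (once the first bad commit has been reported)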


Qemu command line:
/usr/local/bin/qemu-system-x86_64 \
    --enable-kvm \
    -drive id=drive_image1,if=none,snapshot=off,cache=none,format=qcow2,file=/root/jens/rhel74_1.qcow2 \
    -device virtio-blk-pci,id=image1,drive=drive_image1,bootindex=0,bus=pci.0,addr=0x3 \
    -device virtio-net-pci,netdev=mynet1,mac=54:52:00:1a:2c:01 \
    -chardev socket,id=char0,path=/tmp/vubr.sock,server \
    -netdev type=vhost-user,id=mynet1,chardev=char0,vhostforce \
    -m 4096 \
    -smp 8 \
    -chardev socket,path=/tmp/port0,server,nowait,id=port0-char \
    -device virtio-serial \
    -device virtserialport,id=port1,name=org.fedoraproject.port.0,chardev=port0-char \
    -nographic -display none -serial mon:stdio \
    -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \
    -numa node,memdev=mem -mem-prealloc

vhost-user-bridge: tests/vhost-user-bridge -c

Procedure: 
In the guest I set up eth0, ran dhclient, and pinged; then I killed vhost-user-bridge and tried to ping again.
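
(In shell form the procedure looks roughly like this; the interface name and ping target are illustrative:)

In the guest:
# ip link set eth0 up
# dhclient eth0
# ping -c 3 192.168.100.1    <- works

On the host:
# pkill -f vhost-user-bridge
# tests/vhost-user-bridge -c

In the guest:
# ping -c 3 192.168.100.1    <- fails; dmesg shows "output.0:id 0 is not a head!"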

Comment 16 Marc-Andre Lureau 2017-09-18 15:45:33 UTC
(In reply to Jens Freimann from comment #15)
> With upstream vhost-user-bridge from current upstream master
> (4f2058ded4feb2fa815b33b57b305c81d5016307) I see this when I start
> vhost-user-bridge to reconnect:
> 
> [  210.299955] virtio_net virtio1: output.0:id 0 is not a head!

We would need to debug the guest driver to understand that error
 
> Reconnect works with v2.8.0, so I ran bisect and found: 
> Bisecting: 0 revisions left to test after this (roughly 0 steps)
> [e10e798c85c2331dab338b6a01835ebde81136e5] tests/vhost-user-bridge: use
> contrib/libvhost-user
>

That's what I found in
https://bugzilla.redhat.com/show_bug.cgi?id=1485867#c4

> Qemu command line:
[...]
> 
> vhost-user-bridge: tests/vhost-user-bridge -c
> 
> Procedure: 
> In the guest I set up eth0, ran dhclient, and pinged; then I killed
> vhost-user-bridge and tried to ping again.

Last time I checked, it was fixed with 672339f7eff5e9226f302037290e84e783d2b5cd. But your testing of upstream includes this fix already. What is the guest kernel version? Are you going to investigate further?

Btw, vhost-user-bridge is a manual test, so a temporary break is to be expected...
I don't think we need to backport the fix in RHEL, and this bug should probably be handled upstream only, no?

Comment 17 Jens Freimann 2017-09-19 08:07:49 UTC
(In reply to Marc-Andre Lureau from comment #16)
> (In reply to Jens Freimann from comment #15)
> > With upstream vhost-user-bridge from current upstream master
> > (4f2058ded4feb2fa815b33b57b305c81d5016307) I see this when I start
> > vhost-user-bridge to reconnect:
> > 
> > [  210.299955] virtio_net virtio1: output.0:id 0 is not a head!
> 
> We would need to debug guest driver to understand that error
>  
> > Reconnect works with v2.8.0, so I ran bisect and found: 
> > Bisecting: 0 revisions left to test after this (roughly 0 steps)
> > [e10e798c85c2331dab338b6a01835ebde81136e5] tests/vhost-user-bridge: use
> > contrib/libvhost-user
> >
> 
> That's what I found in
> https://bugzilla.redhat.com/show_bug.cgi?id=1485867#c4

I wanted to test with 672339f7eff5e9226f302037290e84e783d2b5cd and the follow-up commits included, so I started with upstream master

[...]
> 
> Last time I checked, it was fixed with
> 672339f7eff5e9226f302037290e84e783d2b5cd. But your testing of upstream
> includes this fix already. What is the guest kernel version? Are you going
> to investigate further? 

The guest is RHEL 7.4 with kernel 3.10.0-679.el7.x86_64. Yes, I will investigate further. 

> Btw, vhost-user-bridge is a manual test, so a temporary break is to be
> expected...
> I don't think we need to backport the fix in RHEL, and this bug should
> probably be handled upstream only, no?

Yes, I agree. We can fix this upstream and manual testing could be done with v2.8 in the meantime. 

Pei do you agree?

Comment 18 Pei Zhang 2017-09-19 10:06:44 UTC
(In reply to Jens Freimann from comment #17)
> (In reply to Marc-Andre Lureau from comment #16)
[...]
> 
> > Btw, vhost-user-bridge is a manual test, so a temporary break is to be
> > expected...
> > I don't think we need to backport the fix in RHEL, and this bug should
> > probably be handled upstream only, no?
> 
> Yes, I agree. We can fix this upstream and manual testing could be done with
> v2.8 in the meantime. 
> 
> Pei do you agree?

Hi Jens, Marc-Andre,
 
From the QE perspective, we have two concerns:

1. Users can get the vhost-user-bridge tool, even though we are not quite sure whether they use it.
  
As shown in Comment 12, this tool can be compiled from qemu-kvm-rhev-2.9.0-16.el7.src.rpm, which customers or partners can access.


2. If we don't backport the fix in RHEL, does this mean the vhost-user-bridge tool will not be supported? 

I mean, should QE remove the vhost-user-bridge related test cases and cover the reconnect issue by testing at the PVP or OpenvSwitch layer?


Thanks,
Pei

Comment 19 Jens Freimann 2017-09-20 07:47:59 UTC
(In reply to Pei Zhang from comment #18)
> (In reply to Jens Freimann from comment #17)
> > (In reply to Marc-Andre Lureau from comment #16)
> [...]
> > 
> > > Btw, vhost-user-bridge is a manual test, so a temporary break is to be
> > > expected...
> > > I don't think we need to backport the fix in RHEL, and this bug should
> > > probably be handled upstream only, no?
> > 
> > Yes, I agree. We can fix this upstream and manual testing could be done with
> > v2.8 in the meantime. 
> > 
> > Pei do you agree?
> 
> Hi Jens, Marc-Andre,
>  
> From the QE perspective, we have two concerns:
> 
> 1. Users can get the vhost-user-bridge tool, even though we are not quite
> sure whether they use it.
>   
> As shown in Comment 12, this tool can be compiled from
> qemu-kvm-rhev-2.9.0-16.el7.src.rpm, which customers or partners can access.
> 
> 2. If we don't backport the fix in RHEL, does this mean the vhost-user-bridge
> tool will not be supported? 
> 
> I mean, should QE remove the vhost-user-bridge related test cases and cover
> the reconnect issue by testing at the PVP or OpenvSwitch layer?

It is a tool used for QEMU (unit) tests. I think testing the vhost-user reconnect feature with supported components might be better.

Comment 20 Pei Zhang 2017-09-20 10:32:02 UTC
(In reply to Jens Freimann from comment #19)
[...]
> > 
> > 2. If we don't backport the fix in RHEL, does this mean the vhost-user-bridge
> > tool will not be supported? 
> > 
> > I mean, should QE remove the vhost-user-bridge related test cases and cover
> > the reconnect issue by testing at the PVP or OpenvSwitch layer?
> 
> It is a tool used for QEMU (unit) tests. I think testing the vhost-user
> reconnect feature with supported components might be better.

OK, we will test vhost-user reconnect via PVP and Openvswitch. Thanks.


Best Regards,
Pei

Comment 21 Jens Freimann 2017-10-06 08:26:49 UTC
As discussed, I will look into fixing this upstream, and QE will test vhost-user not with vhost-user-bridge but with a PVP setup and OVS. There is no need to fix this in RHEL.

