Bug 1401436 - lockless en-queuing for vhost
Summary: lockless en-queuing for vhost
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: kernel
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Target Milestone: rc
Target Release: 7.4
Assignee: Wei
QA Contact: Quan Wenli
Docs Contact: Jiri Herrmann
Depends On:
Blocks: 1395265
Reported: 2016-12-05 09:16 UTC by jason wang
Modified: 2017-08-02 04:53 UTC
CC: 10 users

Fixed In Version: kernel-3.10.0-628.el7
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2017-08-02 04:53:19 UTC
Target Upstream Version:


System ID Priority Status Summary Last Updated
Red Hat Product Errata RHSA-2017:1842 normal SHIPPED_LIVE Important: kernel security, bug fix, and enhancement update 2017-08-01 18:22:09 UTC

Description jason wang 2016-12-05 09:16:22 UTC
Description of problem:

The following patches need to be backported:

commit 04b96e5528ca97199b429810fe963185a67dd40e
Author: Jason Wang <jasowang@redhat.com>
Date:   Mon Apr 25 22:14:33 2016 -0400

    vhost: lockless enqueuing
    We currently use a spinlock to synchronize the work list, which may
    cause unnecessary contention. This patch switches to llist to remove
    that contention. Pktgen tests show about a 5% improvement:
    Before: ~1300000 pps
    After:  ~1370000 pps
    Signed-off-by: Jason Wang <jasowang@redhat.com>
    Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

commit 7235acdb1144460d9f520f0d931f3cbb79eb244c
Author: Jason Wang <jasowang@redhat.com>
Date:   Mon Apr 25 22:14:32 2016 -0400

    vhost: simplify work flushing
    We used to implement work flushing by tracking the queued seq, the
    done seq, and the number of flushes in progress. This patch simplifies
    that by implementing flushing as another kind of vhost work with a
    completion. This will be used by the lockless enqueuing patch.
    Signed-off-by: Jason Wang <jasowang@redhat.com>
    Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

- Compare pktgen/netperf performance before and after the backport.

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Actual results:

Expected results:

Additional info:

Comment 1 Wei 2017-03-15 10:18:34 UTC
Downstream test result on my laptop:
tap2 RX  1564831 pkts/s RX Dropped: 0 pkts/s
tap1 TX  2180650 pkts/s TX Dropped: 1677842 pkts/s

tap2 RX  1582509 pkts/s RX Dropped: 0 pkts/s
tap1 TX  2232357 pkts/s TX Dropped: 1702915 pkts/s

Comment 3 Rafael Aquini 2017-03-25 00:27:16 UTC
Patch(es) committed on kernel repository and an interim kernel build is undergoing testing

Comment 5 Rafael Aquini 2017-03-27 13:16:42 UTC
Patch(es) available on kernel-3.10.0-628.el7

Comment 8 Wei 2017-04-24 15:43:48 UTC
Hi Jiri,
I just had a quick look at the previous release notes for RHEL 7.0. Since this bz is a performance improvement rather than a new feature, I think it is fine to keep it out of the release notes AFAICT.

Comment 9 xiywang 2017-05-23 02:29:14 UTC
Hi Wenli,

Could you help to do performance test?


Comment 10 Quan Wenli 2017-05-24 08:17:52 UTC
(In reply to xiywang from comment #9)
> Hi Wenli,
> Could you help to do performance test?
> Thanks,
> Xiyue

Pps performance indeed improves with kernel-3.10.0-628, so setting this to VERIFIED.

1. Run pktgen.sh on the tap0 device.
2. Gather the pps result on the guest.

host kernel           pkts/s
kernel-3.10.0-627    1130227
kernel-3.10.0-628    1166072

Comment 13 errata-xmlrpc 2017-08-02 04:53:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

