Bug 1552465 - High TRex packets loss during live migration over ovs+dpdk+vhost-user
Summary: High TRex packets loss during live migration over ovs+dpdk+vhost-user
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: openvswitch
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Maxime Coquelin
QA Contact: Pei Zhang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-03-07 08:09 UTC by Pei Zhang
Modified: 2018-06-21 13:37 UTC (History)
CC List: 11 users

Fixed In Version: openvswitch-2.9.0-38.el7fdp
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1568678 (view as bug list)
Environment:
Last Closed: 2018-06-21 13:36:35 UTC
Target Upstream Version:
Embargoed:


Attachments
TRex server log (88.73 KB, text/plain), 2018-03-07 08:09 UTC, Pei Zhang
XML of VM (3.68 KB, text/html), 2018-03-07 08:15 UTC, Pei Zhang


Links
Red Hat Product Errata RHBA-2018:1962, last updated 2018-06-21 13:37:49 UTC

Description Pei Zhang 2018-03-07 08:09:37 UTC
Created attachment 1405175 [details]
TRex server log

Description of problem:
There is very high TRex packet loss during NFV live migration; mostly it is 10~20 times the expected loss.


Version-Release number of selected component (if applicable):
kernel-3.10.0-855.el7.x86_64
libvirt-3.9.0-13.el7.x86_64
dpdk-17.11-7.el7.x86_64
microcode-20180108.tgz
qemu-kvm-rhev-2.10.0-21.el7.x86_64
openvswitch-2.9.0-1.el7fdb.x86_64


How reproducible:
13/16


Steps to Reproduce:
1. Start OVS on the source and destination hosts, see reference [1]

2. Boot VM

3. Load vfio and start testpmd in VM

4. Start TRex as the packet generator on a third host, see reference [4]

Default parameters:
Traffic Generator: TRex
Frame Size: 64Byte
Bidirectional: Yes
Stream: 1Mpps
CPU: Intel(R) Xeon(R) CPU E5-2650 v3 @ 2.30GHz
NIC: 10-Gigabit X540-AT2


5. Perform live migration from the source to the destination host:
# /bin/virsh migrate --verbose --persistent --live rhel7.5_nonrt qemu+ssh://192.168.1.2/system

6. Check the migration results during live migration. Some runs show unexpectedly high TRex loss, up to 12 million packets.

===========Stream Rate: 1Mpps===========
No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
 0       1Mpps      128     16567        15    6937850.0
 1       1Mpps      134     17254        16    5421912.0
 2       1Mpps      115     15879        14    5167824.0
 3       1Mpps      121     16385        15    1075464.0
 4       1Mpps      121     16320        14    1129379.0
 5       1Mpps      137     17547        16     622630.0
 6       1Mpps      130     15560        15    7938009.0
 7       1Mpps      129     18414        14    6715591.0
 8       1Mpps      118     16478        15    7467388.0
 9       1Mpps      132     17640        16    3896654.0
 10      1Mpps      129     16090        15    4293640.0
 11      1Mpps      128     16116        14     597561.0
 12      1Mpps      129     14150        15     477036.0
 13      1Mpps      120     17604       112    2627539.0
 14      1Mpps      128     17833        14   12018787.0
 15      1Mpps      129     17099        15    1964834.0

Checking the TRex server statistics of one migration run, the TRex terminal shows a high drop rate (about 220 Mbps) lasting roughly 13 seconds during live migration.

current_time  drop_rate
...
    14.1 sec  0.00  bps
    14.6 sec  0.00  bps
    15.1 sec  0.00  bps
    15.6 sec  168.30 Mbps
    16.1 sec  168.30 Mbps
    16.6 sec  195.18 Mbps
    17.1 sec  195.18 Mbps
    17.6 sec  205.48 Mbps
    18.1 sec  205.48 Mbps
    18.6 sec  210.59 Mbps
    19.1 sec  210.59 Mbps
    19.6 sec  218.94 Mbps
    20.1 sec  218.94 Mbps
    20.6 sec  211.80 Mbps
    21.1 sec  211.80 Mbps
    21.6 sec  214.75 Mbps
    22.1 sec  214.75 Mbps
    22.6 sec  217.30 Mbps
    23.1 sec  217.30 Mbps
    23.7 sec  215.12 Mbps
    24.2 sec  215.12 Mbps
    24.7 sec  209.03 Mbps
    25.2 sec  209.03 Mbps
    25.7 sec  205.19 Mbps
    26.2 sec  205.19 Mbps
    26.7 sec  207.24 Mbps
    27.2 sec  207.24 Mbps
    27.7 sec  181.60 Mbps
    28.2 sec  181.60 Mbps
    28.7 sec  0.00  bps
    29.2 sec  0.00  bps
    29.7 sec  0.00  bps
    30.2 sec  0.00  bps
...

The full TRex server log is attached to this Description.
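
For reference, a minimal sketch of how migration progress and guest reachability could be watched from the source host while step 5 runs. This is not the reporter's actual harness; the guest IP address is a placeholder, and the domain name is the one from step 5.

# Terminal 1: ping the guest continuously to estimate connectivity loss
ping -i 0.1 192.168.100.10

# Terminal 2: poll the libvirt migration job statistics (expected downtime, data remaining)
watch -n 0.5 'virsh domjobinfo rhel7.5_nonrt'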


Actual results:
High TRex packets loss during live migration.


Expected results:
The TRex packet loss should be around 0.4 million packets, as in the results below. These were obtained with the following versions:

qemu-kvm-rhev-2.9.0-16.el7.x86_64
openvswitch-2.7.1-1.git20170710.el7fdp.x86_64

No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
 0       1Mpps      151     12898        16     352527.0
 1       1Mpps      152     13028        16     350399.0
 3       1Mpps      147     12759        17     339891.0
 4       1Mpps      141     14180        16     465830.0
 5       1Mpps      168     12824        17     407926.0
 6       1Mpps      165     13483        20     483641.0
 7       1Mpps      167     12574        20     410246.0
 8       1Mpps      173     12865        18     413503.0
 9       1Mpps      168     12574        16     405473.0
 10      1Mpps      176     13175        18     417233.0
 11      1Mpps      157     13308        15     381677.0
 12      1Mpps      165     13058        26     562075.0
 13      1Mpps      168     13150        15     402580.0
 14      1Mpps      169     13535        17     475747.0


Additional info:
1. This appears to be a regression, as the "Expected results" above show.


Reference:
[1]
# ovs-vsctl show
8e578a3b-ba64-4dab-a9e1-80c37b9811dd
    Bridge "ovsbr1"
        Port "dpdk2"
            Interface "dpdk2"
                type: dpdk
                options: {dpdk-devargs="0000:06:00.0", n_rxq="1", n_txq="1"}
        Port "ovsbr1"
            Interface "ovsbr1"
                type: internal
        Port "vhost-user2"
            Interface "vhost-user2"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser2.sock"}
    Bridge "ovsbr0"
        Port "vhost-user0"
            Interface "vhost-user0"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser0.sock"}
        Port "dpdk1"
            Interface "dpdk1"
                type: dpdk
                options: {dpdk-devargs="0000:04:00.1", n_rxq="1", n_txq="1"}
        Port "dpdk0"
            Interface "dpdk0"
                type: dpdk
                options: {dpdk-devargs="0000:04:00.0", n_rxq="1", n_txq="1"}
        Port "vhost-user1"
            Interface "vhost-user1"
                type: dpdkvhostuserclient
                options: {vhost-server-path="/tmp/vhostuser1.sock"}
        Port "ovsbr0"
            Interface "ovsbr0"
                type: internal
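
For context, a minimal sketch of the ovs-vsctl commands that could produce a layout like the ovsbr0 section above. The bridge, interface, and socket names and the PCI address are taken from the output; datapath_type=netdev and the dpdk-init step are assumptions about the setup and are not shown in the report.

# Assumed one-time DPDK initialization for OVS
ovs-vsctl set Open_vSwitch . other_config:dpdk-init=true
# Bridge and ports as in the ovsbr0 section above
ovs-vsctl add-br ovsbr0 -- set bridge ovsbr0 datapath_type=netdev
ovs-vsctl add-port ovsbr0 dpdk0 -- set Interface dpdk0 type=dpdk \
    options:dpdk-devargs=0000:04:00.0 options:n_rxq=1 options:n_txq=1
ovs-vsctl add-port ovsbr0 vhost-user0 -- set Interface vhost-user0 \
    type=dpdkvhostuserclient options:vhost-server-path=/tmp/vhostuser0.sock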

[4]
# cat start_packets_flow.sh 
DIRECTORY=~/src/lua-trafficgen
cd $DIRECTORY
./binary-search.py \
        --traffic-generator=trex-txrx \
        --validation-runtime=60 \
        --rate-unit=mpps \
        --rate=1 \
        --run-bidirec=1 \
        --run-revunidirec=0 \
        --frame-size=64 \
        --num-flows=1024 \
        --one-shot=1 \

Comment 3 Pei Zhang 2018-03-07 08:15:38 UTC
Created attachment 1405177 [details]
XML of VM

Comment 4 Pei Zhang 2018-03-07 09:56:03 UTC
Additional info (continued):
2. This should not be a qemu-kvm-rhev bug, since with the versions below the TRex packet loss looks as expected:

Versions:
openvswitch-2.7.1-1.git20170710.el7fdp.x86_64
qemu-kvm-rhev-2.10.0-21.el7.x86_64

Results:

In the 20 migration runs below, the TRex packet loss is around 0.4 million packets.

===========Stream Rate: 1Mpps===========
No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
 0       1Mpps      123     12686        15     293542.0
 1       1Mpps      123     13641        14     461975.0
 2       1Mpps      120     13835        14     461714.0
 3       1Mpps      121     13355        15     282249.0
 4       1Mpps      118     13845        14     459771.0
 5       1Mpps      124     14509        15     468748.0
 6       1Mpps      123     13485        15     295984.0
 7       1Mpps      118     12848        14     280997.0
 8       1Mpps      119     14236        14     458436.0
 9       1Mpps      123     13981        15     292635.0

===========Stream Rate: 1Mpps===========
No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
 0       1Mpps      118     13183        13     467103.0
 1       1Mpps      127     13688        15     469440.0
 2       1Mpps      125     13677        18     530158.0
 3       1Mpps      124     13279        15     293528.0
 4       1Mpps      121     13703        15     469770.0
 5       1Mpps      110     14318        14     470910.0
 6       1Mpps      119     14223        13     460204.0
 7       1Mpps      114     13799        15     473836.0
 8       1Mpps      123     12392        16     309198.0
 9       1Mpps      122     14100        14     459757.0

Comment 5 Maxime Coquelin 2018-03-10 17:55:29 UTC
Hi Pei,

Just to confirm, I guess this is without IOMMU support enabled for the virtio devices?

Thanks,
Maxime

Comment 6 Pei Zhang 2018-03-12 03:16:12 UTC
(In reply to Maxime Coquelin from comment #5)
> Hi Pei,
> 
> Just to confirm, I guess this is without IOMMU support enabled for the
> virtio devices?

Hi Maxime, you are right, this is testing without IOMMU support enabled.

(1) OVS does not enable IOMMU support.
(2) The VM XML does not enable IOMMU support.
(3) In the VM, vfio is loaded in noiommu mode:
   # modprobe vfio enable_unsafe_noiommu_mode=Y
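
As an illustration only (not taken from the report), the in-guest steps for (3) typically look something like the following. The PCI addresses, core list, and exact tool name are assumptions:

# Load vfio in noiommu mode and bind the guest virtio NICs to vfio-pci
modprobe vfio enable_unsafe_noiommu_mode=Y
modprobe vfio-pci
dpdk-devbind --bind=vfio-pci 0000:00:02.0 0000:00:03.0   # may be dpdk-devbind.py depending on packaging
# Forward traffic between the two ports with testpmd
testpmd -l 1,2,3 -n 4 -- --forward-mode=io --auto-start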


Best Regards,
Pei

> Thanks,
> Maxime

Comment 7 Eelco Chaudron 2018-03-12 08:20:03 UTC
Hi Pei, I took this BZ, as it looks similar to BZ1512463, for which I was about to ask whether you can replicate it with the latest 2.9.0-8.

However, looking at the details above, this is already the same setup, yet you now get different results. Am I correct?

If so, can you close the other BZ, as it was for 2.8, which we will not ship, and give me access to your reproduction so I can figure out where this issue was introduced?


Maxime if you have any other ideas, please let me know.

Comment 8 Pei Zhang 2018-03-12 09:16:48 UTC
(In reply to Eelco Chaudron from comment #7)
> Hi Pei, I took this BZ, as it looks familiar to BZ1512463, which I was about
> to request if you can replicate this with the latest 2.9.0-8. 

Hi Eelco,

With openvswitch-2.9.0-8.el7fdn.x86_64, I still hit this issue.

> However looking at the details above this is already the same setup,
> however, you now get different results. Am I correct?

These two bugs use the same setup. Actually, I am not sure whether they are the same issue; here are my concerns:

(1) For Bug 1512463, the ping network does not recover at all for 1~3 seconds in some migration runs (not always).

(2) Regarding this bug, the ping issue above still exists. The difference is that even when ping works well, some migration runs still hit high TRex packet loss.

For example, in the No. 13 and No. 14 migration runs below: when the ping loss is an unexpected 112, the TRex loss is high; but even when the ping loss is a normal 14, the TRex loss is still high.

===========Stream Rate: 1Mpps===========
No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
...
 13      1Mpps      120     17604       112    2627539.0
 14      1Mpps      128     17833        14   12018787.0


> If so, can you close the other BZ, as it was for 2.8, which we will not
> ship, and let me access your replication and figure out where this issue got
> introduced?

I'll close Bug 1512463 as a duplicate of this one. After this one is fixed, if the ping issue still exists, let's re-open it. Is that OK?


My hosts are available now and I'll share them in the next comment. Please just wait a moment.


Thanks,
Pei

> 
> 
> Maxime if you have any other ideas, please let me know.

Comment 9 Pei Zhang 2018-03-12 09:18:27 UTC
*** Bug 1512463 has been marked as a duplicate of this bug. ***

Comment 11 Eelco Chaudron 2018-03-14 12:36:29 UTC
I did some experimenting and concluded that this additional delay was introduced by the following patch (on DPDK 16.11.3):

commit 6bf02ab821fb02033b07e4d194811648ce8c30f0
Author: Tiwei Bie <tiwei.bie>
Date:   Tue Aug 1 17:01:21 2017 +0800

    vhost: make page logging atomic
    
    [ backported from upstream commit 897f13a1f726cefdc68762da83f9d2225a85c27e ]
    
    Each dirty page logging operation should be atomic. But it's not
    atomic in current implementation. So it's possible that some dirty
    pages can't be logged successfully when different threads try to
    log different pages into the same byte of the log buffer concurrently.
    This patch fixes this issue.
    
    Fixes: b171fad1ffa5 ("vhost: log used vring changes")
    
    Reported-by: Xiao Wang <xiao.w.wang>
    Signed-off-by: Tiwei Bie <tiwei.bie>
    Reviewed-by: Maxime Coquelin <maxime.coquelin>


When I undo this patch, the numbers are ok:


No Stream_Rate Downtime Totaltime Ping_Loss moongen_Loss
 0       1Mpps      141     13360        16       502976
 1       1Mpps      134     12820        16       489831
 2       1Mpps      143     14325        17       509406
 3       1Mpps      136     12830        15       489608
 4       1Mpps      142     14442        16       502628
 5       1Mpps      126     13286        15       470381
 6       1Mpps      139     14251        16       495002
 7       1Mpps      134     12933        15       479034
 8       1Mpps      134     12607        16       485175
 9       1Mpps      131     13340        15       479923
10       1Mpps      133     14514        15       482964
11       1Mpps      133     13312        18       548132
12       1Mpps      141     13771        16       499937
13       1Mpps      145     12562        17       509381
14       1Mpps      137     13595        15       487223
15       1Mpps      147     14436        17       510727
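
For reference, a minimal sketch of how such a revert experiment can be done against a DPDK 16.11 git tree (legacy make build system; the build target is the usual x86_64 Linux one and is an assumption here):

# In a checkout of the DPDK stable/16.11 branch
git revert 6bf02ab821fb02033b07e4d194811648ce8c30f0
make install T=x86_64-native-linuxapp-gcc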

Comment 13 Eelco Chaudron 2018-03-14 12:41:31 UTC
I also removed the patch from 2.9; there, the high traffic loss with the low TTL timeout is gone. However, I still see the high TTL issue reported in BZ1512463. I'll try to pinpoint where it was introduced, as it was first seen in an OVS 2.8 beta.

===========Stream Rate: 1Mpps===========
No Stream_Rate Downtime Totaltime Ping_Loss moongen_Loss
 0       1Mpps      144     12656        16       506396
 1       1Mpps      122     14309       114      2393523
 2       1Mpps      140     14375        16       500156
 3       1Mpps      136     12959        16       487447
 4       1Mpps      152     14027        17       518767
 5       1Mpps      130     12663        16       479817
 6       1Mpps      125     14340       114      2398783
 7       1Mpps      131     14174        15       440981
 8       1Mpps      137     14274        15       492664
 9       1Mpps      140     13849        16       492308
10       1Mpps      556     14029        58      1292156
11       1Mpps      137     14642       115      2483477
12       1Mpps      127     14226        15       478721
13       1Mpps      132     12717        15       415200
14       1Mpps      132     13583        14       478556
15       1Mpps      146     12933        16       506331

Comment 19 Maxime Coquelin 2018-04-01 07:27:52 UTC
Patch accepted upstream:

commit 394313fff39d0f994325c47f7eab39daf5dc9e11
Author: Maxime Coquelin <maxime.coquelin>
Date:   Wed Mar 21 16:44:13 2018 +0100

    vhost: avoid concurrency when logging dirty pages
    
    This patch aims at fixing a migration performance regression
    faced since atomic operation is used to log pages as dirty when
    doing live migration.
    
    Instead of setting a single bit by doing an atomic read-modify-write
    operation to log a page as dirty, this patch write 0xFF to the
    corresponding byte, and so logs 8 page as dirty.
    
    The advantage is that it avoids concurrent atomic operations by
    multiple PMD threads, the drawback is that some clean pages are
    marked as dirty and so are transferred twice.
    
    Fixes: 897f13a1f726 ("vhost: make page logging atomic")
    Cc: stable
    
    Signed-off-by: Maxime Coquelin <maxime.coquelin>
    Reviewed-by: Jianfeng Tan <jianfeng.tan>

Comment 20 Maxime Coquelin 2018-04-18 07:01:02 UTC
Hi Timothy,

Patch 394313fff39d0f994325c47f7eab39daf5dc9e11 is in upstream master branch now.
It backports without conflict on top of DPDK v17.11 LTS.
Can you pick it directly?

Thanks in advance,
Maxime
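
For context, a backport of an upstream commit onto a 17.11 LTS tree would typically be done roughly as follows (sketch only; 'upstream' is assumed to be a git remote pointing at the main dpdk.org repository):

# In a checkout of the DPDK stable/17.11 branch
git fetch upstream
git cherry-pick -x 394313fff39d0f994325c47f7eab39daf5dc9e11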

Comment 21 Maxime Coquelin 2018-04-18 08:38:46 UTC
Hi Timothy,


Please wait before backporting.
Pei did some more testing this morning and faced issues with more than one queue pair.

It needs more investigation, but I think this is because the patch marks 8 pages at a time as dirty to avoid contention, and the migration then never seems to converge with multiple queue pairs.

Comment 23 Maxime Coquelin 2018-05-18 07:37:27 UTC
Commit now available upstream (v18.04-rc5):

commit c16915b8710911a75f0fbdb1aa5243f4cdfaf26a
Author: Maxime Coquelin <maxime.coquelin>
Date:   Thu May 17 13:44:47 2018 +0200

    vhost: improve dirty pages logging performance
    
    This patch caches all dirty pages logging until the used ring index
    is updated.
    
    The goal of this optimization is to fix a performance regression
    introduced when the vhost library started to use atomic operations
    to set bits in the shared dirty log map. While the fix was valid
    as previous implementation wasn't safe against concurrent accesses,
    contention was induced.
    
    With this patch, during migration, we have:
    1. Less atomic operations as only a single atomic OR operation
    per 32 or 64 (depending on CPU) pages.
    2. Less atomic operations as during a burst, the same page will
    be marked dirty only once.
    3. Less write memory barriers.
    
    Fixes: 897f13a1f726 ("vhost: make page logging atomic")
    Cc: stable
    
    Suggested-by: Michael S. Tsirkin <mst>
    Signed-off-by: Maxime Coquelin <maxime.coquelin>
    Reviewed-by: Tiwei Bie <tiwei.bie>

Comment 25 Pei Zhang 2018-05-24 07:08:06 UTC
Summary:
This bug has been fixed very well. Live migration with vhost-user single queue and 2 queues both work as expected. (Bug 1512463 is tracking the TTL issue.)

Thank you all, Maxime, Eelco.


Versions:
kernel-3.10.0-887.el7.x86_64
qemu-kvm-rhev-2.12.0-2.el7.x86_64
libvirt-3.9.0-14.el7.x86_64
openvswitch-2.9.0-38.el7fdn.x86_64
dpdk-17.11-10.el7fdb.x86_64
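
A quick sanity check that the build under test is the expected one (package names from the list above; whether the packaging changelog references this bug is an assumption):

# Confirm installed builds
rpm -q openvswitch dpdk qemu-kvm-rhev
# Inspect the most recent packaging changelog entries
rpm -q --changelog openvswitch | head -n 20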


Testing Results:
(1) vhost-user single queue (OVS is the vhost-user client):
=======================Stream Rate: 1Mpps=========================
No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
 0       1Mpps      146     14061        15     473762.0
 1       1Mpps      137     16205       213    4446881.0
 2       1Mpps      144     14370        15     460195.0
 3       1Mpps      159     14257        15     489399.0
 4       1Mpps      148     14502        15     469131.0
 5       1Mpps      149     13551        15     471911.0
 6       1Mpps      151     13901        15     308662.0
 7       1Mpps      144     13853        15     467519.0
 8       1Mpps      148     14065        15     476304.0
 9       1Mpps      157     14241        16     485533.0

=======================Stream Rate: 2Mpps=========================
No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
 0       2Mpps      145     13993        15     819557.0
 1       2Mpps      145     14092        15     805606.0
 2       2Mpps      149     14269        16     962528.0
 3       2Mpps      155     14078        17     964581.0
 4       2Mpps      144     14070        15     922834.0
 5       2Mpps      151     14096        15     950100.0
 6       2Mpps      142     13429        14     914852.0
 7       2Mpps      153     13463        15     962724.0
 8       2Mpps      148     14414        16     950910.0
 9       2Mpps      148     14215        15     817256.0

=======================Stream Rate: 3Mpps=========================
No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
 0       3Mpps      159     14421        15    1478081.0
 1       3Mpps      137     14440       112    7177375.0
 2       3Mpps      149     13880        15     954313.0
 3       3Mpps      139     14261        15    1222838.0
 4       3Mpps      154     13637        15    1448481.0
 5       3Mpps      151     14140        14    1309262.0
 6       3Mpps      140     14849       114    7332040.0
 7       3Mpps      134     15950       212   13318512.0
 8       3Mpps      130     14866       112    7288058.0
 9       3Mpps      137     15312       114    7270321.0

=======================Stream Rate: 4Mpps=========================
No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
 0       4Mpps      156     14977        16    2040846.0
 1       4Mpps      152     14641        16    1581355.0
 2       4Mpps      140     15550       113    9823552.0
 3       4Mpps      154     14624        16    1964888.0
 4       4Mpps      136     14475        15    1876555.0
 5       4Mpps      127     15926       113    9823355.0
 6       4Mpps      144     14802        15    1911330.0
 7       4Mpps      132     15589       113    9792719.0
 8       4Mpps      153     14807        15    1949005.0
 9       4Mpps      144     14678        15    1911219.0


(2) vhost-user 2 queues (OVS is the vhost-user client):

=======================Stream Rate: 1Mpps=========================
No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
 0       1Mpps      158     26346        20     397298.0
 1       1Mpps      144     23397       117    2471128.0
 2       1Mpps      147     23234       117    2467102.0
 3       1Mpps      153     24428        18     493740.0
 4       1Mpps      157     21885        18     484966.0
 5       1Mpps      149     18535        18     480116.0
 6       1Mpps      155     22244        19     511991.0
 7       1Mpps      163     15530        20     510145.0
 8       1Mpps      144     23914       117    2459891.0
 9       1Mpps      137     18231       118    2471084.0

=======================Stream Rate: 2Mpps=========================
No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
 0       2Mpps      162     15549        20    1111763.0
 1       2Mpps      149     15924        19    1034097.0
 2       2Mpps      150     15702        19     994752.0
 3       2Mpps      148     15402        19    1024076.0
 4       2Mpps      149     15838        19     969778.0
 5       2Mpps      152     15390        19     974624.0
 6       2Mpps      139     16500       117    4977939.0
 7       2Mpps      151     15490        19    1034991.0
 8       2Mpps      153     15425        18     978547.0
 9       2Mpps      152     15465        18    1029640.0

=======================Stream Rate: 3Mpps=========================
No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
 0       3Mpps      166     15766        20    1586279.0
 1       3Mpps      158     15005        20    1513520.0
 2       3Mpps      161     15509        20    1541615.0
 3       3Mpps      151     14843        18    1640205.0
 4       3Mpps      155     15365        19    1654094.0
 5       3Mpps      164     15198        20    1547613.0
 6       3Mpps      151     15267        18    1501002.0
 7       3Mpps      160     14803        20    1533560.0
 8       3Mpps      138     16774       118    7463825.0
 9       3Mpps      156     14887        19    1647701.0

=======================Stream Rate: 4Mpps=========================
No Stream_Rate Downtime Totaltime Ping_Loss trex_Loss
 0       4Mpps      156     15863        20    2038635.0
 1       4Mpps      156     14483        18    2014078.0
 2       4Mpps      153     15757        19    2024704.0
 3       4Mpps      155     14867        19    2008493.0
 4       4Mpps      152     15581        18    1992717.0
 5       4Mpps      154     15066        20    2060573.0
 6       4Mpps      143     16586       117   10077407.0
 7       4Mpps      145     14632        18    1933209.0
 8       4Mpps      158     15764        19    2031230.0
 9       4Mpps      151     15856       118    9960544.0

Comment 27 Pei Zhang 2018-06-07 00:49:55 UTC
Moving this bug to VERIFIED based on Comment 25.

Comment 29 errata-xmlrpc 2018-06-21 13:36:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1962

