Bug 1549955 - During PVP live migration, ping packet loss becomes higher with vIOMMU
Summary: During PVP live migration, ping packet loss becomes higher with vIOMMU
Keywords:
Status: CLOSED DUPLICATE of bug 1572879
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: openvswitch
Version: 7.5
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Open vSwitch development team
QA Contact: ovs-qe
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-02-28 07:13 UTC by Pei Zhang
Modified: 2018-11-27 09:27 UTC
CC List: 10 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-11-27 09:27:36 UTC
Target Upstream Version:
Embargoed:


Attachments
XML of VM (4.75 KB, text/html), 2018-02-28 07:13 UTC, Pei Zhang
ping loss log with vIOMMU (79.42 KB, text/plain), 2018-02-28 07:16 UTC, Pei Zhang

Description Pei Zhang 2018-02-28 07:13:08 UTC
Created attachment 1401637 [details]
XML of VM

Description of problem:
When doing PVP live migration with vIOMMU, the ping packet loss becomes much higher.


Version-Release number of selected component (if applicable):
3.10.0-855.el7.x86_64
qemu-kvm-rhev-2.10.0-21.el7.x86_64
libvirt-3.9.0-13.el7.x86_64
dpdk-17.11-7.el7.x86_64


How reproducible:
100%

Steps to Reproduce:
1. On the source and destination hosts, boot testpmd with iommu-support=1; see [1].

2. On the source host, boot the VM with vIOMMU. For the full XML, refer to the attachment.

3. Start the guest and set the IP address for eth1:
# ifconfig eth1 192.168.2.4/24

4. On the third host (the traffic generator), ping the guest with a 1 ms interval and meanwhile monitor the pings with tcpdump:

# ping 192.168.2.4 -i 0.001
# tcpdump -i em2 -n broadcast or icmp > ping_loss.log

5. Check the ping loss as "requests - replies"; for the script, refer to [2].
With vIOMMU, the loss is as high as 281. With no-iommu, the loss is only 15, and that loss occurs during the downtime window, so it is expected.

An initial check of ping_loss.log shows that most of the extra losses happen in the 3 seconds after the downtime phase. The log will be attached in the next comment.

Actual results:
The ping loss becomes higher with vIOMMU in PVP live migration testing.

Expected results:
The ping loss should be the same when testing with vIOMMU and with no-iommu.

Additional info:

1. With no-iommu, the ping loss is about 15, which is expected.


Reference:
[1]
/usr/bin/testpmd \
-l 2,4,6 \
--socket-mem 1024,1024 \
-n 4 \
--vdev net_vhost3,iface=/tmp/vhost-user3,client=0,iommu-support=1 \
-- \
--portmask=3 \
--disable-hw-vlan \
-i \
--rxq=1 --txq=1 \
--nb-cores=2 \
--forward-mode=io

testpmd> set portlist 0,1
testpmd> start 

[2]
# cat count.py
# Count ICMP echo requests and replies in the tcpdump capture;
# the difference is the number of lost pings.
log = "/root/ping_loss.log"
fd = open(log)
content = fd.read()
fd.close()

request_num = content.count("request")
reply_num = content.count("reply")
loss_num = request_num - reply_num
print("request: %d\n" % request_num)
print("reply: %d\n" % reply_num)
print("loss: %d" % loss_num)

Comment 2 Pei Zhang 2018-02-28 07:16:32 UTC
Created attachment 1401638 [details]
ping loss log with vIOMMU

Comment 3 Peter Xu 2018-03-19 10:58:07 UTC
(In reply to Pei Zhang from comment #2)
> Created attachment 1401638 [details]
> ping loss log with vIOMMU

Hello, Pei,

I see that in the log the first ping loss happened at 01:23:14.211097, while at 01:23:17.334889 the IO recovered.  Does that mean the network is only down for 3 seconds even with vIOMMU?  Isn't that good enough?

Btw, when I tried to use your command, I got this:

$ ping -i 0.001 localhost                                                                                                                  
PING localhost(localhost (::1)) 56 data bytes                                  
ping: cannot flood; minimal interval allowed for user is 200ms                 

So it seems that ping does not welcome such a short interval, so how did you do that?

Thanks,
Peter

Comment 4 Pei Zhang 2018-03-19 12:47:58 UTC
(In reply to Peter Xu from comment #3)
> (In reply to Pei Zhang from comment #2)
> > Created attachment 1401638 [details]
> > ping loss log with vIOMMU
> 
> Hello, Pei,
> 
> I see that in the log the first ping loss happened at 01:23:14.211097, while
> at 01:23:17.334889 the IO recovered.  Does that mean the network is only
> down for 3 seconds even with vIOMMU?  Isn't that good enough?

Hello, Peter,

Yes, the network is only down for 3 seconds. However, the expected value should be around 200 milliseconds, i.e. very close to the live migration downtime.

> Btw, when I tried to use your command, I got this:
> 
> $ ping -i 0.001 localhost                                                   
> 
> PING localhost(localhost (::1)) 56 data bytes                               
> 
> ping: cannot flood; minimal interval allowed for user is 200ms              
> 
> 
> So it seems that ping does not welcome such a short interval, so how did you
> do that?

A little strange; this command works on my test machine. If needed, please ping me on IRC and I can lend you my test machine.


Thanks,
Pei

> 
> Thanks,
> Peter

Comment 5 Pei Zhang 2018-03-20 05:48:49 UTC
Hi Peter, 

Here is the full migration testing results:

Units:
Downtime: milliseconds
Totaltime: milliseconds
Ping_Loss: ping requests - ping replies
moongen_Loss: number of DPDK packets


With vIOMMU: 
===========Stream Rate: 1Mpps===========
No Stream_Rate Downtime Totaltime Ping_Loss moongen_Loss
 0       1Mpps      247     18432       548     10772246
 1       1Mpps      257     19228       282     11336849
 2       1Mpps      253     19978       282     12823471
 3       1Mpps      252     18433       282      6416794
 4       1Mpps      252     20198       283     12504082
 5       1Mpps      247     19743       281     13567169
 6       1Mpps      242     20023       283     12156311
 7       1Mpps      245     18983       281      9976994
 8       1Mpps      248     19060       282     10951112
 9       1Mpps      245     19717       281     15208585


Without vIOMMU:
===========Stream Rate: 1Mpps===========
No Stream_Rate Downtime Totaltime Ping_Loss moongen_Loss
 0       1Mpps      120     19104        15     12332441
 1       1Mpps      131     19529        15     13759546
 2       1Mpps      122     19274        15     11398391
 3       1Mpps      116     19586        14     14552837
 4       1Mpps      120     20228        16     14618624
 5       1Mpps      117     19938        14     14760176
 6       1Mpps      124     19936        15     15862539
 7       1Mpps      129     19693        15     15241897
 8       1Mpps      122     20397        13     17752737
 9       1Mpps      127     19537        15      9572162

The high moongen_Loss in both cases is caused by bug [1], which is in the openvswitch component and occurs even without vIOMMU.

[1]Bug 1552465 - High TRex packets loss during live migration over ovs+dpdk+vhost-user

Comment 6 Pei Zhang 2018-03-20 07:35:59 UTC
Update:

Sometimes ping packets are lost even without migration, using commands like the ones below. This loss only happens with vIOMMU.

# ping 192.168.2.4
# tcpdump -i em4 -n broadcast or icmp

Comment 7 Peter Xu 2018-03-21 03:02:39 UTC
Yes, as Pei mentioned, this can be triggered even without migration. However, migration has so far triggered the packet loss 100% of the time.

This is what we observed now (referencing the log uploaded by Pei, named "ping loss log with vIOMMU"):

01:23:14.434003 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24808, length 64
01:23:14.444137 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24809, length 64
01:23:14.454272 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24810, length 64
01:23:14.464406 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24811, length 64
01:23:14.471691 ARP, Request who-has 192.168.2.4 tell 192.168.2.4, length 46
01:23:14.474542 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24812, length 64
01:23:14.484678 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24813, length 64
01:23:14.484819 IP 192.168.2.4 > 192.168.2.3: ICMP echo reply, id 17285, seq 24813, length 64
01:23:14.485676 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24814, length 64
01:23:14.495810 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24815, length 64
01:23:14.495903 IP 192.168.2.4 > 192.168.2.3: ICMP echo reply, id 17285, seq 24815, length 64

Here at 01:23:14.471691 we got the ARP packet, which means the destination VM has started to run. The same ARP will be sent 5 times (this is the first one), and we can observe all of these ARPs in the log, which seems fine.

However, after that we only get one reply for PING seq 24813 and miss the one for PING seq 24812. This pattern (two PINGs, one reply) continues until 01:23:17.334820, after which everything recovers and no further PING loss is detected:

01:23:17.312622 IP 192.168.2.4 > 192.168.2.3: ICMP echo reply, id 17285, seq 25330, length 64
01:23:17.313549 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 25331, length 64
01:23:17.323684 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 25332, length 64
01:23:17.323751 IP 192.168.2.4 > 192.168.2.3: ICMP echo reply, id 17285, seq 25332, length 64
01:23:17.324685 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 25333, length 64
01:23:17.334820 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 25334, length 64
01:23:17.334889 IP 192.168.2.4 > 192.168.2.3: ICMP echo reply, id 17285, seq 25334, length 64
01:23:17.335821 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 25335, length 64
01:23:17.335887 IP 192.168.2.4 > 192.168.2.3: ICMP echo reply, id 17285, seq 25335, length 64
01:23:17.336822 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 25336, length 64
01:23:17.336887 IP 192.168.2.4 > 192.168.2.3: ICMP echo reply, id 17285, seq 25336, length 64

Pei and I tried running tcpdump inside the guest; we cannot see the missing PING there (but we can see the next one, which did get its reply). This means the ping packet is lost somewhere along the IO path (NIC -> testpmd -> vhost-user -> guest).

AFAIK the whole IO path is under the control of testpmd after vhost-user is fully set up, so I suspect the packet loss happens inside testpmd.
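For reference, a small sketch (under the same assumptions about the ping_loss.log format as count.py above; the script name loss_window.py is hypothetical) that measures the length of this degraded window, i.e. from the first unanswered echo request to the first reply seen after the last unanswered one:

# cat loss_window.py
# Sketch: estimate the duration of the degraded window by finding the
# timestamp of the first echo request that never got a reply and the
# timestamp of the first reply after the last such request.
import re
from datetime import datetime

log = "/root/ping_loss.log"
line_re = re.compile(r"^(\S+) IP .*ICMP echo (request|reply), id \d+, seq (\d+),")

def to_seconds(ts):
    # tcpdump prints timestamps as HH:MM:SS.microseconds
    t = datetime.strptime(ts, "%H:%M:%S.%f")
    return t.hour * 3600 + t.minute * 60 + t.second + t.microsecond / 1e6

events = []  # (time in seconds, "request" or "reply", seq)
with open(log) as fd:
    for line in fd:
        m = line_re.match(line)
        if m:
            events.append((to_seconds(m.group(1)), m.group(2), int(m.group(3))))

replied = set(seq for _, kind, seq in events if kind == "reply")
lost_times = [t for t, kind, seq in events if kind == "request" and seq not in replied]
if lost_times:
    first_loss, last_loss = lost_times[0], lost_times[-1]
    later_replies = [t for t, kind, _ in events if kind == "reply" and t > last_loss]
    recovered = later_replies[0] if later_replies else last_loss
    print("degraded window: about %.3f seconds" % (recovered - first_loss))
else:
    print("no lost pings found")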

Maxime, any thoughts?

Thanks,
Peter

Comment 8 Peter Xu 2018-03-21 07:48:41 UTC
One thing to mention: I built a customized QEMU to trace the IOMMU translations inside QEMU. It shows that all translations are successful and finish in about 100 us, so it does not seem to be the translation procedure that blocks testpmd; it could be something else.

Peter

Comment 9 Peter Xu 2018-03-21 09:42:33 UTC
After some offline discussion with Maxime, it seems very likely that the bug is caused by the recent non-blocking implementation of IOTLB miss handling in DPDK.

I am moving the bug to Maxime and changing the component to DPDK.

Thanks,
Peter

Comment 10 Maxime Coquelin 2018-11-27 09:27:36 UTC

*** This bug has been marked as a duplicate of bug 1572879 ***

