Bug 1549955
| Summary: | During PVP live migration, ping packet loss becomes higher with vIOMMU | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Pei Zhang <pezhang> |
| Component: | openvswitch | Assignee: | Open vSwitch development team <ovs-team> |
| Status: | CLOSED DUPLICATE | QA Contact: | ovs-qe |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 7.5 | CC: | atragler, chayang, hhuang, juzhang, maxime.coquelin, michen, ovs-qe, peterx, siliu, virt-maint |
| Target Milestone: | rc | Keywords: | Extras |
| Target Release: | --- | | |
| Hardware: | Unspecified | OS: | Unspecified |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2018-11-27 09:27:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | XML of VM (1401637); ping loss log with vIOMMU (1401638) | | |
Created attachment 1401638 [details]
ping loss log with vIOMMU
(In reply to Pei Zhang from comment #2)
> Created attachment 1401638 [details]
> ping loss log with vIOMMU

Hello Pei,

I see that in the log the first ping loss happened at 01:23:14.211097, while at 01:23:17.334889 the IO recovered. Does that mean the network is only down for 3 seconds even with vIOMMU? Isn't that good enough?

Btw, when I tried to use your command, I got this:

$ ping -i 0.001 localhost
PING localhost(localhost (::1)) 56 data bytes
ping: cannot flood; minimal interval allowed for user is 200ms

So it seems that ping does not accept such a short interval. How did you do that?

Thanks,
Peter

(In reply to Peter Xu from comment #3)
> I see that in the log the first ping loss happened at 01:23:14.211097, while
> at 01:23:17.334889 the IO recovered. Does that mean the network is only
> down for 3 seconds even with vIOMMU? Isn't that good enough?

Hello Peter,

Yes, the network is only down for 3 seconds. However, the expected value should be around 200 milliseconds; this downtime should be very close to the live migration downtime.

> So it seems that ping does not accept such a short interval. How did you
> do that?

A little strange, this command works on my testing machine. If needed, please ping me on IRC and I can lend you my testing machine.

Thanks,
Pei

Hi Peter,

Here are the full migration testing results.

Item units:
Downtime: milliseconds
Totaltime: milliseconds
Ping_Loss: ping requests - ping replies
moongen_Loss: number of DPDK packets

With vIOMMU:
===========Stream Rate: 1Mpps===========
No  Stream_Rate  Downtime  Totaltime  Ping_Loss  moongen_Loss
 0  1Mpps        247       18432      548        10772246
 1  1Mpps        257       19228      282        11336849
 2  1Mpps        253       19978      282        12823471
 3  1Mpps        252       18433      282        6416794
 4  1Mpps        252       20198      283        12504082
 5  1Mpps        247       19743      281        13567169
 6  1Mpps        242       20023      283        12156311
 7  1Mpps        245       18983      281        9976994
 8  1Mpps        248       19060      282        10951112
 9  1Mpps        245       19717      281        15208585

Without vIOMMU:
===========Stream Rate: 1Mpps===========
No  Stream_Rate  Downtime  Totaltime  Ping_Loss  moongen_Loss
 0  1Mpps        120       19104      15         12332441
 1  1Mpps        131       19529      15         13759546
 2  1Mpps        122       19274      15         11398391
 3  1Mpps        116       19586      14         14552837
 4  1Mpps        120       20228      16         14618624
 5  1Mpps        117       19938      14         14760176
 6  1Mpps        124       19936      15         15862539
 7  1Mpps        129       19693      15         15241897
 8  1Mpps        122       20397      13         17752737
 9  1Mpps        127       19537      15         9572162

Regarding the high moongen_Loss in both cases: it is caused by bug [1], which is filed against the openvswitch component and was reproduced without vIOMMU.

[1] Bug 1552465 - High TRex packets loss during live migration over ovs+dpdk+vhost-user

Update: Sometimes ping packets are lost even without migration, using commands like the ones below, and this loss only happens with vIOMMU.

# ping 192.168.2.4
# tcpdump -i em4 -n broadcast or icmp

Yes, as Pei mentioned, this can be triggered even without migration. But it seems that migration triggers the packet loss 100% of the time (so far).
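To make the comparison above easier to read, here is a small summary sketch (an editorial addition, not part of the original report); the values are copied verbatim from the two tables in the previous comment and simply averaged:

# summarize_runs.py -- hypothetical helper, values copied from the tables above
downtime_viommu   = [247, 257, 253, 252, 252, 247, 242, 245, 248, 245]
ping_loss_viommu  = [548, 282, 282, 282, 283, 281, 283, 281, 282, 281]
downtime_noiommu  = [120, 131, 122, 116, 120, 117, 124, 129, 122, 127]
ping_loss_noiommu = [15, 15, 15, 14, 16, 14, 15, 15, 13, 15]

def mean(values):
    # Plain arithmetic mean; enough for a rough comparison.
    return float(sum(values)) / len(values)

print("with vIOMMU:    downtime %.0f ms, ping loss %.0f" % (mean(downtime_viommu), mean(ping_loss_viommu)))
print("without vIOMMU: downtime %.0f ms, ping loss %.0f" % (mean(downtime_noiommu), mean(ping_loss_noiommu)))

With these numbers the average downtime roughly doubles (about 249 ms vs 123 ms) and the average ping loss grows roughly 20x (about 309 vs 15) when vIOMMU is enabled.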
This is what we observe now (referencing the log uploaded by Pei, named "ping loss log with vIOMMU"):

01:23:14.434003 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24808, length 64
01:23:14.444137 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24809, length 64
01:23:14.454272 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24810, length 64
01:23:14.464406 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24811, length 64
01:23:14.471691 ARP, Request who-has 192.168.2.4 tell 192.168.2.4, length 46
01:23:14.474542 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24812, length 64
01:23:14.484678 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24813, length 64
01:23:14.484819 IP 192.168.2.4 > 192.168.2.3: ICMP echo reply, id 17285, seq 24813, length 64
01:23:14.485676 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24814, length 64
01:23:14.495810 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 24815, length 64
01:23:14.495903 IP 192.168.2.4 > 192.168.2.3: ICMP echo reply, id 17285, seq 24815, length 64

Here at 01:23:14.471691 we get the ARP packet, which means the destination VM has started to run; this same ARP is sent 5 times (this is the first one, and all of these ARPs appear in the log, which looks fine). However, after that we only get one echo reply for ping seq 24813 and miss the one for ping seq 24812. This pattern (two requests, one reply) continues until 01:23:17.334820, after which everything recovers and no further ping loss is detected:

01:23:17.312622 IP 192.168.2.4 > 192.168.2.3: ICMP echo reply, id 17285, seq 25330, length 64
01:23:17.313549 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 25331, length 64
01:23:17.323684 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 25332, length 64
01:23:17.323751 IP 192.168.2.4 > 192.168.2.3: ICMP echo reply, id 17285, seq 25332, length 64
01:23:17.324685 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 25333, length 64
01:23:17.334820 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 25334, length 64
01:23:17.334889 IP 192.168.2.4 > 192.168.2.3: ICMP echo reply, id 17285, seq 25334, length 64
01:23:17.335821 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 25335, length 64
01:23:17.335887 IP 192.168.2.4 > 192.168.2.3: ICMP echo reply, id 17285, seq 25335, length 64
01:23:17.336822 IP 192.168.2.3 > 192.168.2.4: ICMP echo request, id 17285, seq 25336, length 64
01:23:17.336887 IP 192.168.2.4 > 192.168.2.3: ICMP echo reply, id 17285, seq 25336, length 64

Pei and I tried running tcpdump in the guest; we cannot see the missing ping request there (but we can see the next one, which did get its reply). This means the ping packet is lost somewhere along the IO path (NIC -> testpmd -> vhost-user -> guest). AFAIK the whole IO path is under the control of testpmd after vhost-user is fully set up, so I suspect the packet loss happens inside testpmd.

Maxime, any thoughts?

Thanks,
Peter

One thing to mention: I built a customized QEMU to trace IOMMU translations inside QEMU. It shows that all translations succeed and finish in about 100 us, so it does not seem to be the translation procedure that blocked testpmd; it could be something else.

Peter

After some offline discussion with Maxime, it seems very possible that the bug is caused by the recent non-blocking implementation of IOTLB miss handling in DPDK. I am moving the bug to Maxime and the component to DPDK.

Thanks,
Peter
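As an editorial aside: the pattern described above (echo requests without replies for about three seconds) can also be extracted automatically from the same tcpdump log. The sketch below is hypothetical, not part of the original analysis; it assumes the log lines look exactly like the excerpts above and that the capture is saved as /root/ping_loss.log:

# find_lost_seqs.py -- hypothetical helper, not used in the original test
import re

LOG = "/root/ping_loss.log"

# Remember the timestamp of each echo request by its icmp seq;
# forget the entry once the matching echo reply shows up.
pending = {}
line_re = re.compile(r"^(\S+) IP \S+ > \S+: ICMP echo (request|reply), id \d+, seq (\d+),")

with open(LOG) as fd:
    for line in fd:
        m = line_re.match(line)
        if not m:
            continue  # skip ARP and any other non-ICMP lines
        ts, kind, seq = m.group(1), m.group(2), int(m.group(3))
        if kind == "request":
            pending[seq] = ts
        else:
            pending.pop(seq, None)

# Whatever is still pending never got a reply.
for seq in sorted(pending):
    print("lost seq %d, request sent at %s" % (seq, pending[seq]))

Run against the attached log, this should list the sequence numbers that fall into the 01:23:14 - 01:23:17 window discussed above (e.g. seq 24812 and 24814 from the first excerpt), matching the "two requests, one reply" pattern.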
*** This bug has been marked as a duplicate of bug 1572879 ***
Created attachment 1401637 [details]
XML of VM

Description of problem:
During PVP live migration with vIOMMU, ping packet loss becomes much higher.

Version-Release number of selected component (if applicable):
3.10.0-855.el7.x86_64
qemu-kvm-rhev-2.10.0-21.el7.x86_64
libvirt-3.9.0-13.el7.x86_64
dpdk-17.11-7.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. On the source and destination hosts, boot testpmd with iommu-support=1, see [1].
2. On the source host, boot the VM with vIOMMU. For the full XML, refer to the attachment.
3. Start the guest and set the IP address of eth1:
# ifconfig eth1 192.168.2.4/24
4. On a third (generator) host, ping the guest with a 1 ms interval and monitor the pings with tcpdump:
# ping 192.168.2.4 -i 0.001
# tcpdump -i em2 -n broadcast or icmp > ping_loss.log
5. Check the ping loss as "requests - replies"; for the script, refer to [2].

With vIOMMU, the loss is as high as 281. With no-iommu, the loss is only 15, and that loss occurs during the downtime phase, so it is expected. After an initial check of ping_loss.log, most of the extra losses happen in the 3 seconds after the downtime phase. This log will be attached in the next comment.

Actual results:
The ping loss becomes higher with vIOMMU in PVP live migration testing.

Expected results:
The ping loss should stay the same between testing with vIOMMU and with no-iommu.

Additional info:
1. With no-iommu, the ping loss is as expected, about 15.

Reference:
[1]
/usr/bin/testpmd \
-l 2,4,6 \
--socket-mem 1024,1024 \
-n 4 \
--vdev net_vhost3,iface=/tmp/vhost-user3,client=0,iommu-support=1 \
-- \
--portmask=3 \
--disable-hw-vlan \
-i \
--rxq=1 --txq=1 \
--nb-cores=2 \
--forward-mode=io

testpmd> set portlist 0,1
testpmd> start

[2]
# cat count.py
# Count ICMP echo requests and replies in the tcpdump log; the difference is the ping loss.
log = "/root/ping_loss.log"

fd = open(log)
content = fd.read()
fd.close()

request_num = content.count("request")
reply_num = content.count("reply")
loss_num = request_num - reply_num

print("request: %d\n" % request_num)
print("reply: %d\n" % reply_num)
print("loss: %d" % loss_num)
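For reference, a hypothetical invocation of count.py from [2] (the log path is hard-coded in the script, so it takes no arguments); the actual numbers depend on the captured log:

# python count.py
request: <number of ICMP echo requests found in ping_loss.log>
reply: <number of ICMP echo replies>
loss: <requests minus replies, i.e. the Ping_Loss value reported above>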