Bug 1717793
| Summary: | [OVN]gateway_mtu is not effective after change from a small value to a larger value | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux Fast Datapath | Reporter: | haidong li <haili> |
| Component: | ovn2.11 | Assignee: | Dumitru Ceara <dceara> |
| Status: | CLOSED NOTABUG | QA Contact: | haidong li <haili> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | FDP 19.C | CC: | ctrautma, dceara, qding |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-06-24 19:48:11 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
haidong li
2019-06-06 08:19:40 UTC
Hi,

The default behavior on hv1_vm00 is to learn the interface MTU size from "ICMP unreachable - need to frag" packets sent by the gateway. The expiry timeout is controlled via /proc/sys/net/ipv4/route/mtu_expires. On my system:

# cat /proc/sys/net/ipv4/route/mtu_expires
600

To disable this behavior, in the VM:

# echo 0 > /proc/sys/net/ipv4/route/mtu_expires

After disabling MTU size learning, ping will always try to send packets as big as the locally configured MTU size. In my case, with MTU 1500 in the VM:

On northd:

# ovn-nbctl set logical_router_port rtr-ls1 options:gateway_mtu=1000

In the VM:

# ping 10.0.0.1 -s 9000
PING 10.0.0.1 (10.0.0.1) 9000(9028) bytes of data.
From 20.0.0.254 icmp_seq=1 Frag needed and DF set (mtu = 982)
From 20.0.0.254 icmp_seq=2 Frag needed and DF set (mtu = 982)

On northd, increase gateway_mtu to 1500:

# ovn-nbctl set logical_router_port rtr-ls1 options:gateway_mtu=1500

In the VM:

# ip netns exec vm2 ping 10.0.0.1 -s 9000
PING 10.0.0.1 (10.0.0.1) 9000(9028) bytes of data.
From 20.0.0.254 icmp_seq=1 Frag needed and DF set (mtu = 1482)
From 20.0.0.254 icmp_seq=2 Frag needed and DF set (mtu = 1482)
From 20.0.0.254 icmp_seq=3 Frag needed and DF set (mtu = 1482)

# ip netns exec vm2 ping 10.0.0.1 -s 1454
PING 10.0.0.1 (10.0.0.1) 1454(1482) bytes of data.
1462 bytes from 10.0.0.1: icmp_seq=1 ttl=63 time=0.762 ms
1462 bytes from 10.0.0.1: icmp_seq=2 ttl=63 time=0.639 ms

So it seems that the functionality works as expected.

Hi,

I have another question about fragmentation. If I set gateway_mtu to less than 568, the VM can't ping the remote host successfully; it only sees "Frag needed" replies (mtu = 542) like this:

[root@dell-per730-19 ovn]# ovn-nbctl get logical_router_port r1_s3 options:gateway_mtu
"560"
[root@dell-per730-19 ovn]# virsh console hv1_vm00
Connected to domain hv1_vm00
Escape character is ^]
[root@localhost ~]# ping -s 1000 172.16.103.11
PING 172.16.103.11 (172.16.103.11) 1000(1028) bytes of data.
From 172.16.102.1 icmp_seq=1 Frag needed and DF set (mtu = 542)
From 172.16.102.1 icmp_seq=2 Frag needed and DF set (mtu = 542)
From 172.16.102.1 icmp_seq=3 Frag needed and DF set (mtu = 542)
From 172.16.102.1 icmp_seq=4 Frag needed and DF set (mtu = 542)
From 172.16.102.1 icmp_seq=5 Frag needed and DF set (mtu = 542)
From 172.16.102.1 icmp_seq=6 Frag needed and DF set (mtu = 542)
From 172.16.102.1 icmp_seq=7 Frag needed and DF set (mtu = 542)
From 172.16.102.1 icmp_seq=8 Frag needed and DF set (mtu = 542)
From 172.16.102.1 icmp_seq=9 Frag needed and DF set (mtu = 542)
From 172.16.102.1 icmp_seq=10 Frag needed and DF set (mtu = 542)
From 172.16.102.1 icmp_seq=11 Frag needed and DF set (mtu = 542)
From 172.16.102.1 icmp_seq=12 Frag needed and DF set (mtu = 542)
From 172.16.102.1 icmp_seq=13 Frag needed and DF set (mtu = 542)
From 172.16.102.1 icmp_seq=14 Frag needed and DF set (mtu = 542)

--- 172.16.103.11 ping statistics ---
14 packets transmitted, 0 received, +14 errors, 100% packet loss, time 13017ms

But the ping succeeds if I change gateway_mtu to 562:

[root@dell-per730-19 ovn]# ovn-nbctl set logical_router_port r1_s3 options:gateway_mtu=562
[root@dell-per730-19 ovn]# ovn-nbctl get logical_router_port r1_s3 options:gateway_mtu
"562"
[root@localhost ~]# ping -s 1000 172.16.103.11
PING 172.16.103.11 (172.16.103.11) 1000(1028) bytes of data.
From 172.16.102.1 icmp_seq=1 Frag needed and DF set (mtu = 544)
1008 bytes from 172.16.103.11: icmp_seq=2 ttl=63 time=1.01 ms
1008 bytes from 172.16.103.11: icmp_seq=3 ttl=63 time=0.785 ms
1008 bytes from 172.16.103.11: icmp_seq=4 ttl=63 time=0.632 ms
1008 bytes from 172.16.103.11: icmp_seq=5 ttl=63 time=0.590 ms

--- 172.16.103.11 ping statistics ---
5 packets transmitted, 4 received, +1 errors, 20% packet loss, time 4003ms
rtt min/avg/max/mdev = 0.590/0.755/1.013/0.165 ms
[root@localhost ~]#

Is this expected? Thanks.

The VM can ping successfully with gateway_mtu larger than 562; correcting the value in comment 2.
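One pattern visible in the outputs above: the MTU value advertised in every "Frag needed" ICMP error is the configured gateway_mtu minus 18 bytes (1000 → 982, 1500 → 1482, 560 → 542, 562 → 544). A minimal sketch of that arithmetic follows; the guess that the 18-byte offset corresponds to an Ethernet header plus a VLAN tag (14 + 4) is an assumption, not something stated in this report.

```shell
# Observation from the ping outputs above: advertised mtu = gateway_mtu - 18.
# (The 18 bytes plausibly being Ethernet header 14 + VLAN tag 4 is an assumption.)
for gw_mtu in 1000 1500 560 562; do
    echo "gateway_mtu=$gw_mtu -> advertised mtu=$((gw_mtu - 18))"
done

# ping's size display is ICMP payload + 8 (ICMP header) + 20 (IPv4 header):
payload=1000
echo "ping -s $payload sends $payload($((payload + 8 + 20))) bytes"
```

Running this reproduces exactly the mtu values seen in the ICMP errors, which suggests the behavior is consistent rather than a miscalculation on one side.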
Hi,

Did hv1_vm00 get rebooted in between the gateway_mtu changes on r1_s3? Or did the configuration of /proc/sys/net/ipv4/route/mtu_expires on hv1_vm00 change? If not, can you please share a tcpdump from hv1_vm00 taken while gateway_mtu is 562 and the ping is successful?

tcpdump -n -i <interface> -v

Thanks
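The question above hinges on whether the VM still has a previously learned PMTU cached. A diagnostic sketch, assuming a Linux guest with standard sysctl and iproute2 tooling (these are generic commands, not taken from this report):

```shell
# Run inside hv1_vm00 (needs root for the flush).

# Current expiry timeout for learned-PMTU entries (default 600 seconds);
# 0 disables the learning behavior, as described in the comments above:
sysctl net.ipv4.route.mtu_expires

# Inspect the route the kernel would use for the destination; a learned PMTU
# appears as an "mtu ..." attribute on the cached entry:
ip route get 172.16.103.11

# Drop cached route entries (including any learned PMTU) before re-testing,
# instead of waiting for mtu_expires to lapse:
ip route flush cache
```

This makes it possible to distinguish a stale cached PMTU from a genuine gateway_mtu propagation problem without rebooting the VM.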