Bug 1268847
| Summary: | ssh fails to connect to VPN hosts - hangs at "expecting SSH2_MSG_KEX_ECDH_REPLY" | | |
|---|---|---|---|
| Product: | [Fedora] Fedora | Reporter: | udayb <udayreddy> |
| Component: | openconnect | Assignee: | David Woodhouse <dwmw2> |
| Status: | CLOSED ERRATA | QA Contact: | Fedora Extras Quality Assurance <extras-qa> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 24 | CC: | dcbw, dwmw2, marcodriusso, nmavrogi, psimerda, rkhan, thaller, turchi |
| Target Milestone: | --- | Keywords: | Reopened |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | openconnect-7.08-1.fc25 openconnect-7.08-1.fc24 | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2016-12-19 06:02:47 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
udayb
2015-10-05 12:34:12 UTC
Can you try with NetworkManager-1.0.6-6.fc22? It fixed a bug for VPN MTU - bug 1244547.

Works fine with NetworkManager-1.0.6-6.fc22. The MTU is set to 1406 (it was 1500 earlier).

This package has changed ownership in the Fedora Package Database. Reassigning to the new owner of this component.

I'm facing the same bug, even with NetworkManager-1.0.6-7.fc22. Indeed, in my case an MTU of 1406 is not sufficient for a successful connection, and I have to set it to 1200. Ubuntu has a similar bug: https://bugs.launchpad.net/ubuntu/+source/openssh/+bug/1254085. Some additional info:

VPN type: openconnect
NetworkManager-openconnect.x86_64 1.0.2-1.fc22
openconnect.x86_64 7.06-1.fc22

I have the same problem with NetworkManager-1.0.10-2.fc22.x86_64 and vpnc. I need to set the MTU to 1340, with the default being 1412. I suspect it would be better to allow the user to set this value by hand...

Fedora 22 changed to end-of-life (EOL) status on 2016-07-19. Fedora 22 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug. If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen it against that version. If you are unable to reopen this bug, please file a new report against the current release. If you experience problems, please add a comment to this bug. Thank you for reporting this bug; we are sorry it could not be fixed.

I'm still experiencing the same bug in Fedora 24 (and I had it in Fedora 23 as well). It appears to be still unsolved in Ubuntu too (see the link in comment 4, where workarounds are also proposed). Again, the workaround I use is lowering the MTU for the VPN to 1200 with:

# ip li set mtu 1200 dev vpn0

Info:
NetworkManager.x86_64 1:1.2.4-2.fc24
NetworkManager-openconnect.x86_64 1.2.2-1.fc24
openconnect.x86_64 7.07-2.fc24

It would be useful to know precisely what the problem is.
Is OpenConnect not negotiating the correct MTU for its connection to your server... or is it just that your internal network is broken and a lower MTU happens to work around its brokenness? After you SSH into the remote machine, can you put your MTU back *up* again, then attempt 'ping -M do' with different packet sizes in both directions? Does your VPN server correctly send ICMP 'needs fragmentation' responses when you send a packet that's too large? If not, start by finding the idiot sysadmin and nailing it to the wall until it stops blocking ICMP....

OK, some additional info. First, I tried to connect to the VPN using AnyConnect from Windows, and there the MTU is 1406, which is the same value set by openconnect and which triggers the ssh problem. I tried what was asked in comment 8, but unfortunately it seems that ICMP is blocked, since I don't get any answer from the VPN server. Anyway, find the results below:

1) First I access the VPN through openconnect (NetworkManager), having:

[me@local ~]$ ip addr list
...
8: vpn0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1406 qdisc fq_codel state UP group default qlen 500
    link/none
    inet 172.30.224.1/24 brd 172.30.224.255 scope global vpn0
       valid_lft forever preferred_lft forever
    inet6 fe80::bdc:9f50:b231:19bf/64 scope link flags 800
       valid_lft forever preferred_lft forever

2) Then I decrease the MTU in order to be able to access the remote node with ssh. I discovered that the maximum MTU that still lets me ssh into the remote node is 1386 (I don't know where this value comes from). Hence, after:

[me@local ~]$ sudo ip li set mtu 1386 dev vpn0

I have:

[me@local ~]$ ip addr list
...
8: vpn0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1386 qdisc fq_codel state UP group default qlen 500
    link/none
    inet 172.30.224.1/24 brd 172.30.224.255 scope global vpn0
       valid_lft forever preferred_lft forever
    inet6 fe80::bdc:9f50:b231:19bf/64 scope link flags 800
       valid_lft forever preferred_lft forever

3) Now I ssh into the remote node, and increase the MTU again, in order to restore the same situation as in point 1).

4) Then, ping from local (172.30.224.1) to remote (172.30.50.172), obtaining:

[me@local ~]$ ping 172.30.50.172 -c 3 -M do -s 1358
PING 172.30.50.172 (172.30.50.172) 1358(1386) bytes of data.
1366 bytes from 172.30.50.172: icmp_seq=1 ttl=63 time=70.2 ms
1366 bytes from 172.30.50.172: icmp_seq=2 ttl=63 time=70.5 ms
1366 bytes from 172.30.50.172: icmp_seq=3 ttl=63 time=73.4 ms

--- 172.30.50.172 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 70.233/71.422/73.494/1.470 ms

[me@local ~]$ ping 172.30.50.172 -c 3 -M do -s 1359
PING 172.30.50.172 (172.30.50.172) 1359(1387) bytes of data.

--- 172.30.50.172 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms

5) Finally, ping from remote (172.30.50.172) to local (172.30.224.1), obtaining:

[me@remote ~]$ ping 172.30.224.1 -c 3 -M do -s 1358
PING 172.30.224.1 (172.30.224.1) 1358(1386) bytes of data.
1366 bytes from 172.30.224.1: icmp_req=1 ttl=63 time=71.1 ms
1366 bytes from 172.30.224.1: icmp_req=2 ttl=63 time=70.2 ms
1366 bytes from 172.30.224.1: icmp_req=3 ttl=63 time=121 ms

--- 172.30.224.1 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 70.259/87.630/121.472/23.933 ms

[me@remote ~]$ ping 172.30.224.1 -c 3 -M do -s 1359
PING 172.30.224.1 (172.30.224.1) 1359(1387) bytes of data.

--- 172.30.224.1 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2014ms

OpenConnect 7.08 has automatic MTU detection. Does it fix this?
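[Editorial note] The payload sizes in the ping transcripts above line up exactly with standard IPv4 framing: ping's -s value is the ICMP payload, and the on-wire packet adds an 8-byte ICMP header plus a 20-byte IPv4 header, which is why 1358 bytes is the largest payload that fits an MTU of 1386. A minimal sketch of that arithmetic (plain bookkeeping, not tied to any particular tool):

```python
# IPv4 + ICMP framing behind "ping -M do -s <size>": the -s value is the
# ICMP payload; the on-wire packet adds an 8-byte ICMP header and a
# 20-byte IPv4 header (assuming no IP options).
IPV4_HEADER = 20
ICMP_HEADER = 8

def packet_size(payload):
    """Total IPv4 packet size for a given ping -s payload."""
    return payload + ICMP_HEADER + IPV4_HEADER

def fits(payload, mtu):
    """With -M do (DF bit set), the packet passes only if it fits the MTU."""
    return packet_size(payload) <= mtu

# Matches the transcript's "1358(1386) bytes of data" at mtu 1386...
print(packet_size(1358), fits(1358, 1386))
# ...while -s 1359 yields a 1387-byte packet, one byte too large, and it is
# dropped silently because the path also blocks the ICMP error that would
# normally report "fragmentation needed".
print(packet_size(1359), fits(1359, 1386))
```

This also explains why 1386 looked like a magic number in comment 9: it is simply the largest interface MTU not exceeding the real path MTU.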
openconnect-7.08-1.fc25 has been submitted as an update to Fedora 25. https://bodhi.fedoraproject.org/updates/FEDORA-2016-236fdd6917

openconnect-7.08-1.fc24 has been submitted as an update to Fedora 24. https://bodhi.fedoraproject.org/updates/FEDORA-2016-4e680d77fa

openconnect-7.08-1.fc24 has been pushed to the Fedora 24 testing repository. If problems still persist, please make note of it in this bug report. See https://fedoraproject.org/wiki/QA:Updates_Testing for instructions on how to install test updates. You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2016-4e680d77fa

openconnect-7.08-1.fc25 has been pushed to the Fedora 25 testing repository. If problems still persist, please make note of it in this bug report. See https://fedoraproject.org/wiki/QA:Updates_Testing for instructions on how to install test updates. You can provide feedback for this update here: https://bodhi.fedoraproject.org/updates/FEDORA-2016-236fdd6917

(In reply to David Woodhouse from comment #10)
> OpenConnect 7.08 has automatic MTU detection. Does it fix this?

Yes, openconnect-7.08-1.fc24 fixes it, many thanks!

openconnect-7.08-1.fc25 has been pushed to the Fedora 25 stable repository. If problems still persist, please make note of it in this bug report.

openconnect-7.08-1.fc24 has been pushed to the Fedora 24 stable repository. If problems still persist, please make note of it in this bug report.
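[Editorial note] For readers hitting the same symptom: the behaviour in this report is a classic path-MTU black hole. The server's SSH2_MSG_KEX_ECDH_REPLY is the first packet in the handshake large enough to exceed the real path MTU; the TCP connection opens fine with small packets, but the oversized reply is dropped, and because the network also blocks the ICMP "fragmentation needed" error, the sender is never told to shrink its packets and the client waits forever. A toy model of that decision (illustrative only; the function and its names are made up for this sketch):

```python
def deliver(segment_size, path_mtu, icmp_blocked):
    """Toy model of a DF-marked segment crossing a hop narrower than its size.

    Returns "delivered" if the segment fits the path MTU,
    "icmp-frag-needed" if it does not fit but the sender is notified
    (so PMTU discovery can recover), and "black-hole" if the oversized
    segment is dropped silently -- the case that makes ssh hang at
    "expecting SSH2_MSG_KEX_ECDH_REPLY".
    """
    if segment_size <= path_mtu:
        return "delivered"
    return "black-hole" if icmp_blocked else "icmp-frag-needed"

# Small handshake packets get through, so the TCP session opens...
print(deliver(512, 1386, icmp_blocked=True))
# ...but the large KEX reply vanishes with no error back to the sender.
print(deliver(1500, 1386, icmp_blocked=True))
# With ICMP allowed, PMTU discovery would have shrunk the packets and
# recovered automatically -- which is why comment 8 asks about ICMP.
print(deliver(1500, 1386, icmp_blocked=False))
```

This is also why both fixes in the thread work: lowering the interface MTU by hand keeps every segment under the real path MTU, and openconnect 7.08's automatic MTU detection negotiates the correct value in the first place.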