Created attachment 1892154 [details]
journal

On a dual-stack environment, we see some devices/connections stuck in the ip-check state with a default dhcp6 profile generated by nm-initrd-generator:

Jun 17 10:48:18.379767 worker-0.ostest.test.metalkube.org NetworkManager[1270]: <info> [1655462898.3797] device (enp2s0): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
...
Jun 23 09:17:17.611657 worker-0.ostest.test.metalkube.org systemd[1]: NetworkManager-wait-online.service: Main process exited, code=exited, status=1/FAILURE
...
Jun 23 09:17:17.682700 worker-0.ostest.test.metalkube.org configure-ovs.sh[1613]: + nmcli -g all device
Jun 23 09:17:17.683647 worker-0.ostest.test.metalkube.org configure-ovs.sh[1614]: + grep -v unmanaged
Jun 23 09:17:17.695764 worker-0.ostest.test.metalkube.org configure-ovs.sh[1614]: enp1s0:ethernet:connecting (checking IP connectivity):none:none:/org/freedesktop/NetworkManager/Devices/2:Wired Connection:dab82ed4-22b9-4ba8-bdb1-4c5c5a29e928:/org/freedesktop/NetworkManager/ActiveConnection/1
Jun 23 09:17:17.695764 worker-0.ostest.test.metalkube.org configure-ovs.sh[1614]: enp2s0:ethernet:connecting (checking IP connectivity):none:none:/org/freedesktop/NetworkManager/Devices/3:Wired Connection:dab82ed4-22b9-4ba8-bdb1-4c5c5a29e928:/org/freedesktop/NetworkManager/ActiveConnection/2

configure-ovs then enslaves the device to an OVS bridge, whose bridge interface gets that same DHCP IP configuration, and tries to activate the bridge interface profile. The activation fails after a 90 s NM timeout, with the connection again stuck in the ip-check state:

Jun 23 09:18:33.475122 worker-0.ostest.test.metalkube.org configure-ovs.sh[1608]: + nmcli conn up ovs-if-br-ex
...
Jun 23 09:18:33.520880 worker-0.ostest.test.metalkube.org NetworkManager[1271]: <info> [1655975913.5208] dhcp4 (br-ex): state changed new lease, address=192.168.111.23
...
Jun 23 09:18:35.674309 worker-0.ostest.test.metalkube.org NetworkManager[1271]: <info> [1655975915.6742] dhcp6 (br-ex): state changed new lease, address=fd2e:6f44:5dd8:c956::17
Jun 23 09:18:35.675027 worker-0.ostest.test.metalkube.org NetworkManager[1271]: <info> [1655975915.6750] device (br-ex): state change: ip-config -> ip-check (reason 'none', sys-iface-state: 'managed')
...
Jun 23 09:20:03.580649 worker-0.ostest.test.metalkube.org configure-ovs.sh[2389]: Error: Timeout expired (90 seconds)

This repeats over the 10 attempts that configure-ovs makes to activate the profile. So far we have only seen it once: most of the time, under otherwise identical circumstances (even with interfaces stuck in the ip-check state before configure-ovs does anything), configure-ovs manages to activate the profile.

What are the reasons why activation can get stuck in the ip-check phase?

Attaching a journal with normal NM logs; I will provide trace logs once I am able to reproduce, if needed.
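For context, the activation step that fails is essentially a retry loop around `nmcli conn up`. A minimal sketch of that loop (not the actual configure-ovs script; the 90 s wait and 10 attempts match the log above):

  #!/bin/bash
  # Sketch of a configure-ovs-style activation loop; not the real script.
  # "ovs-if-br-ex" is the profile name seen in the log above.
  profile=ovs-if-br-ex
  for attempt in $(seq 1 10); do
      echo "Attempt ${attempt}: activating ${profile}"
      # nmcli waits up to --wait seconds for the device to reach 'activated';
      # a device stuck in ip-check never gets there, so each attempt times out.
      if nmcli --wait 90 connection up "${profile}"; then
          echo "${profile} activated"
          exit 0
      fi
  done
  echo "failed to activate ${profile} after 10 attempts" >&2
  exit 1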
This seems to be because OpenShift's resolv-prepender dispatcher script is timing out twice, once for dhcp4 and once for dhcp6, each for 45 s, which adds up to the same duration as the DHCP timeout (90 s) that we therefore never get to see. It times out because it can't find a nameserver in /var/run/NetworkManager/resolv.conf:

Jun 23 09:16:16.624470 worker-0.ostest.test.metalkube.org nm-dispatcher[1332]: grep: /var/run/NetworkManager/resolv.conf: No such file or directory
Jun 23 09:16:16.624797 worker-0.ostest.test.metalkube.org nm-dispatcher[1331]: NM resolv-prepender: NM resolv.conf still empty of nameserver

from https://github.com/openshift/machine-config-operator/blob/970dffe12d0623faec65bde714889af0d977275d/templates/common/on-prem/files/NetworkManager-resolv-prepender.yaml#L32

I think we are expecting /var/run/NetworkManager/resolv.conf to be updated with a nameserver pointing to the node's IP address. In other instances in that same log file, this happens as expected:

Jun 23 09:34:48.853541 worker-0.ostest.test.metalkube.org nm-dispatcher[8070]: NM resolv-prepender: Prepending 'nameserver 192.168.111.23' to /etc/resolv.conf (other nameservers from /var/run/NetworkManager/resolv.conf)

@nm-team, before taking this to OpenShift, can you detail when nameservers are updated in /var/run/NetworkManager/resolv.conf, and why in the log above this might not be working as we expect? Thank you!

cc @bnemec
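For reference, the relevant part of the dispatcher script boils down to a poll-until-nameserver loop. A simplified sketch, assuming a 1 s poll interval so that 45 iterations give the 45 s timeout seen above (the real script is in the MCO template linked above; the message string matches the log lines quoted here):

  #!/bin/bash
  # Simplified sketch of the resolv-prepender wait loop; see the MCO
  # template linked above for the real script. While this loop runs, the
  # dispatcher event it handles is blocked, which stalls NM's activation.
  NM_RESOLV=/var/run/NetworkManager/resolv.conf
  for _ in $(seq 1 45); do
      if grep -q '^nameserver' "${NM_RESOLV}" 2>/dev/null; then
          exit 0
      fi
      echo "NM resolv-prepender: NM resolv.conf still empty of nameserver"
      sleep 1
  done
  exit 1  # gives up after ~45 s, matching the two 45 s timeouts above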
(In reply to Jaime Caamaño Ruiz from comment #1)
> I think we are expecting /var/run/NetworkManager/resolv.conf to be updated
> with a nameserver pointing to the node's ip address. In other instances on
> that same log file, this happens as expected:

Not exactly. We look at /var/run/NetworkManager/resolv.conf to get the upstream nameservers, then prepend the local IP to that list and populate /etc/resolv.conf. If there are no servers in the /var location, then we have no upstream DNS, which isn't going to work.

It sounds like either no DNS servers were provided by DHCP, or DNS wasn't configured in the static IP configuration. If that's not the case, then there must be some sort of timing issue between when the dispatcher script is run and when the /var resolv.conf is populated.
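To illustrate the prepend step described above, a rough sketch (not the actual MCO script; NODE_IP is a placeholder for however the script determines the node's address, and 192.168.111.23 is just the example value from the log):

  #!/bin/bash
  # Sketch of the prepend step: put the node's local IP first, followed by
  # the upstream nameservers NM learned from DHCP / static configuration.
  NODE_IP=192.168.111.23   # placeholder; example value from the log above
  NM_RESOLV=/var/run/NetworkManager/resolv.conf
  {
      echo "nameserver ${NODE_IP}"
      # If this grep finds nothing, there is no upstream DNS and name
      # resolution cannot work, as noted above.
      grep '^nameserver' "${NM_RESOLV}"
  } > /etc/resolv.conf.tmp && mv /etc/resolv.conf.tmp /etc/resolv.conf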
I think the problem is that NM is calling dispatcher scripts for `dhcp4-change` when resolv.conf is not yet updated with the new nameservers. This is a regression in NM 1.36 compared to previous versions; see the linked gitlab MR.
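For reference, NM dispatcher scripts receive the interface and the event as their first two arguments, so the affected code path looks roughly like this (a sketch; up, dhcp4-change and dhcp6-change are standard dispatcher actions):

  #!/bin/bash
  # Scripts under /etc/NetworkManager/dispatcher.d/ receive:
  #   $1 = interface name, $2 = event (up, dhcp4-change, dhcp6-change, ...).
  # The regression: for dhcp4-change/dhcp6-change, NM 1.36 runs the script
  # before /var/run/NetworkManager/resolv.conf reflects the new lease.
  IFACE=$1
  EVENT=$2
  case "${EVENT}" in
      up|dhcp4-change|dhcp6-change)
          # Reading resolv.conf here may therefore see stale or missing
          # nameservers on affected versions.
          grep '^nameserver' /var/run/NetworkManager/resolv.conf || true
          ;;
  esac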
(In reply to Beniamino Galvani from comment #3)
> I think the problem is that NM is calling dispatcher scripts for
> `dhcp4-change` when resolv.conf is not yet updated with the new nameservers.
> This is a regression in NM 1.36 compared to previous versions; see the
> linked gitlab MR.

I see. I guess the nameserver update is synchronous with respect to the dispatcher script and happens after it. Perhaps what we could do from the dispatcher script is trigger a systemd service, which would make the whole thing asynchronous with respect to the nameserver update. That could also help us with ordering against the kubelet service.
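A sketch of that idea (the unit name resolv-prepender.service is hypothetical): the dispatcher hook only kicks off a oneshot service without blocking, and the waiting/prepending work happens in the service:

  #!/bin/bash
  # Hypothetical dispatcher hook: instead of doing the work inline (which
  # blocks NM's state machine), start an assumed oneshot unit and return.
  IFACE=$1
  EVENT=$2
  case "${EVENT}" in
      up|dhcp4-change|dhcp6-change)
          # --no-block returns immediately; ordering against kubelet.service
          # can then be expressed with Before=/After= in the unit file.
          systemctl start --no-block resolv-prepender.service
          ;;
  esac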
*** Bug 2103744 has been marked as a duplicate of this bug. ***
*** Bug 2102777 has been marked as a duplicate of this bug. ***
Created attachment 1909754 [details] Reproducer script
Hello Jaime, could you please try NetworkManager-1.40.2-1.el8 in your environment? I tried my best to trigger the issue synthetically with new tests in the NM CI, but I'd like confirmation from a real environment that the issue is indeed fixed. Thank you.
I have no idea how to do that, so it would not be trivial or straightforward for me.
Setting to VERIFIED in NetworkManager-1.40.2-1.el8 as SanityOnly: the fix is present and we have a best-effort synthetic automated test, but we can't verify the fix in the original or an analogous environment.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (NetworkManager bug fix and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2023:2968