Bug 1713478 - NM can reach `network-online.target` before `/proc/net/route` and `/proc/net/ipv6_route` are populated
Summary: NM can reach `network-online.target` before `/proc/net/route` and `/proc/net/ipv6_route` are populated
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: NetworkManager
Version: 8.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 8.0
Assignee: sushil kulkarni
QA Contact: Desktop QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-05-23 19:54 UTC by Micah Abbott
Modified: 2019-08-26 18:31 UTC
CC List: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-08-26 18:31:18 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments
journal with NM trace enabled (126.81 KB, application/gzip), 2019-05-24 13:29 UTC, Micah Abbott
journal with NM trace enabled; IPV4_FAILURE_FATAL="no" (145.09 KB, application/gzip), 2019-05-24 15:11 UTC, Micah Abbott
journal with NM trace enabled; crio debug build used (208.88 KB, text/plain), 2019-05-30 20:42 UTC, Micah Abbott
journal with NM trace enabled; crio debug build used; IPV6_FAILURE_FATAL=yes (196.66 KB, text/plain), 2019-06-10 19:57 UTC, Micah Abbott

Description Micah Abbott 2019-05-23 19:54:33 UTC
For the Red Hat CoreOS (RHCOS) project, we want to run tests against newly generated RHCOS images using the `kola` (https://github.com/coreos/mantle/kola) test framework.  This framework allows us to boot and test new RHCOS images with qemu as an unprivileged user.

In this scenario, we isolate the guest from the host using qemu networking options, which also determines how DHCP addresses are handed out.  Notably, the DHCP offer does not provide a default route.

When the host comes up, we can see it reach `network-online.target`; however, the `crio.service` fails immediately afterwards because it detects that `/proc/net/route` and `/proc/net/ipv6_route` are not populated.  (crio.service has `After=network-online.target` in its systemd unit file.)
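
For reference, that ordering is the standard systemd construct; a sketch of the relevant unit lines, assuming the conventional Wants=/After= pairing rather than quoting the actual crio.service:

    [Unit]
    Wants=network-online.target
    After=network-online.target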

If we restart the `crio.service` after login, it starts successfully because the routes have been established.

This leads us to believe that NetworkManager is indicating `network-online` earlier than it should, since routing hasn't been established.

A similar-looking problem fixed in `systemd-networkd` clued us in to this potential problem in NetworkManager:

https://github.com/systemd/systemd/pull/8728




To replicate this problem:

1.  Checkout the `mantle` repo and build `kola`

$ git clone https://github.com/coreos/mantle
$ cd mantle/kola
$ ./build kola

2.  Grab a recent RHCOS build

$ curl --compressed -J -L -O https://releases-art-jenkins.cloud.paas.upshift.redhat.com/storage/releases/ootpa/410.8.20190520.0/rhcos-410.8.20190520.0-qemu.qcow2

3. Spawn a `kola` session using `qemu-unpriv`

$ kola spawn --qemu-image rhcos-410.8.20190520.0-qemu.qcow2 -p qemu-unpriv --ignition-version v2

4.  Observe the failed `crio.service` upon login

Comment 1 Micah Abbott 2019-05-23 19:59:39 UTC
Clarification on first comment:

Notably, that `systemd-networkd` bug meant that `wait-online` was never reached, because the route was never considered ready.

Comment 2 Thomas Haller 2019-05-24 06:46:02 UTC
Could you attach a level=TRACE logfile for NetworkManager? See https://cgit.freedesktop.org/NetworkManager/NetworkManager/tree/contrib/fedora/rpm/NetworkManager.conf#n28
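
For reference, trace logging can typically be enabled with a drop-in along these lines (a sketch; the file name is arbitrary, and the linked NetworkManager.conf example documents the options), followed by a restart of NetworkManager:

    $ cat /etc/NetworkManager/conf.d/95-trace.conf
    [logging]
    level=TRACE
    domains=ALL

At runtime, `nmcli general logging level TRACE domains ALL` should achieve the same without a restart.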


Note that if you set ipv4.may-fail=yes (the default) and IPv6 is enabled and completes first, then the connection is considered fully activated (NM doesn't know that it should wait for DHCPv4 as well). Did you try setting ipv4.may-fail=no?

Comment 3 Micah Abbott 2019-05-24 13:29:33 UTC
Created attachment 1572902 [details]
journal with NM trace enabled

Comment 4 Micah Abbott 2019-05-24 13:30:23 UTC
I've uploaded a journal with TRACE enabled in the failure case.

I'll try with `ipv4.may-fail=no` next.

Comment 5 Micah Abbott 2019-05-24 15:10:30 UTC
I tried to configure the connection to use `ipv4.may-fail=no` in an ifcfg network-script, but it doesn't seem to have taken effect:

$ cat /etc/sysconfig/network-scripts/ifcfg-ens4 
DEVICE="ens4"
BOOTPROTO="dhcp"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
IPV4_FAILURE_FATAL="no"

$ nmcli con show
NAME         UUID                                  TYPE      DEVICE 
System ens4  e27f182b-d125-2c43-5a30-43524d0229ac  ethernet  ens4   

$ nmcli con show "System ens4" | grep fail
ipv4.may-fail:                          yes
ipv6.may-fail:                          yes


I'll attach the journal with level=TRACE enabled for this case too
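
For reference: in ifcfg-rh syntax, IPV4_FAILURE_FATAL maps inversely to ipv4.may-fail, so IPV4_FAILURE_FATAL="no" just restates the default ipv4.may-fail=yes. A sketch of the two equivalent ways to request ipv4.may-fail=no:

    # in the ifcfg file
    IPV4_FAILURE_FATAL="yes"

    # or directly on the profile
    $ nmcli connection modify "System ens4" ipv4.may-fail no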

Comment 6 Micah Abbott 2019-05-24 15:11:11 UTC
Created attachment 1572925 [details]
journal with NM trace enabled; IPV4_FAILURE_FATAL="no"

Comment 7 Thomas Haller 2019-05-24 15:56:46 UTC
The thing that delays "network-online.target" is "NetworkManager-wait-online.service".

That one is basically reached when NetworkManager logs the message

    <info>  [1558709935.7694] manager: startup complete


In the log from comment 6 you can see that this happens shortly after "ens4" reaches the "activated" state.

   <info>  [1558709935.7430] device (ens4): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')

... which happens after NM receives the DHCP configuration and configures IP addresses and routes.
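
As an aside, the check that gates the target can be inspected directly; a sketch (the exact ExecStart line differs between versions):

    $ systemctl cat NetworkManager-wait-online.service    # wraps nm-online -s, i.e. "wait for NM startup"
    $ nm-online -s -q; echo $?                             # exits 0 once "startup complete" has been reached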



  $ zcat journal-raw.txt.gz | sed -n 's/^MESSAGE=//p' | grep 'is starting\|device.*state change:\|startup complete\|platform: signal: route'

shows that at that point there is no default route yet.


Note that the profile has ipv6.method=ignore. In that case, "ipv6.may-fail" has no effect (because the "ignore" method always succeeds right away). Hence, once IPv4 is up, the device is fully activated.

Do you expect an IPv4 default route there?

If you want to wait for the IPv6 configuration, you probably should set "ipv6.method=auto".


What do you think?



PS: is there a convenient tool to read the raw journal output? I appreciate getting the full structured logging, but is there a good way to read it (aside from "grep")? Can `journalctl` consume it as input?
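
One approach that may work for the export-format file (a sketch; paths are illustrative): convert it back into a binary journal with systemd-journal-remote and read it with journalctl --file:

    $ zcat journal-raw.txt.gz > journal-raw.export
    $ /usr/lib/systemd/systemd-journal-remote -o /tmp/journal.journal journal-raw.export
    $ journalctl --file=/tmp/journal.journal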

Comment 8 Micah Abbott 2019-05-24 16:53:55 UTC
(In reply to Thomas Haller from comment #7)

> PS: is there a convenient tool to read the raw journal output? I appreciate
> getting the full structured logging, but is there a good way to read it
> (aside from "grep")? Can `journalctl` consume it as input?

Sorry about the raw journal files.  kola provides `journal.txt` and `journal-raw.txt.gz`, but I mistakenly thought the .gz was just a gzipped version of the text file.  I'll provide normal journal files going forward.



Now that I'm looking at this more carefully with the information you've extracted, this may be an issue with the `crio.service`


It originally fails like so:

$ journalctl -b | grep crio                                                                                                                                                                         
May 24 16:36:10 localhost.localdomain crio[1135]: time="2019-05-24 16:36:10.717209235Z" level=fatal msg="no default routes found in "/proc/net/route" or "/proc/net/ipv6_route""                                     
May 24 16:36:10 localhost.localdomain systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE                                                                                                   
May 24 16:36:10 localhost.localdomain systemd[1]: crio.service: Failed with result 'exit-code'.



When I'm able to get a login prompt on the host and inspect those two files:

$ cat /proc/net/route 
Iface   Destination     Gateway         Flags   RefCnt  Use     Metric  Mask            MTU     Window  IRTT                                                       
ens4    0002000A        00000000        0001    0       0       100     00FFFFFF        0       0       0                                                                             

$ cat /proc/net/ipv6_route 
00000000000000000000000000000001 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000001 00000000 00000001       lo
fe800000000000000000000000000000 40 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000002 00000000 00000001     ens4
fec00000000000000000000000000000 40 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000001 00000000 004c0001     ens4
00000000000000000000000000000000 00 00000000000000000000000000000000 00 fe800000000000000000000000000002 00000400 00000001 00000000 00450003     ens4
00000000000000000000000000000001 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000003 00000000 80200001       lo
fe80000000000000505400fffe123456 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000003 00000000 80200001     ens4
fec0000000000000505400fffe123456 80 00000000000000000000000000000000 00 00000000000000000000000000000000 00000000 00000002 00000000 80200001     ens4
ff000000000000000000000000000000 08 00000000000000000000000000000000 00 00000000000000000000000000000000 00000100 00000003 00000000 00000001     ens4
00000000000000000000000000000000 00 00000000000000000000000000000000 00 00000000000000000000000000000000 ffffffff 00000001 00000000 00200200       lo

$ ip route
10.0.2.0/24 dev ens4 proto kernel scope link src 10.0.2.15 metric 100 

$ ip -6 route
::1 dev lo proto kernel metric 256 pref medium
fe80::/64 dev ens4 proto kernel metric 256 pref medium
fec0::/64 dev ens4 proto kernel metric 256 expires 86154sec pref medium
default via fe80::2 dev ens4 proto ra metric 1024 expires 1554sec hoplimit 64 pref medium


So there are no default routes present, but there are routes defined.

However if I restart the `crio.service`, it starts successfully, but the routing table hasn't changed.

$ sudo systemctl restart crio.service

$ journalctl -b | grep "crio\|Open Container"
May 24 16:36:10 localhost.localdomain systemd[1]: Starting Open Container Initiative Daemon...
May 24 16:36:10 localhost.localdomain crio[1135]: time="2019-05-24 16:36:10.717209235Z" level=fatal msg="no default routes found in "/proc/net/route" or "/proc/net/ipv6_route""
May 24 16:36:10 localhost.localdomain systemd[1]: crio.service: Main process exited, code=exited, status=1/FAILURE
May 24 16:36:10 localhost.localdomain systemd[1]: crio.service: Failed with result 'exit-code'.
May 24 16:36:10 localhost.localdomain systemd[1]: Failed to start Open Container Initiative Daemon.
May 24 16:38:18 localhost.localdomain sudo[1410]: core : TTY=pts/0 ; PWD=/var/home/core ; USER=root ; COMMAND=/bin/systemctl restart crio
May 24 16:38:18 localhost.localdomain systemd[1]: Starting Open Container Initiative Daemon...
May 24 16:38:18 localhost.localdomain systemd[1]: Started Open Container Initiative Daemon.


So why is it starting successfully if the routing table hasn't changed?

It seems unlikely that `/proc/net/route` or `/proc/net/ipv6_route` would still be unpopulated by the time NetworkManager reports it is online with the message

<info>  [1558709935.7694] manager: startup complete



Let me contact the crio folks and see if they can give me some additional info.

Comment 9 Micah Abbott 2019-05-30 20:42:55 UTC
Created attachment 1575393 [details]
journal with NM trace enabled; crio debug build used

Comment 10 Micah Abbott 2019-05-30 20:43:32 UTC
The crio folks were able to provide me with a version of the crio binary that would dump the contents of /proc/net/route and /proc/net/ipv6_route during startup.

I was able to see that the IPv6 routing table changed between reaching `network-online.target` and when the `crio` service successfully started afterwards.

I've attached another journal that shows the `crio` service initially failing, then starting successfully after the IPv6 routing table changed.


My confusion lies in whether the changes to the routing table should be interpreted as having a valid default route installed, as crio expects.

From what I can tell (and I might be spectacularly wrong), the host identifies a default IPv6 route via fe80::2 before `network-online.target` is reached.  Additional routes are added afterwards, but the default route is left unchanged.


My understanding is that `crio` is using a kubernetes library to decide on the default route; I believe the code is here:

https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/apimachinery/pkg/util/net/interface.go#L131
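
As a rough shell-level approximation of what that helper looks for in these files (it also inspects flags and the gateway, so this is only a sketch):

    $ awk '$2 == "00000000"' /proc/net/route                  # IPv4 routes with destination 0.0.0.0/0
    $ awk '$1 ~ /^0+$/ && $2 == "00"' /proc/net/ipv6_route    # IPv6 routes with destination ::/0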


I know my use case is a bit contrived, but it seems like there might be a disconnect between what `crio` expects as a default route and what NetworkManager is reporting.  I've added one of the crio developers (Mrunal Patel) to this BZ in hopes of spurring some cross-team discussion.

Comment 11 Beniamino Galvani 2019-06-05 10:37:32 UTC
Hi,

when DHCPv4 succeeds, the connection on ens4 becomes ACTIVATED and we signal the online status.

 May 30 16:40:55.774207 NetworkManager[1028]: <info>  [1559248855.7742] dhcp4 (ens4): state changed unknown -> bound
 May 30 16:40:55.775515 NetworkManager[1028]: <info>  [1559248855.7755] device (ens4): state change: secondaries -> activated (reason 'none', sys-iface-state: 'managed')
 May 30 16:40:55.792243 systemd[1]: Reached target Network is Online.

But there isn't any default route because DHCPv4 didn't provide a gateway.

 May 30 16:40:56.236668 crio[1134]: time="2019-05-30 20:40:56.236635121Z" level=fatal msg="no default routes found in "/proc/net/route" or "/proc/net/ipv6_route""

IPv6 autoconf obtains a default route later:

 May 30 16:40:57.365597 NetworkManager[1028]: <debug> [1559248857.3655] ndisc-lndp[0x56159d2560e0,"ens4"]: received router advertisement at 4
 May 30 16:40:57.365610 NetworkManager[1028]: <debug> [1559248857.3656] ndisc[0x56159d2560e0,"ens4"]: complete-address: using an stable-privacy address
 May 30 16:40:57.365617 NetworkManager[1028]: <debug> [1559248857.3656] ndisc[0x56159d2560e0,"ens4"]: neighbor discovery configuration changed [dGAR]:
 May 30 16:40:57.365622 NetworkManager[1028]: <debug> [1559248857.3656] ndisc[0x56159d2560e0,"ens4"]:   dhcp-level none
 May 30 16:40:57.365632 NetworkManager[1028]: <debug> [1559248857.3656] ndisc[0x56159d2560e0,"ens4"]:   gateway fe80::2 pref medium exp 1799.9990

 May 30 16:41:14.627108 crio[1381]: 00000000000000000000000000000000 00 00000000000000000000000000000000 00 fe800000000000000000000000000002 00000064 00000001 00000000 00000003     ens4

If you need to wait until IPv6 autoconf completes, set ipv6.may-fail=no
(or add IPV6_FAILURE_FATAL=yes in the ifcfg file).

Comment 12 Micah Abbott 2019-06-10 19:56:54 UTC
Hi Beniamino, thanks for your reply.

I've tried my scenario again using `IPV6_FAILURE_FATAL=yes` in the ifcfg file and I'm continuing to see `crio` fail to start for the same reason.  From what I can glean from the debug messages from NM, it looks like the IPv6 default route is being added after reaching `network-online.target`.  (However, I could be mis-interpreting what I am seeing...)

I've attached another journal;  I appreciate you continuing to help me with this particular issue.


FWIW, here's what my `ifcfg` file looks like:

```
$ cat /etc/sysconfig/network-scripts/ifcfg-ens4
DEVICE="ens4"
BOOTPROTO="dhcp"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
IPV4_FAILURE_FATAL="yes"
IPV6_FAILURE_FATAL="yes"
```
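
A guess at the cause: the file has no IPV6INIT, and without IPV6INIT="yes" the ifcfg-rh plugin typically leaves the profile with ipv6.method=ignore, in which case IPV6_FAILURE_FATAL has no effect. A sketch of the keys that would enable IPv6 autoconf and make it blocking:

```
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_FAILURE_FATAL="yes"
```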

Comment 13 Micah Abbott 2019-06-10 19:57:36 UTC
Created attachment 1579156 [details]
journal with NM trace enabled; crio debug build used; IPV6_FAILURE_FATAL=yes

Comment 14 Beniamino Galvani 2019-06-11 07:13:49 UTC
The connection now doesn't have IPv6 enabled (note the method = ignore):

 <debug> [1560196514.7395] ++ ipv6               [ 0x55f7f1bf0340 ]
 <debug> [1560196514.7395] ++ ipv6.addresses     = ((GPtrArray*) 0x55f7f1c2d3a0)
 <debug> [1560196514.7395] ++ ipv6.dns           = []
 <debug> [1560196514.7396] ++ ipv6.dns-search    = []
 <debug> [1560196514.7396] ++ ipv6.method        = 'ignore'
 <debug> [1560196514.7397] ++ ipv6.routes        = ((GPtrArray*) 0x55f7f1c2d340)

You should run:

 nmcli connection modify 'System ens4' ipv6.method auto ipv6.may-fail no

to enable IPv6 autoconf and let the connection wait for IPv6.
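
The result can then be verified with something like this (illustrative; the values shown are the expected outcome of the modify command):

    $ nmcli -f ipv6.method,ipv6.may-fail connection show "System ens4"
    ipv6.method:                            auto
    ipv6.may-fail:                          no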

Comment 15 Micah Abbott 2019-06-11 13:36:50 UTC
(In reply to Beniamino Galvani from comment #14)
> The connection now doesn't have IPv6 enabled (note the method = ignore):
> 
>  <debug> [1560196514.7395] ++ ipv6               [ 0x55f7f1bf0340 ]
>  <debug> [1560196514.7395] ++ ipv6.addresses     = ((GPtrArray*)
> 0x55f7f1c2d3a0)
>  <debug> [1560196514.7395] ++ ipv6.dns           = []
>  <debug> [1560196514.7396] ++ ipv6.dns-search    = []
>  <debug> [1560196514.7396] ++ ipv6.method        = 'ignore'
>  <debug> [1560196514.7397] ++ ipv6.routes        = ((GPtrArray*)
> 0x55f7f1c2d340)
> 
> You should run:
> 
>  nmcli connection modify 'System ens4' ipv6.method auto ipv6.may-fail no
> 
> to enable IPv6 autoconf and let the connection wait for IPv6.

Huh...not sure how that got disabled.  I'll try again with everything properly enabled.  Thanks for the pointer!

Comment 16 Colin Walters 2019-06-13 15:13:56 UTC
https://github.com/cri-o/cri-o/issues/2478

Comment 17 Beniamino Galvani 2019-08-21 13:55:09 UTC
Hi Micah,

do you have any updates on this?

Comment 18 Micah Abbott 2019-08-26 18:01:14 UTC
(In reply to Beniamino Galvani from comment #17)
> Hi Micah,
> 
> do you have any updates on this?

Unfortunately, I have not been able to return to this BZ and continue investigation.

The changes made to RHEL CoreOS, referenced in comment #16, have allowed us to move forward in this specific case.  

Feel free to close this as you see fit.  If it becomes a problem for us again, we will reopen this BZ.

Comment 19 Beniamino Galvani 2019-08-26 18:31:18 UTC
(In reply to Micah Abbott from comment #18) 
> Feel free to close this as you see fit.  If it becomes a problem for us
> again, we will reopen this BZ.

Okay, thank you.

