Bug 1982821 - nmstatectl fails in handler container
Summary: nmstatectl fails in handler container
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Ben Nemec
QA Contact: zhaozhanqi
URL:
Whiteboard:
Duplicates: 1983816 (view as bug list)
Depends On:
Blocks:
 
Reported: 2021-07-15 19:25 UTC by Ben Nemec
Modified: 2022-12-07 20:58 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-12-07 20:58:22 UTC
Target Upstream Version:
Embargoed:


Attachments
Output of a manual nmstatectl run (78.79 KB, text/plain), 2021-07-15 19:25 UTC, Ben Nemec

Description Ben Nemec 2021-07-15 19:25:56 UTC
Created attachment 1802011 [details]
Output of a manual nmstatectl run

Description of problem: nmstatectl calls in the handler container are failing on 4.8.


Version-Release number of selected component (if applicable):
host:
nmcli tool, version 1.30.0-9.el8_4

container:
nmcli tool, version 1.30.0-7.el8

nmstatectl:
1.0.2


How reproducible: Always


Steps to Reproduce:
1. Install operator on a 4.8 cluster
2. Attempt to apply an NNCP


Actual results: NNCP fails to apply due to error running nmstatectl.


Expected results: NNCP is applied.


Additional info: nmstatectl from the handler just doesn't work. Attached is the output from attempting to run nmstatectl manually.
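
For anyone reproducing this, the failing call can also be exercised directly in the handler pod with something like the following (namespace and pod name here are placeholders, adjust for your deployment):

oc exec -n <nmstate namespace> <nmstate-handler pod> -- nmstatectl show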

Comment 1 Ben Nemec 2021-07-20 14:21:18 UTC
*** Bug 1983816 has been marked as a duplicate of this bug. ***

Comment 2 Ben Nemec 2021-07-20 14:25:08 UTC
When applying a policy that looks like:

interfaces:
- name: enp2s0
  state: up
  type: ethernet
- ipv4:
    enabled: false
  ipv6:
    enabled: false
  name: enp3s0
  state: down
  type: ethernet
- ipv4:
    enabled: false
  ipv6:
    enabled: false
  name: enp4s0
  state: down
  type: ethernet

It appears that after NMState brings enp3s0 and enp4s0 down, the NetworkManager default "Wired Connection" brings them right back up. This can be seen in a log message from NM:

device (enp3s0): Activation: starting connection 'Wired Connection' (fb456b53-0552-4794-9200-7b080ed93aeb)

This essentially undoes the NMState changes and causes the tests to fail. While this may not be a problem in some real-world cases, the fact that it fails the e2e tests is a significant issue, as it makes it impossible for us to regularly test the operator.
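
A quick way to confirm the takeover on the node is to check which connection each device is bound to (plain nmcli, nothing specific to our setup):

nmcli -f NAME,UUID,TYPE,DEVICE connection show --active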

I found a similar-sounding bug here: https://bugzilla.redhat.com/show_bug.cgi?id=1896866

I don't know that udev rules are a particularly good solution for us, since we don't know ahead of time which interfaces the user will want to manage, but that bug does cover some other options they tried that did not solve the problem.
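
For reference, the udev approach from that bug boils down to a rule like the following (interface name is just an example), which is exactly why it doesn't fit us: we'd need to know the names up front.

# /etc/udev/rules.d/99-unmanaged.rules (illustration only)
ACTION=="add|move", SUBSYSTEM=="net", KERNEL=="enp3s0", ENV{NM_UNMANAGED}="1"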

Comment 3 Ben Nemec 2021-07-22 15:12:00 UTC
Okay, I did some more digging on this and I think I understand what is happening. In 4.8, we added the "ip=dhcp" option to the kernel boot args in order to get consistent networking in dual stack environments. In some cases the v4 and v6 DHCP config would differ slightly (hostnames, for example), and if both were allowed to run at the same time some nodes would pick one and some nodes the other.

Anyway, the combination of that change and the behavior described in https://github.com/coreos/fedora-coreos-config/pull/773 means that we no longer get the default behavior: an explicit default_connection.nmconnection file is written to /etc/NetworkManager/system-connections. This connection has multi-connect=3 set, so all of the interfaces end up managed by the same connection id (I'm not sure what that last connection is, but it doesn't seem relevant here):

Wired Connection  8bd8ffb0-11f9-45b0-b0c3-dc9592274742  ethernet  enp1s0 
Wired Connection  8bd8ffb0-11f9-45b0-b0c3-dc9592274742  ethernet  enp3s0 
Wired Connection  8bd8ffb0-11f9-45b0-b0c3-dc9592274742  ethernet  enp2s0 
Wired Connection  8bd8ffb0-11f9-45b0-b0c3-dc9592274742  ethernet  enp4s0 
Wired Connection  a91f4065-5dd4-4013-bb4f-3391c0fcaa41  ethernet  --
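
The multi-connect setting can be confirmed directly against the shared profile (plain nmcli, using the uuid from the list above):

nmcli -f connection.multi-connect connection show 8bd8ffb0-11f9-45b0-b0c3-dc9592274742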

I also found https://github.com/nmstate/nmstate/pull/1499 which adds support for this scenario to nmstate. That mostly seems to be working fine, with one exception: disabling interfaces. While I can do things like configure bonds from these connections, I am unable to apply a policy such as:

interfaces:
- ipv4:
    enabled: false
  ipv6:
    enabled: false
  name: enp3s0
  state: down
  type: ethernet

The reason seems to be that taking down an interface doesn't trigger creation of a separate connection. As a result, after the interface is set down, the still-existing Wired Connection for that interface brings it right back up.

I don't know if this is reasonable to fix in nmstate, but it's confusing behavior that causes nmstate to fail for no immediately apparent reason. I'm going to try a workaround for the e2e tests where I first apply an NNCP that will hopefully trigger separate connections to be created for each interface (sketched below). However, if a user were to try to do something similar they would encounter the same problem.
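
Roughly what I have in mind for that workaround NNCP (a sketch; interface names are from the test environment above, and explicitly enabling ipv4 is just one way to force a dedicated profile per device):

interfaces:
- name: enp3s0
  state: up
  type: ethernet
  ipv4:
    dhcp: true
    enabled: true
- name: enp4s0
  state: up
  type: ethernet
  ipv4:
    dhcp: true
    enabled: true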

Comment 4 Ben Nemec 2021-07-26 16:30:55 UTC
I tried a number of workarounds, including applying an NNCP that triggers creation of individual connections for each interface, as well as deleting the default_connection.nmconnection file. None of that was sufficient. It seems I was wrong to dismiss the extra connection from my previous comment: even when I address the first default_connection, the extra one takes over and brings interfaces back up. That connection appears to be defined in /run/NetworkManager/system-connections, but I haven't been able to figure out where it comes from. I can't even delete it because it gets recreated on reboot.

At this point I think I need help to resolve this. I'm not sure if there's something nmstate can do to handle the situation, or if I need to modify the NetworkManager configuration to avoid these default connections (although I'm not sure if that would cause other problems since the connection file is presumably there for a reason).
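
If the NetworkManager route turns out to be acceptable, I believe the relevant knob is no-auto-default (a sketch, untested; and I assume it only suppresses NM's own auto-generated wired profiles, not the file written by the initrd path):

# /etc/NetworkManager/conf.d/99-no-auto-default.conf
[main]
no-auto-default=*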

Comment 5 Ben Nemec 2021-07-28 20:12:19 UTC
For reference, here is default_connection.nmconnection. The weird thing is I don't think this is even correct. The system where this was collected had ip=dhcp in its kernel args, yet you can see that ipv6 is also set to auto.

[root@worker-1 core]# cat /etc/NetworkManager/system-connections/default_connection.nmconnection 
[connection]
id=Wired Connection
uuid=8ba60aec-4dd3-47fe-a737-99af4471bee9
type=ethernet
autoconnect-retries=1
multi-connect=3
permissions=
wait-device-timeout=60000

[ethernet]
mac-address-blacklist=

[ipv4]
dhcp-timeout=90
dns-search=
may-fail=false
method=auto

[ipv6]
addr-gen-mode=eui64
dhcp-timeout=90
dns-search=
method=auto

[proxy]
[root@worker-1 core]# cat /proc/cmdline
BOOT_IMAGE=(hd0,gpt3)/ostree/rhcos-4db93642d1d8298df8c1c9b655ba64df137a7572854a8dbc6adc8ce6431a9cef/vmlinuz-4.18.0-305.10.2.el8_4.x86_64 ip=dhcp random.trust_cpu=on console=tty0 console=ttyS0,115200n8 ostree=/ostree/boot.0/rhcos/4db93642d1d8298df8c1c9b655ba64df137a7572854a8dbc6adc8ce6431a9cef/0 ignition.platform.id=openstack root=UUID=88afa9a3-a428-4115-87e6-6bd8d33b6bf2 rw rootflags=prjquota

It looks like nm-initrd-generator does not do the right thing:
[root@worker-1 core]# /usr/libexec/nm-initrd-generator ip=dhcp -s
...snip...
[ipv6]
addr-gen-mode=eui64
dhcp-timeout=90
dns-search=
method=auto

With ip=dhcp6 it correctly sets the ipv4 method to disabled.
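
For reference, that was checked the same way as above:

/usr/libexec/nm-initrd-generator ip=dhcp6 -s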

Comment 7 Andrew Bays 2021-10-07 10:04:26 UTC
My team and I are seeing something similar when we try to re-apply an NNCP that looks like so:


apiVersion: nmstate.io/v1beta1
kind: NodeNetworkConfigurationPolicy
metadata:
  labels:
    osp-director.openstack.org/controller: osp-openstacknet
    osp-director.openstack.org/namespace: openstack
  name: br-ex
spec:
  desiredState:
    interfaces:
    - bridge:
        options:
          stp:
            enabled: false
        port:
        - name: enp6s0
      description: Linux bridge with enp6s0 as a port
      name: br-ex
      state: up
      type: linux-bridge
  nodeSelector:
    node-role.kubernetes.io/worker: ""


The first apply against a particular cluster and nodeSelector-matching node works.  However, if the NNCP is deleted and then re-applied later on, the nmstate handler container reports a mismatch error between the "desired" and "current" states.  It would seem that removing the NNCP does not remove the bridge, and the fact that the bridge is present on a subsequent re-apply results in a mismatch, because of the many additional fields and values that appear in the "current" state during comparison.  Here is an example of what shows up in the log for the nmstate handler container:


current
=======
---
name: br-ex
type: linux-bridge
state: up
bridge:
  options:
    group-addr: XX:XX:XX:XX:XX:XX
    group-forward-mask: 0
    hash-max: 4096
    mac-ageing-time: 300
    multicast-last-member-count: 2
    multicast-last-member-interval: 100
    multicast-querier: false
    multicast-querier-interval: 25500
    multicast-query-interval: 12500
    multicast-query-response-interval: 1000
    multicast-query-use-ifaddr: false
    multicast-router: 1
    multicast-snooping: true
    multicast-startup-query-count: 2
    multicast-startup-query-interval: 3125
    stp:
      enabled: false
      forward-delay: 15
      hello-time: 2
      max-age: 20
      priority: 32768
  port:
  - name: enp6s0
    stp-hairpin-mode: false
    stp-path-cost: 100
    stp-priority: 32
    vlan:
      mode: access
      trunk-tags: []
description: Linux bridge with enp6s0 as a port
ipv4:
  enabled: false
ipv6:
  enabled: false
lldp:
  enabled: false
mac-address: XX:XX:XX:XX:XX:XX
mtu: 1500

desired
=======
---
name: br-ex
type: linux-bridge
state: up
bridge:
  options:
    stp:
      enabled: false
  port:
  - name: enp6s0
    stp-hairpin-mode: false
    stp-path-cost: 100
    stp-priority: 32
    vlan:
      enable-native: false
      mode: trunk
      trunk-tags: <thousands of IDs>...


Comparing these two will clearly fail the actual "match" function used by libnmstate: https://github.com/nmstate/nmstate/blob/a972b09a1d42b00f4195cacc90e372ea61c50244/libnmstate/state.py#L65-L74.

Our current workaround is to use "oc debug node/<worker node name> -- ip l del br-ex" on each worker node before re-applying the NNCP.  That way there is no existing br-ex bridge to cause a state mismatch during NNCP re-apply.
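
Scripted across all matching workers, that looks something like this (a sketch; assumes the worker label from the nodeSelector above):

for node in $(oc get nodes -l node-role.kubernetes.io/worker= -o name); do
  oc debug "$node" -- ip l del br-ex
done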

Comment 8 Ben Nemec 2021-10-07 15:35:51 UTC
I think that's a slightly different issue. In our case, the resulting configuration on the node is legitimately wrong. In your case it seems like nmstate is incorrectly failing when the thing it was asked to create already exists. I think it's likely a different fix will be needed for these two issues so I would suggest opening a separate one for your scenario.

Comment 9 Andrew Bays 2021-10-07 16:27:55 UTC
(In reply to Ben Nemec from comment #8)
> I think that's a slightly different issue. In our case, the resulting
> configuration on the node is legitimately wrong. In your case it seems like
> nmstate is incorrectly failing when the thing it was asked to create already
> exists. I think it's likely a different fix will be needed for these two
> issues so I would suggest opening a separate one for your scenario.

Understood.  Thank you for the prompt feedback, Ben.

Comment 11 Dave Gordon 2022-12-07 20:58:22 UTC
Closing out the bz due to staleness and its severity.

