Bug 2105956

Summary: nmstate gc does not activate veth interfaces
Product: Red Hat Enterprise Linux 8
Component: NetworkManager
Hardware: Unspecified
OS: Unspecified
Reporter: Flavio Percoco <fpercoco>
Assignee: Fernando F. Mancera <ferferna>
QA Contact: Matej Berezny <mberezny>
CC: acabral, bgalvani, ferferna, fge, jiji, jishi, lrintel, mberezny, mshi, network-qe, rkhan, sfaye, sukulkar, till, vbenes, welin
Status: CLOSED ERRATA
Severity: high
Priority: high
Keywords: Triaged, ZStream
Target Milestone: rc
Flags: pm-rhel: mirror+
Fixed In Version: NetworkManager-1.39.10-1.el8
Doc Type: If docs needed, set a value
Cloned To: 2120569 (view as bug list)
Bug Blocks: 2120569
Deadline: 2022-07-18
Type: Bug
Last Closed: 2022-11-08 10:10:38 UTC

Description Flavio Percoco 2022-07-11 10:49:21 UTC
Description of problem:

We need to create a Linux bridge with one port on a physical interface and another on a veth pair. This works as expected with `nmstatectl apply`; however, when the profiles are generated with `nmstatectl gc`, the veth interfaces are not activated by default.

We are doing this through the assisted installer using the NMStateConfig CR, which is still on nmstate 1.3.

Setting priority to high, as this is blocking work on ZTPFW.

NMStateConfig:

```
---
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
 name: ztpfw-edgecluster0-cluster-master-0
 namespace: edgecluster0-cluster
 labels:
   nmstate_config_cluster_name: edgecluster0-cluster
spec:
 config:
   interfaces:
     - name: veth1
       type: veth
       state: up
       mtu: 1500
       veth:
         peer: veth2
     - name: veth2
       type: veth
       state: up
       mtu: 1500
       veth:
         peer: veth1
       ipv4:
         enabled: true
         address:
          - ip: 192.168.7.10
            prefix-length: 24
     - name: enp1s0
       type: ethernet
       state: up
       mtu: 1500
       mac-address: 'ee:ee:ee:ee:00:0e'
     - name: br-ztp
       type: linux-bridge
       state: up
       bridge:
         port:
           - name: veth1
           - name: enp1s0
       ipv4:
         enabled: true
         dhcp: true
         auto-dns: true
         auto-gateway: true
         auto-routes: true
       mtu: 1500
   routes:
     config:
       - destination: 192.168.7.0/24
         next-hop-address: 192.168.7.1
         next-hop-interface: veth2
       - destination: 0.0.0.0/0
         next-hop-address: 192.168.7.1
         metric: 501
         table-id: 254
         next-hop-interface: veth2
 interfaces:
   - name: "enp1s0"
     macAddress: 'ee:ee:ee:ee:00:0e'
```
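As a sanity check, the veth portion of the desired state above is internally consistent: each peer names the other, and every bridge port references a defined interface. So the missing activation is not a problem with the state itself. A stand-alone sketch of that check (our own hypothetical helper, not part of nmstate; the relevant state is inlined as a Python dict rather than parsed from the CR):

```python
# Hypothetical consistency check for the desired state above;
# not part of nmstate, purely illustrative.

state = {
    "interfaces": [
        {"name": "veth1", "type": "veth", "veth": {"peer": "veth2"}},
        {"name": "veth2", "type": "veth", "veth": {"peer": "veth1"}},
        {"name": "enp1s0", "type": "ethernet"},
        {"name": "br-ztp", "type": "linux-bridge",
         "bridge": {"port": [{"name": "veth1"}, {"name": "enp1s0"}]}},
    ]
}

def check_state(state):
    names = {i["name"] for i in state["interfaces"]}
    errors = []
    for iface in state["interfaces"]:
        if iface["type"] == "veth":
            peer = iface["veth"]["peer"]
            peer_iface = next(
                (i for i in state["interfaces"] if i["name"] == peer), None)
            # Each veth peer must exist and must point back at this interface.
            if peer_iface is None or \
                    peer_iface.get("veth", {}).get("peer") != iface["name"]:
                errors.append(f"{iface['name']}: asymmetric veth peer {peer}")
        # Every bridge port must be a defined interface.
        for port in iface.get("bridge", {}).get("port", []):
            if port["name"] not in names:
                errors.append(f"{iface['name']}: unknown port {port['name']}")
    return errors

print(check_state(state))  # prints: [] -- the state itself is consistent
```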

NetworkManager configs:

[root@edgecluster0-cluster-m0 core]# ls /etc/NetworkManager/system-connections/
br-ztp.nmconnection  enp1s0.nmconnection  veth1.nmconnection   veth2.nmconnection
[root@edgecluster0-cluster-m0 core]# cat /etc/NetworkManager/system-connections/*
[connection]
id=br-ztp
uuid=db385bd9-cb8a-4877-af8f-502f486fcfc3
type=bridge
autoconnect-slaves=1
interface-name=br-ztp
autoconnect=true
autoconnect-priority=1

[ethernet]
mtu=1500

[bridge]

[ipv4]
dhcp-client-id=mac
dhcp-timeout=2147483647
method=auto

[ipv6]
addr-gen-mode=eui64
dhcp-duid=ll
dhcp-iaid=mac
method=disabled

[proxy]

[connection]
id=enp1s0
uuid=cab877b4-beb5-4fea-b53a-52f45bc4770a
type=ethernet
interface-name=enp1s0
master=br-ztp
slave-type=bridge
autoconnect=true
autoconnect-priority=1

[ethernet]
cloned-mac-address=EE:EE:EE:EE:00:0E
mtu=1500

[bridge-port]

[connection]
id=veth1
uuid=871ac012-f302-4f19-ab4a-f6a14c906b90
type=veth
interface-name=veth1
master=br-ztp
slave-type=bridge
autoconnect=true
autoconnect-priority=1

[ethernet]
mtu=1500

[veth]
peer=veth2

[bridge-port]

[connection]
id=veth2
uuid=97ed9eec-46ba-4c2e-8242-4ff95ed7015e
type=veth
interface-name=veth2
autoconnect=true
autoconnect-priority=1

[ethernet]
mtu=1500

[veth]
peer=veth1

[ipv4]
address1=192.168.7.10/24
dhcp-client-id=mac
method=manual
route1=0.0.0.0/0,192.168.7.1,501
route1_options=table=254
route2=192.168.7.0/24,192.168.7.1
route2_options=table=254

[ipv6]
addr-gen-mode=eui64
dhcp-duid=ll
dhcp-iaid=mac
method=disabled

[proxy]

[root@edgecluster0-cluster-m0 core]#
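Note that every generated keyfile, including both veth profiles, carries `autoconnect=true`, so NetworkManager is expected to bring the devices up on its own. A quick stdlib check of that (the keyfile content is abbreviated from the veth2 profile above; `configparser` is enough for this simple INI-style case):

```python
import configparser

# Abbreviated copy of the veth2 keyfile shown above.
veth2_keyfile = """\
[connection]
id=veth2
type=veth
interface-name=veth2
autoconnect=true
autoconnect-priority=1

[veth]
peer=veth1
"""

cfg = configparser.ConfigParser()
cfg.read_string(veth2_keyfile)

# The profile explicitly requests activation at boot...
print(cfg["connection"]["autoconnect"])  # prints: true
# ...yet the `nmcli con` output below shows no DEVICE bound to
# either veth profile.
```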

Network state:

[root@edgecluster0-cluster-m0 core]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br-ztp state UP group default qlen 1000
    link/ether ee:ee:ee:ee:00:0e brd ff:ff:ff:ff:ff:ff
6: veth1@veth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ae:46:41:ee:a3:70 brd ff:ff:ff:ff:ff:ff
7: veth2@veth1: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue state LOWERLAYERDOWN group default qlen 1000
    link/ether 3e:8a:83:6d:73:de brd ff:ff:ff:ff:ff:ff
8: br-ztp: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ee:ee:ee:ee:00:0e brd ff:ff:ff:ff:ff:ff
    inet 192.168.150.201/24 brd 192.168.150.255 scope global dynamic noprefixroute br-ztp
       valid_lft 3524sec preferred_lft 3524sec
9: cni-podman0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 86:c5:10:ee:50:cf brd ff:ff:ff:ff:ff:ff
    inet 10.88.0.1/16 brd 10.88.255.255 scope global cni-podman0
       valid_lft forever preferred_lft forever
    inet6 fe80::84c5:10ff:feee:50cf/64 scope link
       valid_lft forever preferred_lft forever

[root@edgecluster0-cluster-m0 core]# ip r
default via 192.168.150.1 dev br-ztp proto dhcp metric 425
10.88.0.0/16 dev cni-podman0 proto kernel scope link src 10.88.0.1 linkdown
192.168.150.0/24 dev br-ztp proto kernel scope link src 192.168.150.201 metric 425

[root@edgecluster0-cluster-m0 core]# nmcli con
NAME         UUID                                  TYPE      DEVICE
br-ztp       db385bd9-cb8a-4877-af8f-502f486fcfc3  bridge    br-ztp
cni-podman0  491ff2cd-c995-4528-842b-c2ead7ec9d3b  bridge    cni-podman0
enp1s0       cab877b4-beb5-4fea-b53a-52f45bc4770a  ethernet  enp1s0
veth1        871ac012-f302-4f19-ab4a-f6a14c906b90  veth      --
veth2        97ed9eec-46ba-4c2e-8242-4ff95ed7015e  veth      --

Comment 1 Fernando F. Mancera 2022-07-12 09:05:59 UTC
We have confirmed this is a NetworkManager bug: activation fails at boot when each veth peer has its own profile. I have opened https://gitlab.freedesktop.org/NetworkManager/NetworkManager/-/merge_requests/1293
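The failure mode can be pictured as an ordering problem: with one profile per veth peer, a naive activation pass that requires the peer device to already exist never makes progress, because each profile waits on the other. A minimal Python model of that deadlock (illustrative only — this is not NetworkManager's actual code, and the function names are made up):

```python
def activate_naive(profiles):
    """One-pass-at-a-time activation where a veth profile (wrongly)
    requires its peer device to exist before it may start."""
    devices, activated = set(), []
    progress = True
    while progress:
        progress = False
        for name, peer in profiles:
            if name in devices:
                continue
            if peer is not None and peer not in devices:
                # Waits forever: the peer is created by a profile
                # that is in turn waiting on us.
                continue
            devices.add(name)
            activated.append(name)
            progress = True
    return activated

def activate_fixed(profiles):
    """With the fix, activating a veth profile creates BOTH ends of
    the pair, so the peer requirement is satisfied immediately."""
    devices, activated = set(), []
    for name, peer in profiles:
        devices.add(name)
        if peer is not None:
            devices.add(peer)  # veth pairs come up together
        activated.append(name)
    return activated

veths = [("veth1", "veth2"), ("veth2", "veth1")]
print(activate_naive(veths))  # prints: [] -- neither profile ever starts
print(activate_fixed(veths))  # prints: ['veth1', 'veth2']
```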

Comment 2 Vladimir Benes 2022-07-12 09:18:15 UTC
I reported this a long time ago, but it was never fixed. I am quite unsure how it differs from https://bugzilla.redhat.com/show_bug.cgi?id=1915284#c3. Did the udev rule removal help? Funnily enough, the original bug was auto-closed just today :-)

Comment 3 Gris Ge 2022-07-12 11:35:02 UTC
*** Bug 2036023 has been marked as a duplicate of this bug. ***

Comment 11 Fernando F. Mancera 2022-10-18 05:33:54 UTC
*** Bug 1915284 has been marked as a duplicate of this bug. ***

Comment 13 errata-xmlrpc 2022-11-08 10:10:38 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (NetworkManager bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:7680

Comment 14 sfaye 2023-01-24 09:21:43 UTC
*** Bug 2135595 has been marked as a duplicate of this bug. ***