Bug 2142711

Summary: Static MAC address is not working inside the container in podman-netavark-macvlan
Product: Red Hat Enterprise Linux 8
Component: netavark
Version: 8.7
Hardware: x86_64
OS: Linux
Status: CLOSED ERRATA
Severity: low
Priority: unspecified
Target Milestone: rc
Reporter: Krishnakumar <kmarutha>
Assignee: Matthew Heon <mheon>
QA Contact: Joy Pu <ypu>
CC: arajendr, bbaude, jnovy, mheon, pthomas, ypu
Keywords: Triaged
Fixed In Version: netavark-1.3.0-1.el8
Last Closed: 2023-05-16 08:22:23 UTC
Type: Bug

Description Krishnakumar 2022-11-15 02:27:34 UTC
Description of problem:

A static MAC address is not applied inside the container when podman uses the netavark network backend with the macvlan driver.

Version-Release number of selected component (if applicable):

RHEL 8.6
Podman 4.1.1-7

How reproducible:


Steps to Reproduce:
1. Configure podman to use netavark as the network backend (see the containers.conf sketch after the output below):
~~~
# podman info
host:
  arch: amd64
  buildahVersion: 1.26.2
  cgroupControllers:
  - cpuset
  - cpu
  - cpuacct
  - blkio
  - memory
  - devices
  - freezer
  - net_cls
  - perf_event
  - net_prio
  - hugetlb
  - pids
  - rdma
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.1.2-2.module+el8.6.0+15917+093ca6f8.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.2, commit: 8c4f33ac0dcf558874b453d5027028b18d1502db'
  cpuUtilization:
    idlePercent: 99.24
    systemPercent: 0.32
    userPercent: 0.44
  cpus: 1
  distribution:
    distribution: '"rhel"'
    version: "8.6"
  eventLogger: file
  hostname: dhcp131-95.gsslab.pnq2.redhat.com
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-372.26.1.el8_6.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 237268992
  memTotal: 1011589120
  networkBackend: netavark
  ociRuntime:
    name: runc
    package: runc-1.1.3-2.module+el8.6.0+15917+093ca6f8.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.1.3
      spec: 1.0.2-dev
      go: go1.17.7
      libseccomp: 2.5.2
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-2.module+el8.6.0+16771+28dfca77.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 992735232
  swapTotal: 1069543424
  uptime: 183h 36m 1.43s (Approximately 7.62 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 4
    paused: 0
    running: 0
    stopped: 4
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 8579448832
  graphRootUsed: 6633136128
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 11
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.1.1
  Built: 1665582179
  BuiltTime: Wed Oct 12 19:12:59 2022
  GitCommit: ""
  GoVersion: go1.17.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.1.1
~~~
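
For reference, the backend selection normally lives in containers.conf; a minimal sketch (path and key per containers.conf(5) — that this host sets it there, rather than relying on the build default, is an assumption):

~~~
# /etc/containers/containers.conf
[network]
# Select the network backend; "netavark" and "cni" are the values supported on RHEL 8.
network_backend = "netavark"
~~~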

2. Create a podman network using the macvlan driver (a create command that would produce this network is sketched after the output below):

~~~
# podman  network ls
NETWORK ID    NAME        DRIVER
119db1ac0481  mcv0        macvlan
2f259bab93aa  podman      bridge
# podman network inspect mcv0
[
     {
          "name": "mcv0",
          "id": "119db1ac0481bd402c37c82717555604dc1df23b42ba268d52e7fc7f2a418027",
          "driver": "macvlan",
          "network_interface": "ens192",
          "created": "2022-11-14T19:28:57.518598198+05:30",
          "subnets": [
               {
                    "subnet": "10.74.128.0/22",
                    "gateway": "10.74.131.254"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": true,
          "options": {
               "mode": "bridge"
          },
          "ipam_options": {
               "driver": "host-local"
          }
     }
]
~~~
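
For completeness, a create command that would produce a network like this one (a sketch reconstructed from the inspect output above; comment 8 below uses the same form with parent=ens3):

~~~
# podman network create -d macvlan -o parent=ens192 -o mode=bridge \
    --subnet=10.74.128.0/22 --gateway=10.74.131.254 mcv0
~~~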

3. Run a container with the --mac-address option to request a static MAC:

~~~
# podman run --network mcv0 -it  --mac-address=66:99:8f:19:2e:74 --rm --name test1_bridge quay.io/rhn_support_kmarutha/ubi8custom ip a s|grep -A2 eth0
~~~

Actual results:

The requested MAC is ignored; the container's eth0 comes up with a randomly generated address instead of the requested 66:99:8f:19:2e:74:

~~~
# podman run --network mcv0 -it  --mac-address=66:99:8f:19:2e:74 --rm --name test1_bridge quay.io/rhn_support_kmarutha/ubi8custom ip a s|grep -A2 eth0
8: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether da:36:f2:83:74:45 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.74.128.1/22 brd 10.74.131.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::d836:f2ff:fe83:7445/64 scope link tentative
~~~

Expected results:

The container's eth0 should come up with the requested static MAC address (--mac-address=66:99:8f:19:2e:74).
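
A quick pass/fail check for this expectation (a sketch; it reads the MAC from sysfs inside the container rather than parsing ip output):

~~~
REQUESTED=66:99:8f:19:2e:74
ACTUAL=$(podman run --network mcv0 --mac-address="$REQUESTED" --rm \
         quay.io/rhn_support_kmarutha/ubi8custom cat /sys/class/net/eth0/address)
[ "$ACTUAL" = "$REQUESTED" ] && echo "OK: static MAC applied" || echo "FAIL: got $ACTUAL"
~~~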

Additional info:

It works in podman with the CNI network backend and the macvlan driver:

~~~
# podman info |grep -i network
  networkBackend: cni
  network:

# podman run --network mcv0 -it  --mac-address=66:99:8f:19:2e:74 --rm --name test1_bridge quay.io/rhn_support_kmarutha/ubi8custom ip a s|grep -A2 eth0
Trying to pull quay.io/rhn_support_kmarutha/ubi8custom:latest...
Getting image source signatures
Copying blob ebf24b1a8baf done  
Copying blob 9160faa7ad21 done  
Copying config c6e397febb done  
Writing manifest to image destination
Storing signatures
2: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 66:99:8f:19:2e:74 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 192.168.119.150/24 brd 192.168.119.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6499:8fff:fe19:2e74/64 scope link tentative
~~~
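
Switching between the two backends for a comparison like this means changing network_backend in containers.conf; a sketch (the sed assumes the key is already present, and containers.conf(5) recommends a podman system reset after changing backends, which deletes all existing containers and images):

~~~
# sed -i 's/^network_backend.*/network_backend = "cni"/' /etc/containers/containers.conf
# podman system reset   # WARNING: removes all containers, images, and networks
# podman info --format '{{.Host.NetworkBackend}}'
~~~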

Comment 3 Brent Baude 2022-11-21 18:13:10 UTC
This bug was fixed upstream and released in netavark v1.2. Because no netavark version was provided in this bug, I cannot comment further.

$ sudo podman run -it --rm --network mc --mac-address=0a:d1:8e:5c:fc:20 alpine ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
8: eth0@if2: <NO-CARRIER,BROADCAST,MULTICAST,UP,M-DOWN> mtu 1500 qdisc noqueue state LOWERLAYERDOWN qlen 1000
    link/ether 0a:d1:8e:5c:fc:20 brd ff:ff:ff:ff:ff:ff
    inet 192.168.99.4/24 brd 192.168.99.255 scope global eth0
       valid_lft forever preferred_lft forever
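
Since the report does not state which netavark build was installed, one way to capture it along with the active backend (a sketch; the rpm query assumes the distribution-packaged netavark, and the Go-template path matches podman 4.x info output):

~~~
# rpm -q netavark
# podman info --format '{{.Host.NetworkBackend}}'
~~~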

Comment 8 Joy Pu 2022-12-05 10:23:27 UTC
Reproduced with netavark-1.0.1-39.module+el8.6.0+16891+aaa3bd4c.x86_64.rpm. Tested with netavark-1.3.0-1.module+el8.8.0+17233+49402d35.x86_64 and it now works as expected: the MAC address inside the container matches the one given on the command line. Moving to verified:
# podman network create -d macvlan -o parent=ens3 --subnet="10.74.128.0/22" --gateway="10.74.131.254" mcv0 
mcv0
# podman network inspect mcv0
[
     {
          "name": "mcv0",
          "id": "2b29552e8867cbaa4718e476c6d24f4f41457c8632d578302feb51753c520fc9",
          "driver": "macvlan",
          "network_interface": "ens3",
          "created": "2022-12-05T04:08:11.542708741-05:00",
          "subnets": [
               {
                    "subnet": "10.74.128.0/22",
                    "gateway": "10.74.131.254"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": true,
          "ipam_options": {
               "driver": "host-local"
          }
     }
]
# podman run --network mcv0 -it  --mac-address=66:99:8f:19:2e:74 --rm --name test1_bridge quay.io/rhn_support_kmarutha/ubi8custom ip a s|grep -A2 eth0
Trying to pull quay.io/rhn_support_kmarutha/ubi8custom:latest...
Getting image source signatures
Copying blob ebf24b1a8baf done  
Copying blob 9160faa7ad21 done  
Copying config c6e397febb done  
Writing manifest to image destination
Storing signatures
2: eth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 66:99:8f:19:2e:74 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.74.128.1/22 brd 10.74.131.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::6499:8fff:fe19:2e74/64 scope link tentative

Comment 13 errata-xmlrpc 2023-05-16 08:22:23 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:2758

Comment 14 Red Hat Bugzilla 2023-09-19 04:30:03 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days