Bug 2091840

Summary: Failing to build images due to higher mtu on podman0 bridge
Product: Red Hat Enterprise Linux 9
Component: buildah
Version: 9.0
Hardware: x86_64
OS: Linux
Status: CLOSED NOTABUG
Severity: medium
Priority: unspecified
Target Milestone: rc
Target Release: ---
Reporter: Sandeep Yadav <sandyada>
Assignee: Paul Holzinger <pholzing>
QA Contact: atomic-bugs <atomic-bugs>
CC: apevec, bbaude, dwalsh, mheon, pholzing, pthomas, tsweeney, umohnani
Type: Bug
Last Closed: 2022-06-08 10:58:18 UTC

Description Sandeep Yadav 2022-05-31 07:27:00 UTC
Description of problem:

After buildah moved from the host network namespace to a private network namespace, we hit bug [1] because the MTU on the `cni-podman0` bridge was higher than that of the local interface.
In bug [1] we were told to set the correct MTU in the config file, and we were able to solve that issue by setting the MTU in /etc/cni/net.d/87-podman-bridge.conflist.

Now, with the introduction of netavark, we are again facing the same issue during container image builds (reported in bug [2]): the MTU on the `podman0` bridge is higher than that of the interface.

~~~
$ ip link | grep mtu
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 1000
4: br-ex: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1350 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
11: podman0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
12: veth49b24621@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman0 state UP mode DEFAULT group default qlen 1000
~~~

We need help with the following questions:

* Which is the default and recommended backend in RHEL 9.0: netavark or CNI?
* How do we set the MTU with netavark, and in which configuration file? (For CNI we were setting the MTU in /etc/cni/net.d/87-podman-bridge.conflist.)


[1] https://bugzilla.redhat.com/show_bug.cgi?id=2060932
[2] https://bugzilla.redhat.com/show_bug.cgi?id=2091816

Version-Release number of selected component (if applicable):

RHEL-9

Versions of podman, buildah, and netavark:
~~~
podman-catatonit-4.0.2-7.el9_0.x86_64
podman-4.0.2-7.el9_0.x86_64
buildah-1.24.2-4.el9_0.x86_64
netavark-1.0.1-34.el9_0.x86_64
~~~


How reproducible:

Every time


Steps to Reproduce:
1) Run an instance in an OpenStack environment where the MTU on the interface is set lower than 1500 (it was 1450 in our case).
2) Build a container image with buildah and try to pull content from, or ping, an outside network with MTU 1500 (a minimal reproducer is sketched below).
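
A hypothetical minimal reproducer; the base image and package name are illustrative, and any build step that downloads data over the network should hit the same MTU mismatch:

~~~
# Hypothetical Containerfile: any RUN step that pulls data over the
# network (dnf metadata, curl, etc.) is enough to trigger the hang.
cat > Containerfile <<'EOF'
FROM registry.access.redhat.com/ubi9/ubi
RUN dnf -y install vim-minimal
EOF

# Build it; with the bridge MTU (1500) above the uplink MTU (1450),
# the dnf download inside the build stalls and times out.
buildah bud -t mtu-test .
~~~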

Actual results:

* dnf install during the image build fails with a timeout; curl is unable to download the repo metadata:
~~~  
masked-repo                         2.7 kB/s | 365 kB     02:16    
Errors during downloading metadata for repository 'masked-repo':
  - Curl error (28): Timeout was reached for <url masked for sanity> [Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds]
~~~

Expected results:

The image build should pass.


Additional info:


If we switch back to CNI and set the correct MTU in 87-podman-bridge.conflist, things work.

# diff /usr/share/containers/containers.conf /usr/share/containers/containers.conf.backup 
265c265
< network_backend = "cni"
---
> #network_backend = ""


/etc/cni/net.d/87-podman-bridge.conflist
~~~
"mtu": 1450
~~~
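
For reference, a minimal sketch of where that mtu key sits in /etc/cni/net.d/87-podman-bridge.conflist, assuming the stock bridge plugin layout (the other default plugins such as portmap and firewall are omitted, and values other than mtu may differ on your host):

~~~
{
  "cniVersion": "0.4.0",
  "name": "podman",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni-podman0",
      "isGateway": true,
      "ipMasq": true,
      "mtu": 1450,
      "ipam": {
        "type": "host-local",
        "routes": [{ "dst": "0.0.0.0/0" }],
        "ranges": [[{ "subnet": "10.88.0.0/16", "gateway": "10.88.0.1" }]]
      }
    }
  ]
}
~~~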

Comment 1 Tom Sweeney 2022-05-31 21:11:11 UTC
Paul, PTAL

Comment 2 Paul Holzinger 2022-06-01 12:11:46 UTC
You can set the mtu in the config file for netavark as well.

By default the network config is stored in memory, but you can create a config file at /etc/containers/networks/podman.json (the default path; it can be changed in containers.conf).
Podman will use this file over the built-in default.

You should be able to use this config:

{
  "name": "podman",
  "id": "2f259bab93aaaaa2542ba43ef33eb990d0999ee1b9924b557b7be53c0b7a1bb9",
  "driver": "bridge",
  "network_interface": "podman0",
  "created": "2022-06-01T14:07:03.536116064+02:00",
  "subnets": [
    {
      "subnet": "10.88.0.0/16",
      "gateway": "10.88.0.1"
    }
  ],
  "ipv6_enabled": false,
  "internal": false,
  "dns_enabled": false,
  "ipam_options": {
    "driver": "host-local"
  },
  "options": {
    "mtu": "1400"
  }
}


Basically, just run `podman network inspect podman | jq .[]`, add the options block with the mtu to the output, and write it to the config file (see the sketch below).
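
A hedged sketch of that workflow, assuming jq is installed; the 1400 value is only an example, use the MTU of your uplink interface:

~~~
# Dump the current default network config, merge in an options block
# with the desired MTU, and write it to the netavark config path.
# 1400 is an example value only; use the MTU of your uplink interface.
podman network inspect podman \
  | jq '.[0] + {"options": {"mtu": "1400"}}' \
  > /etc/containers/networks/podman.json
~~~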

Comment 3 Sandeep Yadav 2022-06-03 14:06:13 UTC
Hello Paul,

Thanks for answering my query on how to set the MTU.

* Could someone please confirm which is the default and recommended backend in RHEL 9.0: netavark or CNI?

Comment 4 Paul Holzinger 2022-06-03 14:20:55 UTC
Netavark should be the default starting with RHEL 9, and we recommend using it. If you encounter any problems with it, please let us know.

Comment 7 Sandeep Yadav 2022-06-08 10:58:18 UTC
Thank you so much, Paul.

I don't have any further queries on this; closing this bug.