Bug 2126243 - Podman container got global IPv6 address unexpectedly even when macvlan network is created for pure IPv4 network
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: netavark
Version: 8.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: rc
Assignee: Jindrich Novy
QA Contact: Joy Pu
URL:
Whiteboard:
Depends On:
Blocks: 2133390 2133391
 
Reported: 2022-09-13 02:01 UTC by shuai.ma@veritas.com
Modified: 2023-05-16 09:08 UTC
CC: 17 users

Fixed In Version: netavark-1.5.0-4.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2133390 2133391 (view as bug list)
Environment:
Last Closed: 2023-05-16 08:21:12 UTC
Type: Bug
Target Upstream Version:
Embargoed:




Links
GitHub containers/netavark pull 434 (Merged): Do not use ipv6 autoconf - last updated 2022-10-10 06:48:42 UTC
Red Hat Issue Tracker RHELPLAN-133786 - last updated 2022-09-13 02:03:26 UTC
Red Hat Product Errata RHSA-2023:2758 - last updated 2023-05-16 08:22:56 UTC

Description shuai.ma@veritas.com 2022-09-13 02:01:17 UTC
Description of problem:
A macvlan network was created for a pure IPv4 network, but the Podman container unexpectedly got a global IPv6 address.

Version-Release number of selected component (if applicable):

[root@eagappflx248 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.6 (Ootpa)

[root@eagappflx248 ~]# podman version
Client:       Podman Engine
Version:      4.1.1
API Version:  4.1.1
Go Version:   go1.17.7
Built:        Mon Jul 11 14:56:53 2022
OS/Arch:      linux/amd64


How reproducible:


Steps to Reproduce:
1. Create a macvlan network with a pure IPv4 subnet:
docker network create -d macvlan --subnet=10.85.40.0/21 --gateway=10.85.40.1 -o parent=nic0 nic0

2. docker run -itd --name c1 --ip=10.85.41.247 --network nic0 flex.io/uss-engine:17.0

3. Check the container IP and find that both a global IPv6 address and a link-local IPv6 address are present:
[root@eagappflx223 ~]# docker exec -ti c1 bash
engine : ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.85.42.1  netmask 255.255.248.0  broadcast 10.85.47.255
        inet6 2001:db8:1:0:1023:21ff:fef5:974d  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::1023:21ff:fef5:974d  prefixlen 64  scopeid 0x20<link>
        ether 12:23:21:f5:97:4d  txqueuelen 0  (Ethernet)
        RX packets 363  bytes 23318 (22.7 KiB)
        RX errors 0  dropped 41  overruns 0  frame 0
        TX packets 12  bytes 980 (980.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Actual results:
IPv6 addresses are auto-configured for the container even though it is on a pure IPv4 network.

Expected results:
No IPv6 addresses should be auto-configured for a container on a pure IPv4 network.

Additional info:
Our lab environment has a DHCP server configured, and our DNS server can resolve a hostname to both an IPv4 and an IPv6 address. For example:
[root@eagappflx128 hostadmin]# nslookup eagappflx128p1
Server:   172.16.8.12
Address:  172.16.8.12#53

Name:     eagappflx128p1.engba.veritas.com
Address:  10.85.41.137
Name:     eagappflx128p1.engba.veritas.com
Address:  2620:128:f021:9014::1b

Comment 1 Tom Sweeney 2022-09-13 21:52:35 UTC
@shuai.ma it might be end-of-day brain on my part, but I'm not seeing a Podman problem here. The only call to Podman that I see in your notes is `podman version`. All of the other calls are to Docker, no? Or have you aliased Podman to Docker?

Comment 2 shuai.ma@veritas.com 2022-09-14 01:31:46 UTC
Hi Tom,
yes, Podman is aliased to Docker.

BTW,
I tried to set the following parameters during `podman run` to work around the issue, but they had no effect. Are these settings not supported in a container?
net.ipv6.conf.all.accept_ra=0
net.ipv6.conf.all.autoconf=0

BR, Shuai

Comment 3 Tom Sweeney 2022-09-14 13:24:09 UTC
Shuai, thanks for the update. I think this is a Podman issue rather than a podman-container issue, so I've changed the component. @mheon could you take a look please?

Comment 4 Matthew Heon 2022-09-14 13:51:50 UTC
Please provide the full output of `podman info` so we can determine the network backend in use.

Comment 5 shuai.ma@veritas.com 2022-09-15 01:01:50 UTC
# podman info
host:
  arch: amd64
  buildahVersion: 1.26.2
  cgroupControllers:
  - cpuset
  - cpu
  - cpuacct
  - blkio
  - memory
  - devices
  - freezer
  - net_cls
  - perf_event
  - net_prio
  - hugetlb
  - pids
  - rdma
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.1.2-2.module+el8.6.0+15917+093ca6f8.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.2, commit: 8c4f33ac0dcf558874b453d5027028b18d1502db'
  cpuUtilization:
    idlePercent: 99.28
    systemPercent: 0.17
    userPercent: 0.55
  cpus: 48
  distribution:
    distribution: '"rhel"'
    version: "8.6"
  eventLogger: file
  hostname: eagappflx248
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-372.19.1.el8_6.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 249346179072
  memTotal: 269723447296
  networkBackend: cni
  ociRuntime:
    name: runc
    package: runc-1.0.3-2.module+el8.6.0+14877+f643d2d6.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.3
      spec: 1.0.2-dev
      go: go1.17.7
      libseccomp: 2.5.2
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_MKNOD,CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /etc/opt/veritas/flex/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /bin/slirp4netns
    package: slirp4netns-1.2.0-2.module+el8.6.0+15917+093ca6f8.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 67076734976
  swapTotal: 67108859904
  uptime: 40h 14m 16.61s (Approximately 1.67 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
  - filevol
  - veritas
registries:
  docker.io:
    Blocked: true
    Insecure: false
    Location: docker.io
    MirrorByDigestOnly: false
    Mirrors: null
    Prefix: docker.io
    PullFromMirror: ""
  flex.io:
    Blocked: false
    Insecure: false
    Location: console:8443
    MirrorByDigestOnly: false
    Mirrors: null
    Prefix: flex.io
    PullFromMirror: ""
  registry.access.redhat.com:
    Blocked: true
    Insecure: false
    Location: registry.access.redhat.com
    MirrorByDigestOnly: false
    Mirrors: null
    Prefix: registry.access.redhat.com
    PullFromMirror: ""
  registry.redhat.io:
    Blocked: true
    Insecure: false
    Location: registry.redhat.io
    MirrorByDigestOnly: false
    Mirrors: null
    Prefix: registry.redhat.io
    PullFromMirror: ""
  search:
  - flex.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 157209591808
  graphRootUsed: 5615824896
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /tmp
  imageStore:
    number: 9
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 4.1.1
  Built: 1657551413
  BuiltTime: Mon Jul 11 14:56:53 2022
  GitCommit: ""
  GoVersion: go1.17.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.1.1

Comment 6 Matthew Heon 2022-09-15 13:14:13 UTC
`   networkBackend: cni`

CNI plugins are in use. The Podman team recommends migrating to the Netavark network stack (as we are better able to support it when things go wrong); if that is impossible, we can reassign this issue to the CNI team for investigation.
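
(For reference, a minimal sketch of switching the backend, assuming the stock /etc/containers/containers.conf location on RHEL 8; note that `podman system reset` removes all existing containers, images, networks and volumes:)

# /etc/containers/containers.conf
[network]
network_backend = "netavark"

# then reset Podman storage so the new backend is picked up
podman system reset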

Comment 7 shuai.ma@veritas.com 2022-09-16 01:04:17 UTC
Hi Matthew,

Is Netavark fully supported in RHEL 8.6?

BR, Shuai

Comment 8 shuai.ma@veritas.com 2022-09-16 06:56:02 UTC
Update: I changed the network backend from CNI to netavark; still the same problem.


[root@eagappflx223 ~]# podman network inspect nic0
[
     {
          "name": "nic0",
          "id": "a43c59dd854a14e83516a321688546b85dc593f220c21af728d961414435180a",
          "driver": "macvlan",
          "network_interface": "nic0",
          "created": "2022-09-16T06:39:30.925390932Z",
          "subnets": [
               {
                    "subnet": "10.85.40.0/21",
                    "gateway": "10.85.40.1"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": true,
          "ipam_options": {
               "driver": "host-local"
          }
     }
]
[root@eagappflx223 ~]# podman info | grep netav
  networkBackend: netavark

[root@eagappflx223 ~]# podman run -itd --name c1 --ip=10.85.41.247 --network nic0 flex.io/uss-engine:15.0.2
d6fe18855b222abe8df478da52d8a387e03f6f6afff1817934b2db031f4beafe
[root@eagappflx223 ~]# podman exec -it c1 bash
engine : ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.85.41.247  netmask 255.255.248.0  broadcast 10.85.47.255
        inet6 2001:db8:1:0:9008:aeff:fe4e:cb97  prefixlen 64  scopeid 0x0<global>
        inet6 fe80::9008:aeff:fe4e:cb97  prefixlen 64  scopeid 0x20<link>
        ether 92:08:ae:4e:cb:97  txqueuelen 1000  (Ethernet)
        RX packets 319  bytes 21071 (20.5 KiB)
        RX errors 0  dropped 39  overruns 0  frame 0
        TX packets 13  bytes 1102 (1.0 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Comment 9 Matthew Heon 2022-09-16 13:15:03 UTC
Paul - mind taking a look? We're probably missing one of the v6 sysctls in macvlan.

Comment 10 shuai.ma@veritas.com 2022-09-20 00:53:32 UTC
hi Matthew, Paul, 
we see two issues in this thread.

1. As titled, in a pure IPv4 macvlan network, a newly created container got a global IPv6 address. This is unexpected behaviour.

2. To work around issue #1, I tried the following parameters, but none of them had any effect.
net.ipv6.conf.all.accept_ra=0
net.ipv6.conf.all.autoconf=0
net.ipv6.conf.default.autoconf=0

Are you planning to fix this, or do you have a workaround for either of the above two issues? Our product is blocked by this issue.

BR, Shuai Ma

Comment 11 Paul Holzinger 2022-09-21 12:24:32 UTC
2) You cannot set sysctls because we set up the network namespace before the OCI runtime applies any sysctls. What you can try is running with a custom user namespace, e.g. --userns auto; in that case we have to set up the network namespace after the OCI runtime creates it.

I think we have to fix this so that netavark sets the sysctls automatically.
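
(For illustration, applying that suggestion to the reproducer from the description might look like the command below; this is only a sketch, where the image, network and address come from the original report, and --userns/--sysctl are standard podman-run options:)

podman run -itd --name c1 --userns auto \
    --sysctl net.ipv6.conf.default.autoconf=0 \
    --ip=10.85.41.247 --network nic0 flex.io/uss-engine:17.0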

Comment 12 shuai.ma@veritas.com 2022-09-21 13:49:03 UTC
I tried it like this, but it had no effect.
# docker run -itd --name c1 --ip=10.85.41.247 --network nic0 --sysctl net.ipv6.conf.all.autoconf=0 flex.io/uss-engine:17.0

I tried setting net.ipv6.conf.all.forwarding=1, and that did take effect. Are net.ipv6.conf.all.forwarding and net.ipv6.conf.all.autoconf applied at different phases?

BTW, when can we expect a fix for this?

Thanks a lot!

Comment 14 shuai.ma@veritas.com 2022-09-22 00:37:53 UTC
Please note that any 'docker' command here is actually 'podman', as we aliased Podman to Docker.

Comment 18 Brent Baude 2022-10-06 18:52:14 UTC
I'm hoping to clear up some of the confusion between teams.

The sysctl must be set before you run Podman or it will not affect things. Can you confirm whether you set the sysctl before running Podman with netavark? If not, please try it and report back.

The good news is that one of our engineers has been able to reproduce the bug you observed. One of us will update this Bugzilla with the results.

Comment 19 Brent Baude 2022-10-06 18:53:10 UTC
Update: disregard the suggestion for a workaround. We have verified it does not work.

Comment 20 Paul Holzinger 2022-10-06 19:16:23 UTC
As mentioned before, you have to create a custom userns; only then will the sysctl be applied before netavark is called.

This works for me: `podman run --uidmap=0:0:4294967295 --sysctl net.ipv6.conf.default.autoconf=0 ...`. The important bit is --uidmap=0:0:4294967295, but any other userns (e.g. --userns auto) should also work.

Comment 21 Brent Baude 2022-10-07 15:03:47 UTC
upstream fix -> https://github.com/containers/netavark/pull/434
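
(For context, the upstream change makes netavark turn off IPv6 autoconfiguration (SLAAC) on the container-side interface during network setup, which is also visible in the debug log in comment 38. Conceptually this corresponds to the following sysctl inside the container's network namespace, assuming the in-container interface is eth0:)

sysctl -w net.ipv6.conf.eth0.autoconf=0
# equivalently:
echo 0 > /proc/sys/net/ipv6/conf/eth0/autoconf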

Comment 24 shuai.ma@veritas.com 2022-10-08 03:28:40 UTC
I see the fix is in netavark; do you plan to add that fix to CNI as well?

To repeat my question: is netavark fully supported on RHEL 8.6?

BR, Shuai

Comment 27 Tom Sweeney 2022-10-10 20:06:28 UTC
@mrussell can you answer https://bugzilla.redhat.com/show_bug.cgi?id=2126243#c24 please?

Comment 29 mrussell@redhat.com 2022-10-12 18:16:58 UTC
@shuai.ma Netavark was Tech Preview in 8.6 but we are changing that to fully supported in the documentation soon. You can consider it supported in 8.6 now.

Comment 31 shuai.ma@veritas.com 2022-10-19 00:57:13 UTC
hi, 
our product is currently using the CNI backend instead of Netavark; can you provide the fix for CNI as well?

BR, Shuai

Comment 34 mrussell@redhat.com 2022-10-20 22:02:24 UTC
@shuai.ma That should be raised in a new bug; however, we are not planning to make this fix in CNI. We understand the workaround is being tested, with a plan to converge on Netavark in the future. Thank you and regards.

Comment 38 Joy Pu 2023-02-13 09:39:02 UTC
Tested with netavark-1.5.0-4.module+el8.8.0+18060+3f21f2cc.x86_64. IPv6 autoconf is already disabled inside the container, and a check inside the container shows only a link-local IPv6 address is set, so moving this to VERIFIED. More details:
[root@kvm-04-guest09 ~]# podman network create -d macvlan --subnet=10.85.40.0/21 --gateway=10.85.40.1 -o parent=ens3 ens3
ens3
[root@kvm-04-guest09 ~]# podman network inspect  ens3
[
     {
          "name": "ens3",
          "id": "0c738066b2f06e0cb663a6fa5a2d05d776e41e4f2d6841f64207c4692fee3e4e",
          "driver": "macvlan",
          "network_interface": "ens3",
          "created": "2023-02-13T04:31:17.569829276-05:00",
          "subnets": [
               {
                    "subnet": "10.85.40.0/21",
                    "gateway": "10.85.40.1"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": false,
          "ipam_options": {
               "driver": "host-local"
          }
     }
]
[root@kvm-04-guest09 ~]# podman --log-level debug run -itd --name c1 --ip=10.85.41.247 --network ens3 quay.io/libpod/busybox
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called run.PersistentPreRunE(podman --log-level debug run -itd --name c1 --ip=10.85.41.247 --network ens3 quay.io/libpod/busybox) 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /var/lib/containers/storage 
DEBU[0000] Using run root /run/containers/storage       
DEBU[0000] Using static dir /var/lib/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/libpod                    
DEBU[0000] Using volume path /var/lib/containers/storage/volumes 
DEBU[0000] Using transient store: false                 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that overlay is supported 
DEBU[0000] Cached value indicated that metacopy is being used 
DEBU[0000] Cached value indicated that native-diff is not being used 
INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled 
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true 
DEBU[0000] Initializing event backend file              
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument 
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument 
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument 
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument 
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument 
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument 
DEBU[0000] Using OCI runtime "/usr/bin/runc"            
INFO[0000] Setting parallel job count to 4              
DEBU[0000] Successfully loaded network dual: &{dual 165ffe44b63c23701966a3cf82e649be209e75d52e1881bd6a5c270a3c718f7c bridge podman3 2023-02-13 02:36:33.980492564 -0500 EST [{{{10.89.2.0 ffffff00}} 10.89.2.1 <nil>} {{{fdeb:da7c:92fd:8eef:: ffffffffffffffff0000000000000000}} fdeb:da7c:92fd:8eef::1 <nil>}] true false true [] map[] map[] map[driver:host-local]} 
DEBU[0000] Successfully loaded network ens3: &{ens3 0c738066b2f06e0cb663a6fa5a2d05d776e41e4f2d6841f64207c4692fee3e4e macvlan ens3 2023-02-13 04:31:17.569829276 -0500 EST [{{{10.85.40.0 fffff800}} 10.85.40.1 <nil>}] false false false [] map[] map[] map[driver:host-local]} 
DEBU[0000] Successfully loaded network test100: &{test100 c4971f987fb3ca7ac37ead96182f72a9e2410e074ad564f9316c36b11e52ac33 bridge podman1 2023-02-13 02:20:21.659441618 -0500 EST [{{{10.89.0.0 ffffff00}} 10.89.0.1 <nil>}] false false true [] map[] map[metric:100] map[driver:host-local]} 
DEBU[0000] Successfully loaded network test200: &{test200 e54facbdb6598ccca45bc5482791b04600f4393e864eb9c8d419ddca9a549340 bridge podman2 2023-02-13 02:20:35.520580531 -0500 EST [{{{10.89.1.0 ffffff00}} 10.89.1.1 <nil>}] false false true [] map[] map[metric:200] map[driver:host-local]} 
DEBU[0000] Successfully loaded 5 networks               
DEBU[0000] Pulling image quay.io/libpod/busybox (policy: missing) 
DEBU[0000] Looking up image "quay.io/libpod/busybox" in local containers storage 
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0000] Trying "quay.io/libpod/busybox:latest" ...   
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f" 
DEBU[0000] Found image "quay.io/libpod/busybox" as "quay.io/libpod/busybox:latest" in local containers storage 
DEBU[0000] Found image "quay.io/libpod/busybox" as "quay.io/libpod/busybox:latest" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f) 
DEBU[0000] exporting opaque data as blob "sha256:f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f" 
DEBU[0000] Looking up image "quay.io/libpod/busybox:latest" in local containers storage 
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0000] Trying "quay.io/libpod/busybox:latest" ...   
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f" 
DEBU[0000] Found image "quay.io/libpod/busybox:latest" as "quay.io/libpod/busybox:latest" in local containers storage 
DEBU[0000] Found image "quay.io/libpod/busybox:latest" as "quay.io/libpod/busybox:latest" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f) 
DEBU[0000] exporting opaque data as blob "sha256:f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f" 
DEBU[0000] Looking up image "quay.io/libpod/busybox" in local containers storage 
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux  [] } 
DEBU[0000] Trying "quay.io/libpod/busybox:latest" ...   
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f" 
DEBU[0000] Found image "quay.io/libpod/busybox" as "quay.io/libpod/busybox:latest" in local containers storage 
DEBU[0000] Found image "quay.io/libpod/busybox" as "quay.io/libpod/busybox:latest" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f) 
DEBU[0000] exporting opaque data as blob "sha256:f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f" 
DEBU[0000] Inspecting image f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f 
DEBU[0000] exporting opaque data as blob "sha256:f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f" 
DEBU[0000] exporting opaque data as blob "sha256:f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f" 
DEBU[0000] Inspecting image f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f 
DEBU[0000] Inspecting image f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f 
DEBU[0000] Inspecting image f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f 
DEBU[0000] Inspecting image f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f 
DEBU[0000] using systemd mode: false                    
DEBU[0000] setting container name c1                    
DEBU[0000] No hostname set; container's hostname will default to runtime default 
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json" 
DEBU[0000] Allocated lock 8 for container 1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c 
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f" 
DEBU[0000] exporting opaque data as blob "sha256:f0b02e9d092d905d0d87a8455a1ae3e9bb47b4aa3dc125125ca5cd10d6441c9f" 
DEBU[0000] Cached value indicated that idmapped mounts for overlay are not supported 
DEBU[0000] Check for idmapped mounts support            
DEBU[0000] Created container "1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c" 
DEBU[0000] Container "1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c" has work directory "/var/lib/containers/storage/overlay-containers/1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c/userdata" 
DEBU[0000] Container "1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c" has run directory "/run/containers/storage/overlay-containers/1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c/userdata" 
DEBU[0000] overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/WNEF277CUMEDPUVLH7KHH4V3UK,upperdir=/var/lib/containers/storage/overlay/fed4a754c7533300311548b8b3d4de58138c94c422819dcea9b1eb5076211fbf/diff,workdir=/var/lib/containers/storage/overlay/fed4a754c7533300311548b8b3d4de58138c94c422819dcea9b1eb5076211fbf/work,nodev,metacopy=on,context="system_u:object_r:container_file_t:s0:c366,c453" 
DEBU[0000] Mounted container "1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c" at "/var/lib/containers/storage/overlay/fed4a754c7533300311548b8b3d4de58138c94c422819dcea9b1eb5076211fbf/merged" 
DEBU[0000] Created root filesystem for container 1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c at /var/lib/containers/storage/overlay/fed4a754c7533300311548b8b3d4de58138c94c422819dcea9b1eb5076211fbf/merged 
DEBU[0000] Made network namespace at /run/netns/netns-58ad2df5-dfd5-6fcc-ae38-979a21d26571 for container 1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c 
[DEBUG netavark::network::validation] "Validating network namespace..."
[DEBUG netavark::commands::setup] "Setting up..."
[INFO  netavark::firewall] Using iptables firewall driver
[DEBUG netavark::network::vlan] Setup network ens3
[DEBUG netavark::network::vlan] Container interface name: eth0 with IP addresses [10.85.41.247/21]
[DEBUG netavark::network::core_utils] Setting sysctl value for /proc/sys/net/ipv6/conf/eth0/autoconf to 0
[INFO  netavark::network::netlink] Adding route (dest: 0.0.0.0/0 ,gw: 10.85.40.1, metric 100)
[INFO  netavark::commands::setup] dns disabled because aardvark-dns path does not exists
[DEBUG netavark::commands::setup] {
        "ens3": StatusBlock {
            dns_search_domains: Some(
                [],
            ),
            dns_server_ips: Some(
                [],
            ),
            interfaces: Some(
                {
                    "eth0": NetInterface {
                        mac_address: "2a:ed:b7:d3:8b:a1",
                        subnets: Some(
                            [
                                NetAddress {
                                    gateway: Some(
                                        10.85.40.1,
                                    ),
                                    ipnet: 10.85.41.247/21,
                                },
                            ],
                        ),
                    },
                },
            ),
        },
    }
[DEBUG netavark::commands::setup] "Setup complete"
DEBU[0000] Adding nameserver(s) from network status of '[]' 
DEBU[0000] Adding search domain(s) from network status of '[]' 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode subscription 
DEBU[0000] Setting Cgroups for container 1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c to machine.slice:libpod:1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c 
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d 
DEBU[0000] Workdir "/" resolved to host path "/var/lib/containers/storage/overlay/fed4a754c7533300311548b8b3d4de58138c94c422819dcea9b1eb5076211fbf/merged" 
DEBU[0000] Created OCI spec for container 1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c at /var/lib/containers/storage/overlay-containers/1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c/userdata/config.json 
DEBU[0000] /usr/bin/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/bin/conmon               args="[--api-version 1 -c 1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c -u 1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c/userdata -p /run/containers/storage/overlay-containers/1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c/userdata/pidfile -n c1 --exit-dir /run/libpod/exits --full-attach -s -l k8s-file:/var/lib/containers/storage/overlay-containers/1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c/userdata/ctr.log --log-level debug --syslog -t --conmon-pidfile /run/containers/storage/overlay-containers/1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /run/libpod --exit-command-arg --network-config-dir --exit-command-arg  --exit-command-arg --network-backend --exit-command-arg netavark --exit-command-arg --volumepath --exit-command-arg /var/lib/containers/storage/volumes --exit-command-arg --transient-store=false --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c]"
INFO[0000] Running conmon under slice machine.slice and unitName libpod-conmon-1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c.scope 
DEBU[0000] Received: 42456                              
INFO[0000] Got Conmon PID as 42448                      
DEBU[0000] Created container 1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c in OCI runtime 
DEBU[0000] Starting container 1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c with command [sh] 
DEBU[0000] Started container 1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c 
DEBU[0000] Notify sent successfully                     
1c403e41d9847cf99fd020a5b31d91e9ada4806bafc3eb80571ac5a40a88954c
DEBU[0000] Called run.PersistentPostRunE(podman --log-level debug run -itd --name c1 --ip=10.85.41.247 --network ens3 quay.io/libpod/busybox) 
DEBU[0000] Shutting down engines                        

[root@kvm-04-guest09 ~]# podman exec -ti c1 ifconfig
eth0      Link encap:Ethernet  HWaddr 2A:ED:B7:D3:8B:A1  
          inet addr:10.85.41.247  Bcast:10.85.47.255  Mask:255.255.248.0
          inet6 addr: fe80::28ed:b7ff:fed3:8ba1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:65 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:3902 (3.8 KiB)  TX bytes:516 (516.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
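
(A quick way to double-check the fix, sketched here under the assumption that the container interface is eth0, is to read back the sysctl that the debug log above shows netavark setting:)

podman exec c1 cat /proc/sys/net/ipv6/conf/eth0/autoconf   # expected to print 0 with the fixed netavark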

Comment 40 errata-xmlrpc 2023-05-16 08:21:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:2758

