Bug 1998835 - error loading cached network config: network "podman" not found in CNI cache
Summary: error loading cached network config: network "podman" not found in CNI cache
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: podman
Version: CentOS Stream
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Paul Holzinger
QA Contact: Alex Jia
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-08-29 09:30 UTC by morgan read
Modified: 2024-05-12 16:45 UTC (History)
CC List: 14 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-05-10 13:27:31 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments (Terms of Use)
output of terminal sessions leading to errors (543.60 KB, text/plain)
2021-08-29 09:30 UTC, morgan read


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-95502 0 None None None 2021-08-29 09:33:57 UTC
Red Hat Product Errata RHSA-2022:1762 0 None None None 2022-05-10 13:27:47 UTC

Description morgan read 2021-08-29 09:30:58 UTC
Created attachment 1818815 [details]
output of terminal sessions leading to errors

[I probably included more info than necessary - the useful dnf update info starts at line 698 and the useful podman info at line 3458]

Description of problem:
dnf upgrade seems to have broken podman

Version-Release number of selected component (if applicable):
podman-3.3.1-3.module_el8.5.0+911+f19012f9.x86_64
podman-3.3.0-0.17.module_el8.5.0+874+6db8bee3.x86_64

How reproducible:
Always; broken since the upgrade

Steps to Reproduce:
1. dnf upgrade
2. # podman container start X
   # podman pod start Y
3.
ERRO[0000] error loading cached network config: network "podman" not found in CNI cache 
WARN[0000] falling back to loading from existing plugins on disk 
ERRO[0000] Error tearing down partially created network namespace for container 2d2be3756348540e90e48474cd47006e5ac901af50ef30434d1795a5169c41ee: CNI network "podman" not found 
Error: unable to start container "2d2be3756348540e90e48474cd47006e5ac901af50ef30434d1795a5169c41ee": error configuring network namespace for container 2d2be3756348540e90e48474cd47006e5ac901af50ef30434d1795a5169c41ee: CNI network "podman" not found

ERRO[0000] error loading cached network config: network "podman" not found in CNI cache 
WARN[0000] falling back to loading from existing plugins on disk 
ERRO[0000] Error tearing down partially created network namespace for container c2785c5e43b0b9829de0e318655041683b5e46c4caf81d601e58c828b1963266: CNI network "podman" not found 
Error: error starting container c2785c5e43b0b9829de0e318655041683b5e46c4caf81d601e58c828b1963266: error configuring network namespace for container c2785c5e43b0b9829de0e318655041683b5e46c4caf81d601e58c828b1963266: CNI network "podman" not found
Error: error starting container 15d9fdbe53daea84929b76c4d325803341962f32b9832c250c140207e200751d: a dependency of container 15d9fdbe53daea84929b76c4d325803341962f32b9832c250c140207e200751d failed to start: container state improper
Error: error starting container 54fc0cc8f0829c30b17dcb500ad5060247808be2c2f21c3525586d24f85a57b4: a dependency of container 54fc0cc8f0829c30b17dcb500ad5060247808be2c2f21c3525586d24f85a57b4 failed to start: container state improper

Actual results:
As above

Expected results:
containers start

Additional info:
I came back to following this howto:
https://blog.while-true-do.io/podman-systemd-container-management/
from the 'Recreate' step, after an absence of a week or two, and thought to do a dnf upgrade before proceeding further (it seems from the attached output that I was prompted by wanting to install gedit - perhaps I initially wanted to carry out the investigation I did on the dnf errors with gedit instead of meld).
Unfortunately, doing the dnf upgrade seems to have broken podman, which has been a bit of an anticlimax... especially as podman is responsible for playing all the audio in the house...

Comment 1 morgan read 2021-08-29 09:48:13 UTC
[root@frontserver ~]# podman info --debug
host:
  arch: amd64
  buildahVersion: 1.22.3
  cgroupControllers:
  - cpuset
  - cpu
  - cpuacct
  - blkio
  - memory
  - devices
  - freezer
  - net_cls
  - perf_event
  - net_prio
  - hugetlb
  - pids
  - rdma
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.29-1.module_el8.5.0+890+6b136101.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: 84384406047fae626269133e1951c4b92eed7603'
  cpus: 2
  distribution:
    distribution: '"centos"'
    version: "8"
  eventLogger: file
  hostname: frontserver.lan
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-305.12.1.el8_4.centos.plus.x86_64
  linkmode: dynamic
  memFree: 582135808
  memTotal: 3350163456
  ociRuntime:
    name: runc
    package: runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.2
      spec: 1.0.2-dev
      go: go1.16.7
      libseccomp: 2.5.1
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.module_el8.5.0+890+6b136101.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: d361001f495417b880f20329121e3aa431a8f90f
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.1
  swapFree: 2145632256
  swapTotal: 2147479552
  uptime: 12h 1m 29.09s (Approximately 0.50 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 4
    paused: 0
    running: 0
    stopped: 4
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 6
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.3.1-dev
  Built: 1630096035
  BuiltTime: Fri Aug 27 21:27:15 2021
  GitCommit: ""
  GoVersion: go1.16.7
  OsArch: linux/amd64
  Version: 3.3.1-dev

[root@frontserver ~]#

Comment 2 Daniel Walsh 2021-08-29 13:26:35 UTC
I have the same situation on my Fedora 34 system.

Comment 3 Paul Holzinger 2021-08-30 11:18:52 UTC
I can reproduce this as well. Can you run `sudo rm /var/lib/containers/storage/libpod/defaultCNINetExists`? This should fix it for now.
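
A sketch of that workaround as a small script. The marker path is the one given in the comment; the explanation of the file's role is an assumption inferred from the fix description later in this bug (comment 6):

```shell
# Workaround sketch (assumed mechanism): podman v3.3 uses this marker file
# to record that the default CNI network config was already created. If the
# marker survives an upgrade while the config itself is lost, podman skips
# recreating the "podman" network and container start fails with
# 'network "podman" not found in CNI cache'.
marker=/var/lib/containers/storage/libpod/defaultCNINetExists

if [ -f "$marker" ]; then
    # Removing the marker makes podman recreate the default network
    # the next time it runs.
    rm -f "$marker"
    echo "removed stale marker: $marker"
else
    echo "marker not present: $marker"
fi
```

Run as root, since the path is under /var/lib; afterwards `podman container start` should succeed again.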

Comment 4 morgan read 2021-08-30 12:32:09 UTC
Thanks Paul, works for now.

Comment 5 Matthew LeSieur 2022-01-06 13:40:04 UTC
I ran into this bug too on a RHEL 8.5 server.  The bug likely happened when podman was upgraded to 3.3.1-9, but I'm not certain because the container on this server is rarely used. I resolved the problem by removing /var/lib/containers/storage/libpod/defaultCNINetExists as per https://github.com/containers/podman/issues/12651#issuecomment-997394699

# cat /etc/redhat-release 
Red Hat Enterprise Linux release 8.5 (Ootpa)

# rpm -qa --last | grep podman
podman-3.3.1-9.module+el8.5.0+12697+018f24d7.x86_64 Wed 08 Dec 2021 11:31:41 AM EST
podman-catatonit-3.3.1-9.module+el8.5.0+12697+018f24d7.x86_64 Wed 08 Dec 2021 11:31:39 AM EST

Comment 6 Paul Holzinger 2022-02-24 13:17:34 UTC
This should be fixed permanently with podman v4.0 since it will always have the default network in memory.
I am not sure exactly how this happened, but I think it was a packaging bug with podman v3.3. It should also work with 3.4.
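
A quick way to check the behaviour described here (a sketch, assuming podman is installed; the network name "podman" is the default one from this bug):

```shell
# On podman >= 4.0 the default "podman" network is kept in memory, so it
# should always appear in `podman network ls`, even if no CNI config files
# exist on disk.
if command -v podman >/dev/null 2>&1; then
    if podman network ls --format '{{.Name}}' | grep -qx podman; then
        msg="default network present"
    else
        msg="default network missing"
    fi
else
    msg="podman not installed; check skipped"
fi
echo "$msg"
```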

Comment 12 Alex Jia 2022-03-03 02:59:18 UTC
This bug has been verified on podman-4.0.2-1.module+el8.6.0+14379+4ec2a99a.


[root@sweetpig-21 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.6 Beta (Ootpa)

[root@sweetpig-21 ~]# rpm -q podman runc crun criu systemd kernel
podman-4.0.2-1.module+el8.6.0+14379+4ec2a99a.x86_64
runc-1.0.3-2.module+el8.6.0+14379+4ec2a99a.x86_64
crun-1.4.2-1.module+el8.6.0+14379+4ec2a99a.x86_64
criu-3.15-3.module+el8.6.0+14379+4ec2a99a.x86_64
systemd-239-58.el8.x86_64
kernel-4.18.0-369.el8.x86_64

[root@sweetpig-21 ~]# podman create registry.access.redhat.com/ubi8 bash
Trying to pull registry.access.redhat.com/ubi8:latest...
Getting image source signatures
Checking if image destination supports signatures
Copying blob 8dfe9326f733 done
Copying blob 0d875a68bf99 done
Copying config 52de04277b done
Writing manifest to image destination
Storing signatures
50128433a31387c23b02ea2b589107d6aeb53899b9b27f23fea9b37702f24ab3

[root@sweetpig-21 ~]# podman ps -a
CONTAINER ID  IMAGE                                   COMMAND     CREATED        STATUS      PORTS       NAMES
50128433a313  registry.access.redhat.com/ubi8:latest  bash        5 seconds ago  Created                 silly_mestorf

[root@sweetpig-21 ~]# podman container start 50128433a313
50128433a313

[root@sweetpig-21 ~]# podman ps -a
CONTAINER ID  IMAGE                                   COMMAND     CREATED        STATUS                    PORTS       NAMES
50128433a313  registry.access.redhat.com/ubi8:latest  bash        8 minutes ago  Exited (0) 8 seconds ago              silly_mestorf

[root@sweetpig-21 ~]# podman pod create
f5cdee4a0f513638f5be5c17f019d48b5cb1d6d91abcf0a593b6e4890d541c02

[root@sweetpig-21 ~]# podman pod ps
POD ID        NAME           STATUS      CREATED         INFRA ID      # OF CONTAINERS
f5cdee4a0f51  cranky_mclean  Created     23 seconds ago  41321b9c7f78  1

[root@sweetpig-21 ~]# podman pod start f5cdee4a0f51
f5cdee4a0f513638f5be5c17f019d48b5cb1d6d91abcf0a593b6e4890d541c02

[root@sweetpig-21 ~]# podman pod ps
POD ID        NAME           STATUS      CREATED             INFRA ID      # OF CONTAINERS
f5cdee4a0f51  cranky_mclean  Running     About a minute ago  41321b9c7f78  1

Comment 14 errata-xmlrpc 2022-05-10 13:27:31 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1762

