Bug 1998835
| Summary: | error loading cached network config: network "podman" not found in CNI cache | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | morgan read <mstuff> |
| Component: | podman | Assignee: | Paul Holzinger <pholzing> |
| Status: | CLOSED ERRATA | QA Contact: | Alex Jia <ajia> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | CentOS Stream | CC: | bbaude, bstinson, ddarrah, dwalsh, jligon, jnovy, jwboyer, lsm5, matthew.lesieur, mheon, pthomas, tsweeney, umohnani, ypu |
| Target Milestone: | rc | Keywords: | Reopened |
| Target Release: | --- | Flags: | pm-rhel: mirror+ |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-05-10 13:27:31 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | | | |
Description (morgan read, 2021-08-29 09:30:58 UTC):
```
[root@frontserver ~]# podman info --debug
host:
arch: amd64
buildahVersion: 1.22.3
cgroupControllers:
- cpuset
- cpu
- cpuacct
- blkio
- memory
- devices
- freezer
- net_cls
- perf_event
- net_prio
- hugetlb
- pids
- rdma
cgroupManager: systemd
cgroupVersion: v1
conmon:
package: conmon-2.0.29-1.module_el8.5.0+890+6b136101.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.0.29, commit: 84384406047fae626269133e1951c4b92eed7603'
cpus: 2
distribution:
distribution: '"centos"'
version: "8"
eventLogger: file
hostname: frontserver.lan
idMappings:
gidmap: null
uidmap: null
kernel: 4.18.0-305.12.1.el8_4.centos.plus.x86_64
linkmode: dynamic
memFree: 582135808
memTotal: 3350163456
ociRuntime:
name: runc
package: runc-1.0.2-1.module_el8.5.0+911+f19012f9.x86_64
path: /usr/bin/runc
version: |-
runc version 1.0.2
spec: 1.0.2-dev
go: go1.16.7
libseccomp: 2.5.1
os: linux
remoteSocket:
exists: true
path: /run/podman/podman.sock
security:
apparmorEnabled: false
capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: false
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: true
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.1.8-1.module_el8.5.0+890+6b136101.x86_64
version: |-
slirp4netns version 1.1.8
commit: d361001f495417b880f20329121e3aa431a8f90f
libslirp: 4.4.0
SLIRP_CONFIG_VERSION_MAX: 3
libseccomp: 2.5.1
swapFree: 2145632256
swapTotal: 2147479552
uptime: 12h 1m 29.09s (Approximately 0.50 days)
registries:
search:
- registry.fedoraproject.org
- registry.access.redhat.com
- registry.centos.org
- docker.io
store:
configFile: /etc/containers/storage.conf
containerStore:
number: 4
paused: 0
running: 0
stopped: 4
graphDriverName: overlay
graphOptions:
overlay.mountopt: nodev,metacopy=on
graphRoot: /var/lib/containers/storage
graphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "true"
imageStore:
number: 6
runRoot: /var/run/containers/storage
volumePath: /var/lib/containers/storage/volumes
version:
APIVersion: 3.3.1-dev
Built: 1630096035
BuiltTime: Fri Aug 27 21:27:15 2021
GitCommit: ""
GoVersion: go1.16.7
OsArch: linux/amd64
Version: 3.3.1-dev
[root@frontserver ~]#
```
I have the same situation on my Fedora 34 system.

I can reproduce this as well. Can you run `sudo rm /var/lib/containers/storage/libpod/defaultCNINetExists`? This should fix it for now.

Thanks Paul, works for now.

I ran into this bug too on a RHEL 8.5 server. The bug likely happened when podman was upgraded to 3.3.1-9, but I'm not certain because the container on this server is rarely used. I resolved the problem by removing /var/lib/containers/storage/libpod/defaultCNINetExists as per https://github.com/containers/podman/issues/12651#issuecomment-997394699

```
# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.5 (Ootpa)
# rpm -qa --last | grep podman
podman-3.3.1-9.module+el8.5.0+12697+018f24d7.x86_64 Wed 08 Dec 2021 11:31:41 AM EST
podman-catatonit-3.3.1-9.module+el8.5.0+12697+018f24d7.x86_64 Wed 08 Dec 2021 11:31:39 AM EST
```

This should be fixed permanently with podman v4.0, since it will always have the default network in memory. I am not sure how exactly this happened, but I think this was a packaging bug with podman v3.3. It should also work with 3.4.

This bug has been verified on podman-4.0.2-1.module+el8.6.0+14379+4ec2a99a.

```
[root@sweetpig-21 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.6 Beta (Ootpa)
[root@sweetpig-21 ~]# rpm -q podman runc crun criu systemd kernel
podman-4.0.2-1.module+el8.6.0+14379+4ec2a99a.x86_64
runc-1.0.3-2.module+el8.6.0+14379+4ec2a99a.x86_64
crun-1.4.2-1.module+el8.6.0+14379+4ec2a99a.x86_64
criu-3.15-3.module+el8.6.0+14379+4ec2a99a.x86_64
systemd-239-58.el8.x86_64
kernel-4.18.0-369.el8.x86_64
[root@sweetpig-21 ~]# podman create registry.access.redhat.com/ubi8 bash
Trying to pull registry.access.redhat.com/ubi8:latest...
Getting image source signatures
Checking if image destination supports signatures
Copying blob 8dfe9326f733 done
Copying blob 0d875a68bf99 done
Copying config 52de04277b done
Writing manifest to image destination
Storing signatures
50128433a31387c23b02ea2b589107d6aeb53899b9b27f23fea9b37702f24ab3
[root@sweetpig-21 ~]# podman ps -a
CONTAINER ID  IMAGE                                   COMMAND  CREATED        STATUS   PORTS  NAMES
50128433a313  registry.access.redhat.com/ubi8:latest  bash     5 seconds ago  Created         silly_mestorf
[root@sweetpig-21 ~]# podman container start 50128433a313
50128433a313
[root@sweetpig-21 ~]# podman ps -a
CONTAINER ID  IMAGE                                   COMMAND  CREATED        STATUS                    PORTS  NAMES
50128433a313  registry.access.redhat.com/ubi8:latest  bash     8 minutes ago  Exited (0) 8 seconds ago         silly_mestorf
[root@sweetpig-21 ~]# podman pod create
f5cdee4a0f513638f5be5c17f019d48b5cb1d6d91abcf0a593b6e4890d541c02
[root@sweetpig-21 ~]# podman pod ps
POD ID        NAME           STATUS   CREATED         INFRA ID      # OF CONTAINERS
f5cdee4a0f51  cranky_mclean  Created  23 seconds ago  41321b9c7f78  1
[root@sweetpig-21 ~]# podman pod start f5cdee4a0f51
f5cdee4a0f513638f5be5c17f019d48b5cb1d6d91abcf0a593b6e4890d541c02
[root@sweetpig-21 ~]# podman pod ps
POD ID        NAME           STATUS   CREATED             INFRA ID      # OF CONTAINERS
f5cdee4a0f51  cranky_mclean  Running  About a minute ago  41321b9c7f78  1
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:1762
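For reference, a minimal sketch of the workaround discussed in the comments above, for affected podman 3.3.x hosts. It assumes the stale defaultCNINetExists marker file is what triggers the `network "podman" not found in CNI cache` error; the container name in the last step is illustrative, not part of this report.

```
# Check whether the stale marker file left behind by the podman 3.3 packaging issue is present.
ls -l /var/lib/containers/storage/libpod/defaultCNINetExists

# Remove it so podman recreates its record of the default "podman" CNI network.
sudo rm /var/lib/containers/storage/libpod/defaultCNINetExists

# Retry the affected container (name is illustrative).
sudo podman start mycontainer

# On podman 4.0 and later the default network is always kept in memory, so the
# marker file is no longer relevant; "podman network ls" should list "podman".
podman network ls
```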