Bug 2131836 - PANIC podman API service endpoint handler panic
Summary: PANIC podman API service endpoint handler panic
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: podman
Version: 8.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Jindrich Novy
QA Contact: Edward Shen
URL:
Whiteboard:
Depends On:
Blocks: 2132412 2132413 2136287
 
Reported: 2022-10-03 21:11 UTC by navabharathi.gorantl
Modified: 2023-09-19 04:27 UTC
CC List: 15 users

Fixed In Version: podman-4.3.1-1.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 2132412 2132413 (view as bug list)
Environment:
Last Closed: 2023-05-16 08:22:22 UTC
Type: Bug
Target Upstream Version:
Embargoed:
pm-rhel: mirror+


Attachments
journalctl logs for podman API server (12.24 MB, text/plain), 2022-10-03 21:11 UTC, navabharathi.gorantl
compose yml (2.26 KB, text/plain), 2022-10-04 20:53 UTC, navabharathi.gorantl
/var/log/messages (376.34 KB, text/plain), 2022-10-14 18:50 UTC, navabharathi.gorantl


Links
GitHub containers/podman pull 16050: Merged, "Prevent nil pointer deref in GetImage", last updated 2022-11-28 08:57:29 UTC
Red Hat Issue Tracker RHELPLAN-135596: last updated 2022-10-03 21:16:20 UTC
Red Hat Product Errata RHSA-2023:2758: last updated 2023-05-16 08:23:42 UTC

Description navabharathi.gorantl 2022-10-03 21:11:41 UTC
Created attachment 1915831 [details]
journalctl logs for podman API server

Description of problem:
After a node power cycle, the podman API panics, resulting in errors when bringing up containers. journalctl logs for the podman API server are attached.


Version-Release number of selected component (if applicable):
podman-4.1.1


How reproducible:


Steps to Reproduce:
1. Create containers.
2. Power cycle the node. The power cycle happened at Fri Sep 30 01:32.
3. Containers are unable to start afterwards.

Actual results:
Containers fail to start


Expected results:
Containers should be ONLINE after node restart

Additional info:

[root@n58-h109 /]# last
hostadmi pts/2        10.98.181.193    Mon Oct  3 11:25   still logged in
hostadmi pts/1        10.98.181.193    Mon Oct  3 09:18   still logged in
hostadmi pts/1        10.98.180.194    Fri Sep 30 15:20 - 18:22  (03:01)
hostadmi pts/0        10.80.98.154     Fri Sep 30 02:40   still logged in
reboot   system boot  4.18.0-372.26.1. Fri Sep 30 01:32   still running
hostadmi pts/0        10.80.98.154     Fri Sep 30 00:21 - 01:28  (01:07)
hostadmi pts/0        172.28.9.253     Thu Sep 29 21:13 - 00:14  (03:00)
reboot   system boot  4.18.0-372.26.1. Thu Sep 29 20:50 - 01:28  (04:38)
reboot   system boot  4.18.0-372.26.1. Thu Sep 29 20:44 - 01:28  (04:44)
hostadmi pts/4        rsvlmvc01vm445.r Thu Sep 29 20:00 - 20:40  (00:39)
root     pts/5                         Wed Sep 28 20:21 - crash (1+00:23)
hostadmi pts/4        haadpf36f80a.com Wed Sep 28 20:21 - 20:27  (00:06)
root     pts/4                         Tue Sep 27 00:53 - 20:21 (1+19:27)
hostadmi pts/3        rsvlmvc01vm445.r Tue Sep 27 00:52 - 01:13  (00:20)
root     pts/5                         Mon Sep 26 23:15 - 20:21 (1+21:05)
hostadmi pts/4        rsvlmvc01vm445.r Mon Sep 26 23:15 - 23:28  (00:12)
hostadmi pts/4        172.28.9.250     Mon Sep 26 19:43 - 19:53  (00:10)
hostadmi pts/3        rsvlmvc01vm445.r Mon Sep 26 05:16 - 05:42  (00:25)
root     pts/0                         Mon Sep 26 04:44 - 04:44  (00:00)
root     pts/0                         Mon Sep 26 04:44 - 04:44  (00:00)
hostadmi tty1                          Mon Sep 26 04:38 - 20:40 (3+16:01)
reboot   system boot  3.10.0-1160.45.1 Mon Sep 26 03:23 - 01:28 (3+22:04)
reboot   system boot  3.10.0-1160.45.1 Mon Sep 26 03:20 - 01:28 (3+22:08)

wtmp begins Mon Sep 26 03:20:30 2022

Comment 1 Matthew Heon 2022-10-04 19:45:51 UTC
Can I ask what is bringing up the containers - is it the app talking to the Podman API? Podman itself does not restart containers after a reboot, so I'm curious.

Regardless, the panic is definitely an issue, and ought to be an easy/fast fix.

Comment 2 Brent Baude 2022-10-04 19:55:17 UTC
please attach your compose file in question.

Comment 3 Brent Baude 2022-10-04 20:20:21 UTC
This Bugzilla actually covers two problems. The first is a problem with an image layer after reboots; it is being actively worked on elsewhere and still needs to be confirmed. The second is that when a remote compat inspect is done on an image that has a problem, the error reporting dereferences a nil pointer while trying to get the image ID.

Fix for the panic upstream -> https://github.com/containers/podman/pull/16050

The problem exists in both 4.1 and 4.2

Lastly, as Matt says, there is no mechanism for having containers come up after a reboot unless systemd is being used.
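
For illustration, a minimal sketch of that systemd approach, assuming an existing container named "webapp" (the container name is only an example; the generated unit follows podman's container-<name>.service naming):

# generate a systemd unit that recreates and starts the container at boot
podman generate systemd --new --files --name webapp
cp container-webapp.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now container-webapp.service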

Comment 4 navabharathi.gorantl 2022-10-04 20:53:44 UTC
Created attachment 1916060 [details]
compose yml

Comment 5 navabharathi.gorantl 2022-10-04 20:54:34 UTC
We use VCS (Veritas Cluster Server), which brings up the containers after system start. The compose file is attached.

Comment 11 navabharathi.gorantl 2022-10-05 16:56:15 UTC
Can you suggest a workaround for now to get the system back to a working state?

Comment 12 Matthew Heon 2022-10-05 18:07:14 UTC
It's hard to say, since the panic is swallowing what I suspect to be the actual error. Is this happening on more than one system? Does it reproduce consistently?

If it is only one system: We recommend saving any data on the system, then doing a `podman system reset` to remove all images. It seems like the storage library was interrupted midway through writing an image by power loss, resulting in the image being corrupted.
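
A concrete sketch of that recommendation, where the backup destination is only an example and you should copy off whatever actually needs to be preserved first:

# back up anything that must survive the reset, e.g. named volumes
cp -a /var/lib/containers/storage/volumes /root/volumes-backup
# remove all containers, pods and images and reset podman storage to defaults
podman system reset --force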

Comment 13 navabharathi.gorantl 2022-10-05 22:39:41 UTC
This is a system test machine, and we have not heard of this happening on any other system yet. We cannot use "podman system reset" as it resets all configurations.

Comment 15 navabharathi.gorantl 2022-10-14 18:48:32 UTC
System test was able to reproduce the issue again. The latest logs are attached.

Comment 16 navabharathi.gorantl 2022-10-14 18:50:21 UTC
Created attachment 1918107 [details]
/var/log/messages

Comment 17 Tom Sweeney 2022-10-14 19:19:37 UTC
Brent, please see previous comment.

Comment 18 navabharathi.gorantl 2022-10-14 20:28:24 UTC
Also, we could not run "podman system reset". Is there any other way to get podman running again?

Error: error opening database /var/lib/containers/storage/libpod/bolt_state.db: open /var/lib/containers/storage/libpod/bolt_state.db: input/output error

Comment 20 Brent Baude 2022-10-16 13:57:31 UTC
The panic here is different from the previous one. I spent a little time looking at it this morning. I think this is wholly related to the DB being corrupted, and the calls to Info are part of the API, so to speak. @mheon, do you agree? And if so, should it be deferred to a new bug?

Comment 21 Matthew Heon 2022-10-17 13:13:07 UTC
It's definitely different, but this does not strike me as the underlying cause. It looks like something is causing Podman to try to lock a nil c/storage lock; none of those locks should ever be nil, so something is going wrong higher up the chain than the panic.

For a manual removal / reset of the system, you can remove `/var/lib/containers/storage` and `/var/run/libpod`. This should get things stabilized sufficiently to run `podman system reset`.
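
A sketch of that manual cleanup, assuming the Podman API socket and service are systemd-managed (unit names may differ per setup) and nothing else is holding the storage open:

# stop the API service so the state files are not in use
systemctl stop podman.socket podman.service
# remove container storage and libpod runtime state
rm -rf /var/lib/containers/storage /var/run/libpod
# then reinitialize podman
podman system reset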

Comment 22 navabharathi.gorantl 2022-10-17 18:12:50 UTC
I tried manually deleting '/var/lib/containers/storage' but could not, due to "Device or resource busy" errors:

[root@eagappflx040 containers]# rm -rf storage/
rm: cannot remove 'storage/overlay/dbeba618543dbe64859a7bddf4c14757a9d80bb0268493be760133765235bb6d/merged': Device or resource busy
rm: cannot remove 'storage/overlay/96f544efc96a41b8e13f4ea868ee42ef3b57c1c2ef681892aece4349119d52e6/merged': Device or resource busy
rm: cannot remove 'storage/overlay/8ec99d6acf1d1cac52a3800f32dc1e4eea7f4963694507b99f11c53f435aef07/merged': Device or resource busy
rm: cannot remove 'storage/overlay/fb15fc9745b6b8d801cf55756039b10b3419137eda983bd8d3373dfc97d8b096/merged': Device or resource busy
rm: cannot remove 'storage/overlay/81fc77b8480a3b4082bd6741a9541382f753b51c0d1b7d9235a19ac77b2ea45a/merged': Device or resource busy
rm: cannot remove 'storage/overlay/1d8807fcb9bdcdc9fce76996ddca9e7e23d7f7a63752c85f3fc2f0c7a20f0f67/merged': Device or resource busy
rm: cannot remove 'storage/overlay/2771138a08b8667213edb044b588704e481fa5457dfae6068264837d76bf4ab3/merged': Device or resource busy
rm: cannot remove 'storage/overlay/ae13097ea3e3058821985e23e0e6945edcbe25d58ccd541d0b49f38280e3e9cd/merged': Device or resource busy
rm: cannot remove 'storage/overlay-containers/bcbb498d5de15870e5d00097058008b549670c490a107fe2213a638a9558d506/userdata/shm': Device or resource busy
rm: cannot remove 'storage/overlay-containers/3ff041442067b22138bb93b7ea4130484b520bc5eae65c9fc3e3801165756679/userdata/shm': Device or resource busy
rm: cannot remove 'storage/overlay-containers/0c7c7faa696e69159ceed1b2649f5a00536b10e6aac1a0bdbe90f71398ffa0fa/userdata/shm': Device or resource busy
rm: cannot remove 'storage/overlay-containers/bbae8c62c21e4317d1a0d1af4db9ce5fe0d39269282195f4fb18df67b0c8fae3/userdata/shm': Device or resource busy
rm: cannot remove 'storage/overlay-containers/852db27b6583b1987d68f232c28969d065a73e7527172bf1a122038228337ea4/userdata/shm': Device or resource busy
rm: cannot remove 'storage/overlay-containers/80c7e2b97bdbd9a22ef118d0c29aafa4a1a23b2a2d942e8834dd5969cdee6d95/userdata/shm': Device or resource busy
rm: cannot remove 'storage/overlay-containers/5849a9d2f66f96f3e120c549091ad246c91befd4951144a06539c722c355bbe8/userdata/shm': Device or resource busy
rm: cannot remove 'storage/overlay-containers/2e3dcd37487aef2f20caa3c296c67bb4f4d404a6624c01ec380bfa2ad1ebb3e3/userdata/shm': Device or resource busy

Comment 23 Daniel Walsh 2022-10-18 13:23:37 UTC
Looks like you have a lot of running containers or at least leaked overlay mount points. 

mount | grep overlay-container

And then unmount those mount points.
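
Put together, that cleanup can be done in one pass, broadened slightly to also catch the leaked overlay "merged" mounts shown in comment 22 (this assumes the default /var/lib/containers/storage location and mount paths without spaces):

# unmount leaked shm and overlay mounts left under container storage
mount | grep containers/storage/overlay | awk '{print $3}' | xargs -r umount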

Comment 24 mangesh.panche 2022-11-22 23:15:26 UTC
Today we ran into another set of podman panic issues during power cycle on multiple setups.

In all cases, it ran into issues while running

docker network ls | grep macvlan | awk '{print $2}'

On two setups we see a similar error:

<30>1 2022-11-22T04:04:40.309+00:00 nbapp837 infra-network-control.sh 11201 - - panic: invalid freelist page: 5918356779941454958, page type is branch
<30>1 2022-11-22T04:04:40.309+00:00 nbapp837 infra-network-control.sh 11201 - - goroutine 1 [running]:
<30>1 2022-11-22T04:04:40.338+00:00 nbapp837 infra-network-control.sh 11201 - - panic({0x55a4df2c6460, 0xc0003c8110})
<30>1 2022-11-22T04:04:40.338+00:00 nbapp837 infra-network-control.sh 11201 - - #011/usr/lib/golang/src/runtime/panic.go:1147 +0x3a8 fp=0xc0004cfb28 sp=0xc0004cfa68 pc=0x55a4dde990e8
<30>1 2022-11-22T04:04:40.347+00:00 nbapp837 infra-network-control.sh 11201 - - github.com/containers/podman/vendor/go.etcd.io/bbolt.(*freelist).read(0x55a4ded74981, 0x7f01e453c000)
<30>1 2022-11-22T04:04:40.347+00:00 nbapp837 infra-network-control.sh 11201 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/vendor/go.etcd.io/bbolt/freelist.go:266 +0x234 fp=0xc0004cfbc8 sp=0xc0004cfb28 pc=0x55a4de634e94
<30>1 2022-11-22T04:04:40.347+00:00 nbapp837 infra-network-control.sh 11201 - - github.com/containers/podman/vendor/go.etcd.io/bbolt.(*DB).loadFreelist.func1()
<30>1 2022-11-22T04:04:40.348+00:00 nbapp837 infra-network-control.sh 11201 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/vendor/go.etcd.io/bbolt/db.go:323 +0xae fp=0xc0004cfbf8 sp=0xc0004cfbc8 pc=0x55a4de62fe0e
<30>1 2022-11-22T04:04:40.348+00:00 nbapp837 infra-network-control.sh 11201 - - sync.(*Once).doSlow(0xc000d261c8, 0x10)
<30>1 2022-11-22T04:04:40.348+00:00 nbapp837 infra-network-control.sh 11201 - - #011/usr/lib/golang/src/sync/once.go:68 +0xd2 fp=0xc0004cfc60 sp=0xc0004cfbf8 pc=0x55a4dded76f2
<30>1 2022-11-22T04:04:40.348+00:00 nbapp837 infra-network-control.sh 11201 - - sync.(*Once).Do(...)
<30>1 2022-11-22T04:04:40.348+00:00 nbapp837 infra-network-control.sh 11201 - - #011/usr/lib/golang/src/sync/once.go:59
<30>1 2022-11-22T04:04:40.348+00:00 nbapp837 infra-network-control.sh 11201 - - github.com/containers/podman/vendor/go.etcd.io/bbolt.(*DB).loadFreelist(0xc000d26000)
<30>1 2022-11-22T04:04:40.348+00:00 nbapp837 infra-network-control.sh 11201 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/vendor/go.etcd.io/bbolt/db.go:316 +0x47 fp=0xc0004cfc90 sp=0xc0004cfc60 pc=0x55a4de62fd27
<30>1 2022-11-22T04:04:40.348+00:00 nbapp837 infra-network-control.sh 11201 - - github.com/containers/podman/vendor/go.etcd.io/bbolt.Open({0xc0002000f0, 0x30}, 0xdeda7550, 0x0)
<30>1 2022-11-22T04:04:40.348+00:00 nbapp837 infra-network-control.sh 11201 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/vendor/go.etcd.io/bbolt/db.go:293 +0x46b fp=0xc0004d0d68 sp=0xc0004cfc90 pc=0x55a4de62fa8b
<30>1 2022-11-22T04:04:40.365+00:00 nbapp837 infra-network-control.sh 11201 - - github.com/containers/podman/libpod.NewBoltState({0xc0002000f0, 0x30}, 0xc000c99380)
<30>1 2022-11-22T04:04:40.365+00:00 nbapp837 infra-network-control.sh 11201 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/libpod/boltdb_state.go:77 +0x152 fp=0xc0004d0f80 sp=0xc0004d0d68 pc=0x55a4dea61b92
<30>1 2022-11-22T04:04:40.365+00:00 nbapp837 infra-network-control.sh 11201 - - github.com/containers/podman/libpod.makeRuntime(0xc000c99380)
<30>1 2022-11-22T04:04:40.365+00:00 nbapp837 infra-network-control.sh 11201 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/libpod/runtime.go:325 +0x189 fp=0xc0004d13b0 sp=0xc0004d0f80 pc=0x55a4deb28989
<30>1 2022-11-22T04:04:40.365+00:00 nbapp837 infra-network-control.sh 11201 - - github.com/containers/podman/libpod.newRuntimeFromConfig(0xc000cc7c00, {0xc0004d17b0, 0x0, 0x55a4df5115b0})
<30>1 2022-11-22T04:04:40.365+00:00 nbapp837 infra-network-control.sh 11201 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/libpod/runtime.go:227 +0x3d7 fp=0xc0004d16d0 sp=0xc0004d13b0 pc=0x55a4deb281d7
<30>1 2022-11-22T04:04:40.365+00:00 nbapp837 infra-network-control.sh 11201 - - github.com/containers/podman/libpod.NewRuntime({0xc000c7ca00, 0x55a4ded770cb}, {0xc0004d17b0, 0x0, 0x0})
<30>1 2022-11-22T04:04:40.365+00:00 nbapp837 infra-network-control.sh 11201 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/libpod/runtime.go:170 +0x51 fp=0xc0004d1700 sp=0xc0004d16d0 pc=0x55a4deb27db1
<30>1 2022-11-22T04:04:40.366+00:00 nbapp837 infra-network-control.sh 11201 - - github.com/containers/podman/pkg/domain/infra.getRuntime({0x55a4df560498, 0xc000040048}, 0xc000c7ca00, 0xc0004d1a80)
<30>1 2022-11-22T04:04:40.366+00:00 nbapp837 infra-network-control.sh 11201 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/pkg/domain/infra/runtime_libpod.go:254 +0x19b7 fp=0xc0004d1a60 sp=0xc0004d1700 pc=0x55a4dec28437
<30>1 2022-11-22T04:04:40.366+00:00 nbapp837 infra-network-control.sh 11201 - - github.com/containers/podman/pkg/domain/infra.GetRuntime.func1()
<30>1 2022-11-22T04:04:40.366+00:00 nbapp837 infra-network-control.sh 11201 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/pkg/domain/infra/runtime_libpod.go:80 +0x49 fp=0xc0004d1ab0 sp=0xc0004d1a60 pc=0x55a4dec26a09 



<30>1 2022-11-22T20:10:23.219+00:00 nbapp817 infra-network-control.sh 192536 - - panic: invalid freelist page: 66, page type is leaf
<30>1 2022-11-22T20:10:23.219+00:00 nbapp817 infra-network-control.sh 192536 - - goroutine 1 [running]:
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - panic({0x55d5c5777460, 0xc00026e920})
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - #011/usr/lib/golang/src/runtime/panic.go:1147 +0x3a8 fp=0xc000a05b28 sp=0xc000a05a68 pc=0x55d5c434a0e8
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - github.com/containers/podman/vendor/go.etcd.io/bbolt.(*freelist).read(0x55d5c5225981, 0x7f7f34263000)
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/vendor/go.etcd.io/bbolt/freelist.go:266 +0x234 fp=0xc000a05bc8 sp=0xc000a05b28 pc=0x55d5c4ae5e94
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - github.com/containers/podman/vendor/go.etcd.io/bbolt.(*DB).loadFreelist.func1()
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/vendor/go.etcd.io/bbolt/db.go:323 +0xae fp=0xc000a05bf8 sp=0xc000a05bc8 pc=0x55d5c4ae0e0e
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - sync.(*Once).doSlow(0xc0001d7608, 0x10)
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - #011/usr/lib/golang/src/sync/once.go:68 +0xd2 fp=0xc000a05c60 sp=0xc000a05bf8 pc=0x55d5c43886f2
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - sync.(*Once).Do(...)
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - #011/usr/lib/golang/src/sync/once.go:59
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - github.com/containers/podman/vendor/go.etcd.io/bbolt.(*DB).loadFreelist(0xc0001d7440)
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/vendor/go.etcd.io/bbolt/db.go:316 +0x47 fp=0xc000a05c90 sp=0xc000a05c60 pc=0x55d5c4ae0d27
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - github.com/containers/podman/vendor/go.etcd.io/bbolt.Open({0xc0001c7650, 0x30}, 0xc5258550, 0x0)
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/vendor/go.etcd.io/bbolt/db.go:293 +0x46b fp=0xc000a06d68 sp=0xc000a05c90 pc=0x55d5c4ae0a8b
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - github.com/containers/podman/libpod.NewBoltState({0xc0001c7650, 0x30}, 0xc000f696c0)
<30>1 2022-11-22T20:10:23.220+00:00 nbapp817 infra-network-control.sh 192536 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/libpod/boltdb_state.go:77 +0x152 fp=0xc000a06f80 sp=0xc000a06d68 pc=0x55d5c4f12b92
<30>1 2022-11-22T20:10:23.221+00:00 nbapp817 infra-network-control.sh 192536 - - github.com/containers/podman/libpod.makeRuntime(0xc000f696c0)
<30>1 2022-11-22T20:10:23.221+00:00 nbapp817 infra-network-control.sh 192536 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/libpod/runtime.go:325 +0x189 fp=0xc000a073b0 sp=0xc000a06f80 pc=0x55d5c4fd9989
<30>1 2022-11-22T20:10:23.221+00:00 nbapp817 infra-network-control.sh 192536 - - github.com/containers/podman/libpod.newRuntimeFromConfig(0xc000fa7c00, {0xc000a077b0, 0x0, 0x55d5c59c25b0})
<30>1 2022-11-22T20:10:23.221+00:00 nbapp817 infra-network-control.sh 192536 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/libpod/runtime.go:227 +0x3d7 fp=0xc000a076d0 sp=0xc000a073b0 pc=0x55d5c4fd91d7
<30>1 2022-11-22T20:10:23.221+00:00 nbapp817 infra-network-control.sh 192536 - - github.com/containers/podman/libpod.NewRuntime({0xc000f5ca00, 0x55d5c52280cb}, {0xc000a077b0, 0x0, 0x0})
<30>1 2022-11-22T20:10:23.221+00:00 nbapp817 infra-network-control.sh 192536 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/libpod/runtime.go:170 +0x51 fp=0xc000a07700 sp=0xc000a076d0 pc=0x55d5c4fd8db1
<30>1 2022-11-22T20:10:23.221+00:00 nbapp817 infra-network-control.sh 192536 - - github.com/containers/podman/pkg/domain/infra.getRuntime({0x55d5c5a11498, 0xc000040048}, 0xc000f5ca00, 0xc000a07a80)
<30>1 2022-11-22T20:10:23.221+00:00 nbapp817 infra-network-control.sh 192536 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/pkg/domain/infra/runtime_libpod.go:254 +0x19b7 fp=0xc000a07a60 sp=0xc000a07700 pc=0x55d5c50d9437
<30>1 2022-11-22T20:10:23.221+00:00 nbapp817 infra-network-control.sh 192536 - - github.com/containers/podman/pkg/domain/infra.GetRuntime.func1()
<30>1 2022-11-22T20:10:23.221+00:00 nbapp817 infra-network-control.sh 192536 - - #011/builddir/build/BUILD/containers-podman-7f5e2fd/_build/src/github.com/containers/podman/pkg/domain/infra/runtime_libpod.go:80 +0x49 fp=0xc000a07ab0 sp=0xc000a07a60 pc=0x55d5c50d7a09 


On two setups we see the following error:

<30>1 2022-11-22T03:58:14.521+00:00 nbapp836 infra-network-control.sh 10637 - - Error: container 3ef225efde16ba8c3395134ca80ae8908b9d57de38a42e450b1363e7e20c2438 missing state in DB: internal libpod error
<30>1 2022-11-22T03:58:14.522+00:00 nbapp836 podman 12681 - - Error: container 3ef225efde16ba8c3395134ca80ae8908b9d57de38a42e450b1363e7e20c2438 missing state in DB: internal libpod error
<29>1 2022-11-22T03:58:14.527+00:00 nbapp836 systemd 1 - - podman.service: Main process exited, code=exited, status=125/n/a
<28>1 2022-11-22T03:58:14.527+00:00 nbapp836 systemd 1 - - podman.service: Failed with result 'exit-code'.
https://access.redhat.com/support/cases/#/case/03330920/discussion?commentId=a0a6R00000U4HJFQA3

Comment 25 Tom Sweeney 2022-11-23 14:42:22 UTC
@bbaude please see comment: https://bugzilla.redhat.com/show_bug.cgi?id=2131836#c24

Comment 26 Jindrich Novy 2022-11-28 09:01:56 UTC
I suggest filing another bug if the original issue was resolved and a new one was found.

Comment 27 mangesh.panche 2022-11-28 20:56:36 UTC
The original issue also referred to boltdb corruption, which caused the panic. It looks like that has not been addressed, hence we are seeing this issue.

Please let us know if this needs to be opened as a separate issue. Also, we would like to know whether there is any workaround available for this issue.

Comment 28 mangesh.panche 2022-11-28 21:14:36 UTC
I have created a separate bugzilla 2149112 for this issue.

Comment 34 errata-xmlrpc 2023-05-16 08:22:22 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:2758

Comment 35 Red Hat Bugzilla 2023-09-19 04:27:37 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days

