Bug 1841485 - podman exec is fragile in the presence of signals
Summary: podman exec is fragile in the presence of signals
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 8
Classification: Red Hat
Component: podman
Version: 8.2
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Jindrich Novy
QA Contact: Alex Jia
URL:
Whiteboard:
Depends On:
Blocks: 1186913 1823899 1913294 1913295
 
Reported: 2020-05-29 09:28 UTC by Michele Baldessari
Modified: 2024-03-25 15:59 UTC
CC List: 16 users

Fixed In Version: podman-2.1.1-4.el8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1913294 1913295
Environment:
Last Closed: 2021-05-18 15:32:55 UTC
Type: Bug
Target Upstream Version:
Embargoed:


Attachments: (none)


Links
System: Red Hat Knowledge Base (Solution)   ID: 5336961   Private: 0   Priority: None   Status: None   Summary: None   Last Updated: 2020-08-21 01:15:04 UTC

Description Michele Baldessari 2020-05-29 09:28:19 UTC
Description of problem:
When something kills a running podman exec, we leave a stray ExecID for that container, which then makes it impossible to remove the container without an "rm -f".

Podman:
$ timeout 5 podman exec redis-bundle-podman-2 sleep 10
$ podman inspect redis-bundle-podman-2 | grep Exec -A3
        "ExecIDs": [
            "be9319d125048742241cf4e4534ac8ac52c856aaf94eac51f181119bb7c4e902"
        ],
        "GraphDriver": {
$ podman rm redis-bundle-podman-2
Error: cannot remove container 7ff933b3b4317bf2e05de5cccc799e8b1596b994aa3afa199fd0ca74e4b9c044 as it has active exec sessions: container state improper
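
Until a fixed build is available, a minimal workaround sketch (the --format template below is an assumption for checking the field shown above, not something taken from the original report):

# Check whether the container is carrying a stale exec session.
$ podman inspect --format '{{.ExecIDs}}' redis-bundle-podman-2

# Force removal succeeds even with the stale exec session recorded.
$ podman rm -f redis-bundle-podman-2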

Version-Release number of selected component (if applicable):
podman-1.6.4-11.module+el8.2.0+6369+1f4293b4.x86_64

Comment 2 Matthew Heon 2020-05-29 13:12:26 UTC
This is already fixed on master via https://github.com/containers/libpod/pull/4692/files

Comment 3 Daniel Walsh 2020-05-29 13:54:40 UTC
Fixed in podman 2.0, which will ship in the RHEL 8.3 release.

Comment 4 Tom Sweeney 2020-05-29 18:20:10 UTC
Closing as this is fixed in an upcoming release.

Comment 6 Tom Sweeney 2020-08-20 14:27:40 UTC
Assigning to Jindrich so he can handle any packaging needs.

Comment 25 Laurie Friedman 2020-09-15 18:08:16 UTC
Mike Burns - Please confirm that we have agreement from OSP to delay 1841485 until RHEL 8.4, Apr-2021.  It is not clear from the recent comments that everyone on OSP has agreed that this is OK.

Comment 32 Alex Jia 2020-12-17 11:42:47 UTC
I can't find the fixed version podman-2.1.1-4 in the Brew system; please update
the Fixed In Version field if that build indeed doesn't exist.
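
For reference, one way to look the build up from a shell, assuming the brew (koji) client is installed and configured; this command is an illustration, not something from the original comment:

# Search the build system for builds matching the NVR from "Fixed In Version".
$ brew search build 'podman-2.1.1-4*'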

I can reproduce this bug on podman-1.6.4-11.module+el8.2.0+6369+1f4293b4
and verify it on podman-2.1.1-5.module+el8.3.1+8448+932c72cf.
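
The run.sh script used below is not attached to the bug; the following is a sketch reconstructed from the sh -x traces, so treat it as an approximation:

#!/bin/sh
# Reconstructed reproducer: interrupt exec sessions with a signal, then try to remove the containers.
IMAGE=quay.io/libpod/alpine

podman run --name myctr1 -td $IMAGE
podman run --name myctr2 -td $IMAGE

# Case 1: timeout kills the exec, then the container is killed before removal.
timeout 5 podman exec myctr1 sleep 10
podman kill myctr1
podman inspect myctr1 | grep Exec -A3
podman rm myctr1

# Case 2: same, but the container is stopped instead of killed.
timeout 5 podman exec myctr2 sleep 10
podman stop myctr2
podman inspect myctr2 | grep Exec -A3
podman rm myctr2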

1. Reproducing the bug on podman-1.6.4-11.module+el8.2.0+6369+1f4293b4

[root@kvm-06-guest10 ~]# rpm -q podman runc
podman-1.6.4-11.module+el8.2.0+6369+1f4293b4.x86_64
runc-1.0.0-64.rc10.module+el8.2.0+6369+1f4293b4.x86_64


[root@kvm-06-guest10 ~]# sh -x run.sh
+ IMAGE=quay.io/libpod/alpine
+ podman run --name myctr1 -td quay.io/libpod/alpine
Trying to pull quay.io/libpod/alpine...
Getting image source signatures
Copying blob 9d16cba9fb96 done
Copying config 9617696764 done
Writing manifest to image destination
Storing signatures
1c5e2b019aa2e01be333ecfe895164a6f57dbc4a4fab04e1d03120b499201b32
+ podman run --name myctr2 -td quay.io/libpod/alpine
74ee0ea7f6de64ca0dc53b674fe1ac033fed7ea1501fed833de36d61a143c096
+ timeout 5 podman exec myctr1 sleep 10
+ podman kill myctr1
1c5e2b019aa2e01be333ecfe895164a6f57dbc4a4fab04e1d03120b499201b32
+ grep Exec -A3
+ podman inspect myctr1
        "ExecIDs": [
            "dcdbd910bf7563bd1f931c994aa519a3e0d3dd8e21dcd8c9be388f46dc254953"
        ],
        "GraphDriver": {
+ podman rm myctr1
Error: cannot remove container 1c5e2b019aa2e01be333ecfe895164a6f57dbc4a4fab04e1d03120b499201b32 as it has active exec sessions: container state improper
+ timeout 5 podman exec myctr2 sleep 10
+ podman stop myctr2
74ee0ea7f6de64ca0dc53b674fe1ac033fed7ea1501fed833de36d61a143c096
+ podman inspect myctr2
+ grep Exec -A3
        "ExecIDs": [
            "9f7d3657dd03e858396e2ff74edab32c6d243b0b6ec8340492e867768a5063e2"
        ],
        "GraphDriver": {
+ podman rm myctr2
Error: cannot remove container 74ee0ea7f6de64ca0dc53b674fe1ac033fed7ea1501fed833de36d61a143c096 as it has active exec sessions: container state improper


2. Verifying the fix on podman-2.1.1-5.module+el8.3.1+8448+932c72cf

[root@kvm-06-guest10 ~]# rpm -q podman runc
podman-2.1.1-5.module+el8.3.1+8448+932c72cf.x86_64
runc-1.0.0-68.rc92.module+el8.3.1+8448+932c72cf.x86_64

[root@kvm-06-guest10 ~]# sh -x run.sh
+ IMAGE=quay.io/libpod/alpine
+ podman run --name myctr1 -td quay.io/libpod/alpine
Trying to pull quay.io/libpod/alpine...
Getting image source signatures
Copying blob 9d16cba9fb96 done
Copying config 9617696764 done
Writing manifest to image destination
Storing signatures
0d4dd3f61276db9132214a854a2c50616b899c67f94147bf683afa2bee9bb98f
+ podman run --name myctr2 -td quay.io/libpod/alpine
c76f12ffd6a07f190f1f4389bd41b5905d4be1d70e2abb8f1a993a1252f21f25
+ timeout 5 podman exec myctr1 sleep 10
+ podman kill myctr1
0d4dd3f61276db9132214a854a2c50616b899c67f94147bf683afa2bee9bb98f
+ grep Exec -A3
+ podman inspect myctr1
        "ExecIDs": [
            "0c2c8116c20ea5ef90ebddbd3caff3c8e4648c28dab8357fd65fdd2e463d74e4"
        ],
        "GraphDriver": {
+ podman rm myctr1
0d4dd3f61276db9132214a854a2c50616b899c67f94147bf683afa2bee9bb98f
+ timeout 5 podman exec myctr2 sleep 10
+ podman stop myctr2
c76f12ffd6a07f190f1f4389bd41b5905d4be1d70e2abb8f1a993a1252f21f25
+ podman inspect myctr2
+ grep Exec -A3
        "ExecIDs": [
            "8594724a29d5ef1e4d93ec3c8c5824ab003b0ae20ea2e5afeaa437b8863fde35"
        ],
        "GraphDriver": {
+ podman rm myctr2
c76f12ffd6a07f190f1f4389bd41b5905d4be1d70e2abb8f1a993a1252f21f25

Comment 38 errata-xmlrpc 2021-05-18 15:32:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:1796

