Bug 2155828

Summary: podman rm leaves running container behind
Product: Red Hat Enterprise Linux 8
Version: 8.7
Component: podman
Severity: high
Priority: urgent
Status: CLOSED ERRATA
Reporter: Francois Andrieu <fandrieu>
Assignee: Jindrich Novy <jnovy>
QA Contact: Alex Jia <ajia>
CC: arajendr, bbaude, dornelas, dwalsh, gscrivan, jligon, jnovy, lsm5, mboddu, mheon, pthomas, tsweeney, umohnani, ypu
Target Milestone: rc
Keywords: Triaged, ZStream
Hardware: Unspecified
OS: Unspecified
Fixed In Version: podman-4.4.0-1.el8
Clones: 2158632, 2158635
Bug Blocks: 2158632, 2158635
Last Closed: 2023-05-16 08:22:23 UTC
Type: Bug

Description Francois Andrieu 2022-12-22 14:10:16 UTC
Description of problem:

Under some rare conditions, 'podman rm' succeeds but leaves the container running.

When a container is being stopped (or restarted) and its main process does not handle SIGTERM, podman falls back to SIGKILL after the default 10-second timeout.
If a 'podman rm' command runs during those 10 seconds, it succeeds but cancels the pending SIGKILL, leaving the container running behind.
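The race can be modeled with a small toy sketch (a hypothetical illustration in Python, not podman's actual code): 'stop' schedules the SIGKILL fallback on a timer, and a concurrent 'rm' that deregisters the container while cancelling that timer leaves the process orphaned.

```python
import threading
import time

class Container:
    """Toy model of the stop/rm race; names and logic are illustrative only."""
    def __init__(self):
        self.process_running = True   # the containerized process (e.g. sleep)
        self.registered = True        # container known to the engine
        self._kill_timer = None

    def stop(self, timeout=0.2):
        # SIGTERM is ineffective (the process ignores it), so schedule the
        # SIGKILL fallback after the stop timeout (10 s in real podman).
        self._kill_timer = threading.Timer(timeout, self._sigkill)
        self._kill_timer.start()

    def _sigkill(self):
        self.process_running = False

    def rm_buggy(self):
        # Models the reported bug: rm succeeds inside the stop window
        # and cancels the pending SIGKILL without killing the process.
        if self._kill_timer:
            self._kill_timer.cancel()
        self.registered = False

    def rm_fixed(self):
        # Models the expected behavior: ensure the process is dead
        # before removing the container record.
        if self._kill_timer:
            self._kill_timer.cancel()
        self._sigkill()
        self.registered = False

c = Container()
c.stop()
c.rm_buggy()               # rm lands inside the stop window
time.sleep(0.3)
print(c.registered, c.process_running)   # False True -> orphaned process

c2 = Container()
c2.stop()
c2.rm_fixed()
time.sleep(0.3)
print(c2.registered, c2.process_running)  # False False -> stopped and removed
```

The buggy path ends with the container gone from the engine's view while its process keeps running, which matches the pstree output in the reproduction below.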

Version-Release number of selected component (if applicable):
podman 4.2.0-4
conmon 2.1.4-1
runc 1.1.4-1

How reproducible:
easily reproducible (see next section)

Steps to Reproduce:
1. Create the container (systemd is also used here to restart the container automatically, which makes it easier to catch the 10-second window)
$ podman create  --name test_healthcheck --health-cmd /bin/false --health-interval 5s --health-retries 3 --health-timeout 1s --health-on-failure=stop --health-start-period 5s --stop-signal SIGTERM ubi8 sleep infinity
$ podman generate systemd test_healthcheck > /etc/systemd/system/test_healthcheck.service
$ systemctl daemon-reload

2. Start the container
$ systemctl start test_healthcheck.service
$ ps -ef |grep sleep
root        6691    6689  0 14:51 ?        00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep infinity
$ pstree -ps 6691
systemd(1)───conmon(6689)───sleep(6691)

3. Try to remove the container (this only works while inside the 10-second window after the first SIGTERM)
$ podman rm test_healthcheck
test_healthcheck
$ podman ps -a
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
<nothing>

Container has been removed, but the process is still running:

$ pstree -ps 6691
systemd(1)───conmon(6689)───sleep(6691)

Stopping the associated systemd unit does not help either:
$ systemctl stop test_healthcheck.service 
Dec 22 14:54:56 rhel9 systemd[1]: Stopping Podman container-4b8219383aa5f26e9b02b7be78a09791751b07861c0a9a7ae1899667661745a0.service...
Dec 22 14:54:56 rhel9 podman[6830]: Error: no container with name or ID "4b8219383aa5f26e9b02b7be78a09791751b07861c0a9a7ae1899667661745a0" found: no such container
Dec 22 14:54:56 rhel9 systemd[1]: test_healthcheck.service: Control process exited, code=exited, status=125/n/a
Dec 22 14:56:07 rhel9 systemd[1]: test_healthcheck.service: State 'stop-sigterm' timed out. Killing.
Dec 22 14:56:07 rhel9 systemd[1]: test_healthcheck.service: Killing process 6689 (conmon) with signal SIGKILL.
Dec 22 14:56:07 rhel9 systemd[1]: test_healthcheck.service: Main process exited, code=killed, status=9/KILL
Dec 22 14:56:07 rhel9 podman[6837]: Error: no container with name or ID "4b8219383aa5f26e9b02b7be78a09791751b07861c0a9a7ae1899667661745a0" found: no such container
Dec 22 14:56:07 rhel9 systemd[1]: test_healthcheck.service: Control process exited, code=exited, status=125/n/a
Dec 22 14:56:07 rhel9 systemd[1]: test_healthcheck.service: Failed with result 'exit-code'.
Dec 22 14:56:07 rhel9 systemd[1]: Stopped Podman container-4b8219383aa5f26e9b02b7be78a09791751b07861c0a9a7ae1899667661745a0.service.


$ pstree -ps 6691
systemd(1)───sleep(6691)


Actual results:
The container is still running, and the containerized process is orphaned.

Expected results:
The container is stopped and then removed.

Additional info:
The same issue occurs on both RHEL 8.7 with podman 4.2.0-1 and RHEL 9.2 with podman 4.3.1-3.

Comment 3 Giuseppe Scrivano 2023-01-03 15:48:56 UTC
Opened an upstream PR: https://github.com/containers/podman/pull/16978

Comment 31 Alex Jia 2023-02-13 06:58:22 UTC
This bug has been verified on podman-4.4.0-1.module+el8.8.0+18060+3f21f2cc.x86_64.

[root@kvm-02-guest12 ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux release 8.8 Beta (Ootpa)

[root@kvm-02-guest12 ~]# rpm -q podman runc systemd kernel
podman-4.4.0-1.module+el8.8.0+18060+3f21f2cc.x86_64
runc-1.1.4-1.module+el8.8.0+18060+3f21f2cc.x86_64
systemd-239-71.el8.x86_64
kernel-4.18.0-458.el8.x86_64

[root@kvm-02-guest12 ~]# (
>   podman run -d --name test_remove --health-cmd /bin/false --health-interval 5s --health-retries 1 --health-timeout 1s --health-on-failure=stop --health-start-period 5s --stop-signal SIGTERM ubi8 sleep infinity
>   sleep 8
>   ps -ef|grep sleep
>   podman ps -a
>   podman rm test_remove --force
> )
Resolved "ubi8" as an alias (/etc/containers/registries.conf.d/001-rhel-shortnames.conf)
Trying to pull registry.access.redhat.com/ubi8:latest...
Getting image source signatures
Checking if image destination supports signatures
Copying blob ea0a20a2c448 skipped: already exists  
Copying config 55e87c8c55 done  
Writing manifest to image destination
Storing signatures
d67e64ff7d3f5fc22b1d61918b7aa5734c41c153a2ba6ead9bbdaee022645413
root       43103   43095  0 01:55 ?        00:00:00 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep infinity
root       43171   10034  0 01:55 ?        00:00:00 sleep 5
root       43173   42992  0 01:55 pts/0    00:00:00 grep --color=auto sleep
CONTAINER ID  IMAGE                                   COMMAND         CREATED        STATUS                PORTS       NAMES
d67e64ff7d3f  registry.access.redhat.com/ubi8:latest  sleep infinity  8 seconds ago  Stopping (unhealthy)              test_remove
test_remove

[root@kvm-02-guest12 ~]# podman ps -a
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
[root@kvm-02-guest12 ~]# ps -fww 43103
UID          PID    PPID  C STIME TTY      STAT   TIME CMD

Comment 33 errata-xmlrpc 2023-05-16 08:22:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:2758