Bug 2100740
| Summary: | podman can not force remove paused container | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 8 | Reporter: | Joy Pu <ypu> |
| Component: | podman | Assignee: | Jindrich Novy <jnovy> |
| Status: | CLOSED ERRATA | QA Contact: | Alex Jia <ajia> |
| Severity: | medium | Priority: | unspecified |
| Version: | 8.6 | CC: | ajia, bbaude, dwalsh, gscrivan, jligon, jnovy, lsm5, mheon, pthomas, santiago, tsweeney, umohnani |
| Target Milestone: | rc | Keywords: | Regression, Triaged |
| Hardware: | Unspecified | OS: | Unspecified |
| Fixed In Version: | podman-4.2.0-1.el8 | Type: | Bug |
| Last Closed: | 2022-11-08 09:16:27 UTC | | |
Description

Joy Pu
2022-06-24 07:15:36 UTC

The same steps work with runc-1.0.3-2.module+el8.6.0+15634+64c2e3db.x86_64. I think the "$RUNTIME resume $CTR" call should not fail in any case.

---

The error reproduces easily on 1MT-RHEL-8.6.0-updates-20220614.0 with MB 15659. Cgroups is v1. After rebooting into cgroups v2, the error does not reproduce.

---

The issue is caused by this block in removeContainer():

```go
isV2, err := cgroups.IsCgroup2UnifiedMode()
if err != nil {
    return err
}
// cgroups v1 and v2 handle signals on paused processes differently
if !isV2 {
    if err := c.unpause(); err != nil {
        return err
    }
}
```

Matt, what do you suggest? Do we move the unpause before the KillContainer call, or do we drop it completely and let the OCI runtime deal with it (in which case I'd have to change crun)?

---

The intent of killing before unpausing was, AFAIK, to ensure that containers acting maliciously (e.g. a forkbomb) could be paused and removed without resuming execution. Changing the order of the operations would seem to defeat that, so relying on the OCI runtime to behave correctly seems best. It would be nicer if we could either ignore that error from runc, or be conditional about which runtimes require the unpause, but that seems like an overly complicated solution.

---

Opened a PR: https://github.com/containers/podman/pull/14765

---

(In reply to Giuseppe Scrivano from comment #7)
> opened a PR: https://github.com/containers/podman/pull/14765

This PR works for me.

```shell
[root@kvm-05-guest11 podman]# git pull origin pull/14765/head
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Total 4 (delta 3), reused 4 (delta 3), pack-reused 0
Unpacking objects: 100% (4/4), 1004 bytes | 502.00 KiB/s, done.
From https://github.com/containers/podman
 * branch            refs/pull/14765/head -> FETCH_HEAD
Updating 653e87dd4..1affceb29
Fast-forward
 libpod/runtime_ctr.go | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)
[root@kvm-05-guest11 podman]# make podman
CGO_ENABLED=1 GOOS=linux GOARCH=amd64 go build \
    -ldflags '-X github.com/containers/podman/v4/libpod/define.gitCommit=1affceb29f197b0c3dca13391b8eef36e39a175b -X github.com/containers/podman/v4/libpod/define.buildInfo=1656471486 -X github.com/containers/podman/v4/libpod/config._installPrefix=/usr/local -X github.com/containers/podman/v4/libpod/config._etcDir=/usr/local/etc -X github.com/containers/common/pkg/config.additionalHelperBinariesDir= ' \
    -tags " selinux systemd exclude_graphdriver_devicemapper seccomp" \
    -o bin/podman ./cmd/podman
[root@kvm-05-guest11 podman]# ./bin/podman run --name foo -d quay.io/libpod/testimage:20210610 top
17f1031a6d54cd592401fe6b6b513de1d9ed272a0ea8f628153a2f2e95f6a52a
[root@kvm-05-guest11 podman]# ./bin/podman pause foo
17f1031a6d54cd592401fe6b6b513de1d9ed272a0ea8f628153a2f2e95f6a52a
[root@kvm-05-guest11 podman]# ./bin/podman ps -a --filter name=foo
CONTAINER ID  IMAGE                              COMMAND  CREATED         STATUS  PORTS  NAMES
17f1031a6d54  quay.io/libpod/testimage:20210610  top      19 seconds ago  Paused         foo
[root@kvm-05-guest11 podman]# ./bin/podman rm -f foo
17f1031a6d54cd592401fe6b6b513de1d9ed272a0ea8f628153a2f2e95f6a52a
[root@kvm-05-guest11 podman]# ./bin/podman ps -a --filter name=foo
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
```

---

This was not yet backported into v4.1.1-rhel, so it won't make RHEL 8.6.0.2. Will push this out to RHEL 8.7.

---

This bug has been verified on podman-4.2.0-1.module+el8.7.0+16493+89f82ab8.x86_64.
```shell
[root@hpe-dl380pgen8-02-vm-7 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.7 Beta (Ootpa)
[root@hpe-dl380pgen8-02-vm-7 ~]# rpm -q podman runc systemd kernel
podman-4.2.0-1.module+el8.7.0+16493+89f82ab8.x86_64
runc-1.1.4-1.module+el8.7.0+16493+89f82ab8.x86_64
systemd-239-65.el8.x86_64
kernel-4.18.0-422.el8.x86_64
[root@hpe-dl380pgen8-02-vm-7 ~]# podman run --name foo -d quay.io/libpod/testimage:20210610 top
Trying to pull quay.io/libpod/testimage:20210610...
Getting image source signatures
Copying blob 9afcdfe780b4 done
Copying config 9f9ec7f2fd done
Writing manifest to image destination
Storing signatures
e7ae48c5f1838b10daea10ccfbd835397b36f23371c63ca6e1438df8295512fd
[root@hpe-dl380pgen8-02-vm-7 ~]# podman pause foo
e7ae48c5f1838b10daea10ccfbd835397b36f23371c63ca6e1438df8295512fd
[root@hpe-dl380pgen8-02-vm-7 ~]# podman ps -a --filter name=foo
CONTAINER ID  IMAGE                              COMMAND  CREATED         STATUS  PORTS  NAMES
e7ae48c5f183  quay.io/libpod/testimage:20210610  top      10 seconds ago  Paused         foo
[root@hpe-dl380pgen8-02-vm-7 ~]# podman rm -f foo
e7ae48c5f1838b10daea10ccfbd835397b36f23371c63ca6e1438df8295512fd
[root@hpe-dl380pgen8-02-vm-7 ~]# podman ps -a --filter name=foo
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
```

---

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: container-tools:rhel8 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:7457

---

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 120 days.
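As the thread notes, the failure is specific to cgroups v1, where a frozen (paused) process does not act on signals until the cgroup is thawed; the cgroups v2 freezer keeps frozen tasks killable. A minimal sketch for checking which mode a host is in before trying to reproduce, based on the filesystem type mounted at /sys/fs/cgroup (a common heuristic, not podman's own detection code):

```shell
# On a cgroups v2 (unified hierarchy) host, /sys/fs/cgroup is a single
# cgroup2fs mount; on a v1 host it is typically a tmpfs holding
# per-controller hierarchies. The bug above reproduces only on v1.
fstype=$(stat -fc %T /sys/fs/cgroup)
if [ "$fstype" = "cgroup2fs" ]; then
    echo "cgroups v2"
else
    echo "cgroups v1"
fi
```

Recent podman releases also report the cgroup version directly in the `host` section of `podman info` output, which avoids relying on the mount layout.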