Bug 2136283 - [RFE] python-podman: Podman support to perform custom actions on unhealthy containers [rhel-9.1.0.z]
Summary: [RFE] python-podman: Podman support to perform custom actions on unhealthy containers [rhel-9.1.0.z]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 9
Classification: Red Hat
Component: python-podman
Version: 9.2
Hardware: Unspecified
OS: Linux
Priority: urgent
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Jindrich Novy
QA Contact: Alex Jia
URL:
Whiteboard:
Depends On: 2131741 2132360
Blocks:
 
Reported: 2022-10-19 19:51 UTC by Tom Sweeney
Modified: 2022-12-06 13:51 UTC
CC List: 22 users

Fixed In Version: python-podman-4.2.1-1.el9_1
Doc Type: Enhancement
Doc Text:
Clone Of: 2132360
Environment:
Last Closed: 2022-11-15 16:00:11 UTC
Type: ---
Target Upstream Version:
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker RHELPLAN-137094 0 None None None 2022-10-19 19:58:34 UTC
Red Hat Product Errata RHBA-2022:8439 0 None None None 2022-11-15 16:00:13 UTC

Comment 1 Tom Sweeney 2022-10-19 19:53:15 UTC
Setting to POST and assigning to @jnovy. The fix from the cloned BZ in RHEL 8.7.0.ZeroDay will apply here too.

@ypu and Jindrich, can I get a QA and Dev ack please?

Comment 9 Alex Jia 2022-10-25 08:45:51 UTC
[test@kvm-08-guest24 ~]$ podman inspect gallant_mahavira|grep -iA2 Healthcheck
               "Healthcheck": {
                    "Test": [
                         "CMD-SHELL curl http://localhost || exit"
--
               "HealthcheckOnFailureAction": "invalid",
               "Umask": "0022",
               "Timeout": 0,

NOTE: the value of HealthcheckOnFailureAction is 'invalid' here; please help confirm
whether that is acceptable. Details follow, thanks!

[test@kvm-08-guest24 ~]$ cat /etc/redhat-release
Red Hat Enterprise Linux release 9.1 (Plow)

[test@kvm-08-guest24 ~]$ rpm -q python3-podman podman crun systemd kernel
python3-podman-4.2.1-1.el9_1.noarch
podman-4.2.0-5.el9_1.x86_64
crun-1.5-1.el9.x86_64
systemd-250-12.el9_1.x86_64
kernel-5.14.0-162.6.1.el9_1.x86_64

[test@kvm-08-guest24 ~]$ podman system service -t 0 &
[1] 21616

[test@kvm-08-guest24 ~]$ netstat -lanp|grep podman.sock
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
unix  2      [ ACC ]     STREAM     LISTENING     50341    21629/podman         /run/user/1000/podman/podman.sock

[test@kvm-08-guest24 ~]$ cat test.py 
"""Demonstrate PodmanClient."""
import json
from podman import PodmanClient

alpine_image = "quay.io/libpod/alpine:latest"
uri = "unix:///run/user/1000/podman/podman.sock"

def test_container_healthchecks():
    """Test passing various healthcheck options"""
    with PodmanClient(base_url=uri) as client:

        containers = []
        parameters = {}

        version = client.version()
        print("Release: ", version["Version"])
        print("Compatible API: ", version["ApiVersion"])
        print("Podman API: ", version["Components"][0]["Details"]["APIVersion"], "\n")

        parameters['healthcheck'] = {'Test': ['CMD-SHELL curl http://localhost || exit']}
        parameters['health_check_on_failure_action'] = 1
        container = client.containers.create(alpine_image, **parameters)
        print("current container:%s" % container)
        containers.append(container)


if __name__ == "__main__":
    test_container_healthchecks()

[test@kvm-08-guest24 ~]$ podman ps -a
CONTAINER ID  IMAGE       COMMAND     CREATED     STATUS      PORTS       NAMES
[test@kvm-08-guest24 ~]$ podman pull quay.io/libpod/alpine:latest
Trying to pull quay.io/libpod/alpine:latest...
Getting image source signatures
Copying blob 9d16cba9fb96 done  
Copying config 9617696764 done  
Writing manifest to image destination
Storing signatures
961769676411f082461f9ef46626dd7a2d1e2b2a38e6a44364bcbecf51e66dd4

[test@kvm-08-guest24 ~]$ python3 test.py
Release:  4.2.0
Compatible API:  1.41
Podman API:  4.2.0 

current container:<Container: d1c762ab95>

[test@kvm-08-guest24 ~]$ podman ps -a
CONTAINER ID  IMAGE                         COMMAND     CREATED         STATUS      PORTS       NAMES
d1c762ab957f  quay.io/libpod/alpine:latest  /bin/sh     15 seconds ago  Created                 gallant_mahavira

[test@kvm-08-guest24 ~]$ podman inspect gallant_mahavira|grep -iA2 Healthcheck
               "Healthcheck": {
                    "Test": [
                         "CMD-SHELL curl http://localhost || exit"
--
               "HealthcheckOnFailureAction": "invalid",
               "Umask": "0022",
               "Timeout": 0,

NOTE: the value of HealthcheckOnFailureAction is still 'invalid' here; please help confirm whether it's acceptable, thanks!

Comment 10 Charlie Doern 2022-10-25 12:11:24 UTC
I think this is expected given https://github.com/containers/podman/blob/7e7db23dbf163837ba3216fea09b31d2c8409fb3/libpod/define/healthchecks.go#L71-L82: specifying 1 matches up with the "invalid" iota. Specgen is more permissive here, which lets users pick choices that don't necessarily help them. Can you try testing with 2, @ajia?
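
For reference, the integer passed as health_check_on_failure_action maps onto the on-failure actions defined in the linked healthchecks.go. Below is a minimal Python sketch of that mapping: values 1 ("invalid") and 2 ("kill") are confirmed by the podman inspect output in this bug, while the remaining entries are assumptions read off that file and may differ between Podman versions.

# Illustrative mapping of the health_check_on_failure_action integer to the
# action name reported by "podman inspect". 1 and 2 are confirmed by the
# output above; 0, 3 and 4 are assumed from libpod/define/healthchecks.go.
HEALTH_CHECK_ON_FAILURE_ACTIONS = {
    0: "none",     # take no action on an unhealthy status (default)
    1: "invalid",  # placeholder value, not a usable policy
    2: "kill",     # kill the container when it turns unhealthy
    3: "restart",  # assumed: restart the container
    4: "stop",     # assumed: stop the container
}

if __name__ == "__main__":
    action = 2  # the value suggested above; maps to "kill"
    print("health_check_on_failure_action=%d -> %s"
          % (action, HEALTH_CHECK_ON_FAILURE_ACTIONS[action]))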

Comment 11 Alex Jia 2022-10-27 02:45:22 UTC
(In reply to Charlie Doern from comment #10)
> I think this is expected given
> https://github.com/containers/podman/blob/7e7db23dbf163837ba3216fea09b31d2c8409fb3/libpod/define/healthchecks.go#L71-L82:
> specifying 1 matches up with the "invalid" iota. Specgen is more permissive
> here, which lets users pick choices that don't necessarily help them.
> Can you try testing with 2, @ajia?

Test results are as expected when setting the health_check_on_failure_action value to 2, thanks a lot!

[test@kvm-08-guest24 ~]$ grep health_check_on_failure_action test.py 
        parameters['health_check_on_failure_action'] = 2

[test@kvm-08-guest24 ~]$ podman inspect romantic_jang|grep -iA2 Healthcheck
               "Healthcheck": {
                    "Test": [
                         "CMD-SHELL curl http://localhost || exit"
--
               "HealthcheckOnFailureAction": "kill",
               "Umask": "0022",
               "Timeout": 0,

Comment 12 Alex Jia 2022-10-27 02:49:22 UTC
This bug has been verified on python-podman-4.2.1-1.el9_1.

Comment 16 errata-xmlrpc 2022-11-15 16:00:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (python-podman bug fix and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:8439

