
Bug 1920698

Summary: podman resource agent logs spurious failed resource actions
Product: Red Hat Enterprise Linux 8
Component: resource-agents
Version: 8.4
Status: CLOSED ERRATA
Severity: medium
Priority: unspecified
Hardware: Unspecified
OS: Linux
Reporter: Scott Mayhew <smayhew>
Assignee: Oyvind Albrigtsen <oalbrigt>
QA Contact: cluster-qe <cluster-qe>
CC: agk, cfeist, cluster-maint, dciabrin, ekuris, fdinitto, jlibosva, mjuricek, phagara, supadhya
Keywords: Triaged, ZStream
Target Milestone: rc
Target Release: 8.0
Flags: pm-rhel: mirror+
Fixed In Version: resource-agents-4.1.1-91.el8
Clones: 1986273, 1986868, 2011934
Last Closed: 2021-11-09 17:26:02 UTC
Type: Bug
Bug Blocks: 1914911, 1914912, 1986273, 1986868, 2011934

Description Scott Mayhew 2021-01-26 22:08:23 UTC
Description of problem:
The podman resource agent logs spurious failed resource actions because it runs the monitor command on nodes where the container isn't running.

[root@fs-i24c-04 ~]# pcs status
Cluster name: nfs_cluster
Cluster Summary:
  * Stack: corosync
  * Current DC: fs-i24c-05-ic (version 2.0.4-6.el8_3.1-2deceaa3ae) - partition with quorum
  * Last updated: Tue Jan 26 15:54:33 2021
  * Last change:  Tue Jan 26 15:54:05 2021 by hacluster via crmd on fs-i24c-04-ic
  * 3 nodes configured
  * 21 resource instances configured

Node List:
  * Online: [ fs-i24c-04-ic fs-i24c-05-ic fs-i24c-06-ic ]

Full List of Resources:
  * fencer1	(stonith:fence_idrac):	 Started fs-i24c-05-ic
  * fencer2	(stonith:fence_idrac):	 Started fs-i24c-04-ic
  * fencer3	(stonith:fence_idrac):	 Started fs-i24c-04-ic
  * Resource Group: nfs1:
    * lvm_nfsdcld1	(ocf::heartbeat:LVM-activate):	 Started fs-i24c-06-ic
    * lvm_data1	(ocf::heartbeat:LVM-activate):	 Started fs-i24c-06-ic
    * fs_nfsdcld1	(ocf::heartbeat:Filesystem):	 Started fs-i24c-06-ic
    * fs_data1	(ocf::heartbeat:Filesystem):	 Started fs-i24c-06-ic
    * ip1	(ocf::heartbeat:IPaddr2):	 Started fs-i24c-06-ic
    * nfsd1	(ocf::heartbeat:podman):	 Started fs-i24c-06-ic
  * Resource Group: nfs2:
    * lvm_nfsdcld2	(ocf::heartbeat:LVM-activate):	 Started fs-i24c-05-ic
    * lvm_data2	(ocf::heartbeat:LVM-activate):	 Started fs-i24c-05-ic
    * fs_nfsdcld2	(ocf::heartbeat:Filesystem):	 Started fs-i24c-05-ic
    * fs_data2	(ocf::heartbeat:Filesystem):	 Started fs-i24c-05-ic
    * ip2	(ocf::heartbeat:IPaddr2):	 Started fs-i24c-05-ic
    * nfsd2	(ocf::heartbeat:podman):	 Started fs-i24c-05-ic
  * Resource Group: nfs3:
    * lvm_varlibnfs3	(ocf::heartbeat:LVM-activate):	 Started fs-i24c-04-ic
    * lvm_data3	(ocf::heartbeat:LVM-activate):	 Started fs-i24c-04-ic
    * fs_varlibnfs3	(ocf::heartbeat:Filesystem):	 Started fs-i24c-04-ic
    * fs_data3	(ocf::heartbeat:Filesystem):	 Started fs-i24c-04-ic
    * ip3	(ocf::heartbeat:IPaddr2):	 Started fs-i24c-04-ic
    * nfsd3	(ocf::heartbeat:podman):	 Started fs-i24c-04-ic

Failed Resource Actions:
  * nfsd1_monitor_0 on fs-i24c-04-ic 'error' (1): call=252, status='complete', exitreason='monitor cmd failed (rc=255), output: Error: can only create exec sessions on running containers: container state improper', last-rc-change='2021-01-26 15:54:16 -05:00', queued=0ms, exec=181ms
  * nfsd3_monitor_0 on fs-i24c-04-ic 'error' (1): call=300, status='complete', exitreason='monitor cmd failed (rc=255), output: Error: can only create exec sessions on running containers: container state improper', last-rc-change='2021-01-26 15:54:16 -05:00', queued=0ms, exec=111ms
  * nfsd2_monitor_0 on fs-i24c-05-ic 'error' (1): call=189, status='complete', exitreason='monitor cmd failed (rc=255), output: Error: can only create exec sessions on running containers: container state improper', last-rc-change='2021-01-26 15:54:16 -05:00', queued=0ms, exec=196ms
  * nfsd2_monitor_0 on fs-i24c-06-ic 'error' (1): call=297, status='complete', exitreason='monitor cmd failed (rc=255), output: Error: can only create exec sessions on running containers: container state improper', last-rc-change='2021-01-26 15:54:16 -05:00', queued=0ms, exec=168ms
  * nfsd3_monitor_0 on fs-i24c-06-ic 'error' (1): call=321, status='complete', exitreason='monitor cmd failed (rc=255), output: Error: can only create exec sessions on running containers: container state improper', last-rc-change='2021-01-26 15:54:16 -05:00', queued=0ms, exec=211ms

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

Version-Release number of selected component (if applicable):
resource-agents-4.1.1-68.el8.x86_64

How reproducible:
Easy

Steps to Reproduce:
1. Configure one or more podman resources (an illustrative 'pcs resource create' command is sketched below).
2. Trigger probe operations on nodes where the containers are not running, e.g. by restarting the cluster.
3. Run 'pcs status'.
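
As an illustration of step 1, a podman resource could be created as follows. This is a sketch only: the resource, image, and container names mirror the 'pcs status' output below, but the options shown are not the reporter's actual configuration.

# Create a podman resource; monitor_cmd is the command the agent runs
# inside the container via 'podman exec' during monitor operations.
pcs resource create nfsd1 ocf:heartbeat:podman \
    image=localhost/nfs1:latest name=nfs1 \
    monitor_cmd="/bin/true" --group nfs1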

Actual results:
'pcs status' shows a bunch of failed resource actions, even though the resources are running.  Neither 'pcs resource cleanup' nor 'pcs resource refresh' clears the failures.
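
For reference, cleanup attempts of the form below (the per-resource invocation is illustrative; the exact commands used were not recorded) do not clear the failed-action entries from 'pcs status':

pcs resource cleanup nfsd1
pcs resource refresh nfsd1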

Expected results:
No failed resource actions.

Additional info:

Originally I thought it was a bug for the podman resource agent to run 'podman exec' on a node without first making sure the container exists and is running, so I had hacked up a version that uses 'podman inspect' (like the docker resource agent). Then I dug through the git history and saw that the podman behavior was intentional (commit 6016283d, "podman: only use exec to manage container's lifecycle"), but that approach depends on 'podman exec' returning 126 when the container isn't running.
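
For illustration, the docker-style state check amounts to something like the following (a sketch, not the actual hacked-up agent; 'nfs1' is one of the containers shown in the 'podman ps -a' output below):

# Only exec into the container if podman reports it as running.
if [ "$(podman inspect --format '{{.State.Running}}' nfs1 2>/dev/null)" = "true" ]; then
    podman exec nfs1 /bin/true
else
    echo "container nfs1 is not running; skipping exec-based monitor"
fi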

It seems that podman v2 returns 255 for 'container state improper', e.g.

[root@fs-i24c-04 ~]# podman ps -a
CONTAINER ID  IMAGE                  COMMAND     CREATED      STATUS                          PORTS                                                 NAMES
341b41cdb0cd  localhost/nfs1:latest  /sbin/init  2 weeks ago  Exited (130) 4 days ago         10.16.229.14:2049->2049/tcp                           nfs1
008dcaff0a6a  localhost/nfs2:latest  /sbin/init  4 weeks ago  Exited (130) About an hour ago  10.16.229.15:2049->2049/tcp                           nfs2
d13fe9627456  localhost/nfs3:latest  /sbin/init  6 weeks ago  Up 47 minutes ago               10.16.229.16:111->111/tcp, 10.16.229.16:111->111/udp  nfs3
[root@fs-i24c-04 ~]# podman exec nfs1 /bin/true
Error: can only create exec sessions on running containers: container state improper
[root@fs-i24c-04 ~]# echo $?
255

So I suppose this could also be classified as a podman bug instead.

Comment 1 Oyvind Albrigtsen 2021-03-22 10:56:08 UTC
https://github.com/ClusterLabs/resource-agents/pull/1629
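
For context, the general idea is to treat a failed exec against a non-running container as "not running" rather than a generic monitor error, so that probes on nodes not hosting the container succeed. A minimal sketch under that assumption (not the actual patch; 'monitor_check' is a hypothetical helper name, while $CONTAINER, $OCF_RESKEY_monitor_cmd, ocf_log, and the OCF_* return codes come from the agent and the standard OCF shell functions):

monitor_check() {
    local out rc
    out=$(podman exec "$CONTAINER" $OCF_RESKEY_monitor_cmd 2>&1)
    rc=$?
    [ $rc -eq 0 ] && return $OCF_SUCCESS
    # podman v1 returned 126 for exec on a stopped container; podman v2
    # returns 255 with "container state improper" (see the description).
    if [ $rc -eq 126 ] || { [ $rc -eq 255 ] && echo "$out" | grep -q "container state improper"; }; then
        return $OCF_NOT_RUNNING
    fi
    ocf_log err "monitor cmd failed (rc=$rc), output: $out"
    return $OCF_ERR_GENERIC
}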

Comment 5 Dean Jansa 2021-04-08 17:06:05 UTC
An ON_QA bug without Verified:Tested should be in the MODIFIED state.

Comment 7 Dean Jansa 2021-04-15 08:00:48 UTC
An ON_QA bug without Verified:Tested should be in the MODIFIED state.

Comment 10 Damien Ciabrini 2021-07-09 12:47:38 UTC
*** Bug 1980735 has been marked as a duplicate of this bug. ***

Comment 11 Jakub Libosvar 2021-07-27 13:33:42 UTC
*** Bug 1919276 has been marked as a duplicate of this bug. ***

Comment 14 Chris Feist 2021-08-09 13:25:18 UTC
*** Bug 1986273 has been marked as a duplicate of this bug. ***

Comment 17 errata-xmlrpc 2021-11-09 17:26:02 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: resource-agents security, bug fix, and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:4139