
Bug 2133497

Summary: pgsql agent fails at cluster shutdown due to crm_mon regression
Product: Red Hat Enterprise Linux 8
Reporter: Ken Gaillot <kgaillot>
Component: pacemaker
Assignee: Reid Wahl <nwahl>
Status: CLOSED ERRATA
QA Contact: cluster-qe <cluster-qe>
Severity: urgent
Docs Contact: Steven J. Levine <slevine>
Priority: urgent
Version: 8.7
CC: byodlows, cfeist, cluster-maint, jobaker, msmazova, nwahl, sanyadav, slevine
Target Milestone: rc
Keywords: CustomerScenariosInitiative, Regression, Triaged, ZStream
Target Release: 8.8
Flags: pm-rhel: mirror+
Hardware: All
OS: All
Fixed In Version: pacemaker-2.1.5-1.el8
Doc Type: Bug Fix
Doc Text:
.Cluster resources that call `crm_mon` now stop cleanly at shutdown
Previously, the `crm_mon` utility returned a nonzero exit status while Pacemaker was in the process of shutting down. As a result, resource agents that call `crm_mon` in their monitor action, such as `ocf:heartbeat:pgsql`, could incorrectly return a failure at cluster shutdown. With this fix, `crm_mon` returns success even if the cluster is in the process of shutting down, and resources that call `crm_mon` now stop cleanly at cluster shutdown.
Clones: 2133546, 2133830 (view as bug list)
Last Closed: 2023-05-16 08:35:22 UTC
Type: Bug
Target Upstream Version: 2.1.5
Bug Blocks: 2133546, 2133830

Description Ken Gaillot 2022-10-10 15:32:55 UTC
Description of problem: If the ocf:heartbeat:pgsql resource agent runs its monitor action while Pacemaker is in the process of shutting down, the monitor incorrectly reports an error: the agent calls crm_mon, which returns a nonzero exit status during shutdown.
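
The failing pattern is roughly the following: the agent's monitor action shells out to crm_mon and treats any nonzero exit as a failure. A minimal sketch (not the actual ocf:heartbeat:pgsql code; ocf_log, OCF_SUCCESS, and OCF_ERR_GENERIC come from the standard OCF shell helpers that resource agents source):

    # Sketch of a monitor action that shells out to crm_mon:
    monitor_sketch() {
        local output
        if ! output=$(crm_mon -1 2>&1); then
            # Pre-fix, crm_mon exited nonzero while Pacemaker was shutting
            # down, so this branch fired and the monitor reported a failure.
            ocf_log err "crm_mon failed: $output"
            return "$OCF_ERR_GENERIC"
        fi
        return "$OCF_SUCCESS"
    }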


Version-Release number of selected component (if applicable): RHEL 8.7 packages


How reproducible: reliably


Steps to Reproduce:
1. Configure a cluster with an ocf:heartbeat:pgsql resource that has a frequent monitor. It helps to also configure a slow-stopping resource (e.g. Dummy with a stop delay) to make the problem more likely to occur; see the sketch after step 2.
2. Shut down the cluster.
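
A rough sketch of these steps, modeled on the verification commands in comment 9 below (the pgsql resource options are illustrative; a real deployment needs parameters matching the local PostgreSQL installation):

    # Frequently monitored pgsql resource (options illustrative):
    pcs resource create pgsql ocf:heartbeat:pgsql op monitor interval=10s
    # A slow-stopping Dummy widens the shutdown window (as in comment 9):
    pcs resource create dummy ocf:pacemaker:Dummy op_sleep=10
    # Shut the cluster down; the pgsql monitor can fire while Pacemaker
    # is stopping, and pre-fix crm_mon exits nonzero at that point:
    pcs cluster stop --all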

Actual results: The pgsql resource's monitor action fails.


Expected results: The cluster shuts down cleanly.


Additional info: This is a regression in the 8.7 packages.

Comment 5 Ken Gaillot 2022-10-12 21:42:28 UTC
Fixed in upstream main branch as of commit 2d4a36c0f
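
For reference, the commit can be inspected in a local clone of the upstream Pacemaker repository:

    # Assumes a clone of https://github.com/ClusterLabs/pacemaker
    git show 2d4a36c0f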

Comment 8 Reid Wahl 2022-12-05 00:00:27 UTC
*** Bug 2150674 has been marked as a duplicate of this bug. ***

Comment 9 Markéta Smazová 2022-12-19 12:17:29 UTC
Using reproducer from: https://bugzilla.redhat.com/show_bug.cgi?id=1948620#c36


before fix
----------

>   [root@virt-524 ~]# rpm -q pacemaker
>   pacemaker-2.1.4-5.el8.x86_64

Create a resource that takes long enough to stop:

>   [root@virt-524 ~]# pcs resource create dummy ocf:pacemaker:Dummy op_sleep=10

Wait until it is started:

>   [root@virt-524 ~]# pcs status
>   Cluster name: STSRHTS25361
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-530 (version 2.1.4-5.el8-dc6eb4362e) - partition with quorum
>     * Last updated: Mon Dec 19 11:20:27 2022
>     * Last change:  Mon Dec 19 11:20:04 2022 by root via cibadmin on virt-524
>     * 2 nodes configured
>     * 3 resource instances configured

>   Node List:
>     * Online: [ virt-524 virt-530 ]

>   Full List of Resources:
>     * fence-virt-524	(stonith:fence_xvm):	 Started virt-524
>     * fence-virt-530	(stonith:fence_xvm):	 Started virt-530
>     * dummy	(ocf::pacemaker:Dummy):	 Started virt-524 (Monitoring)

>   Daemon Status:
>     corosync: active/disabled
>     pacemaker: active/disabled
>     pcsd: active/enabled

Stop the cluster and run "pcs status" while the resource is stopping:

>   [root@virt-524 ~]# pcs cluster stop --all &>/dev/null & sleep 5; pcs status; echo $?
>   [1] 59188
>   Error: error running crm_mon, is pacemaker running?
>     crm_mon: Error: cluster is not available on this node
>   1


Result: crm_mon fails and does not show the resource stopping.
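
The regression can also be observed by invoking crm_mon directly during the shutdown window (an illustrative check, not part of the original verification):

    # Run while 'pcs cluster stop --all' is still stopping resources:
    crm_mon -1; echo $?
    # Pre-fix this prints a nonzero exit status; post-fix it prints 0.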


after fix
---------

>   [root@virt-510 ~]# rpm -q pacemaker
>   pacemaker-2.1.5-2.el8.x86_64

Create a resource that takes long enough to stop:

>   [root@virt-510 ~]# pcs resource create dummy ocf:pacemaker:Dummy op_sleep=10

Wait until it is started:

>   [root@virt-510 ~]# pcs status
>   Cluster name: STSRHTS18403
>   Status of pacemakerd: 'Pacemaker is running' (last updated 2022-12-16 15:52:18 +01:00)
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-510 (version 2.1.5-2.el8-631339ca5aa) - partition with quorum
>     * Last updated: Fri Dec 16 15:52:18 2022
>     * Last change:  Fri Dec 16 15:51:50 2022 by root via cibadmin on virt-510
>     * 2 nodes configured
>     * 3 resource instances configured

>   Node List:
>     * Online: [ virt-509 virt-510 ]

>   Full List of Resources:
>     * fence-virt-509	(stonith:fence_xvm):	 Started virt-509
>     * fence-virt-510	(stonith:fence_xvm):	 Started virt-510
>     * dummy	(ocf::pacemaker:Dummy):	 Started virt-509 (Monitoring)

>   Daemon Status:
>     corosync: active/enabled
>     pacemaker: active/enabled
>     pcsd: active/enabled

Stop the cluster and run "pcs status" while the resource is stopping:

>   [root@virt-510 ~]# pcs cluster stop --all &>/dev/null & sleep 5; pcs status; echo $?
>   [1] 624761
>   Cluster name: STSRHTS18403
>   Status of pacemakerd: 'Pacemaker daemons are shutting down' (last updated 2022-12-16 15:57:28 +01:00)
>   Cluster Summary:
>     * Stack: corosync
>     * Current DC: virt-510 (version 2.1.5-2.el8-631339ca5aa) - partition with quorum
>     * Last updated: Fri Dec 16 15:57:28 2022
>     * Last change:  Fri Dec 16 15:51:50 2022 by root via cibadmin on virt-510
>     * 2 nodes configured
>     * 3 resource instances configured

>   Node List:
>     * Online: [ virt-509 virt-510 ]

>   Full List of Resources:
>     * fence-virt-509	(stonith:fence_xvm):	 Stopped
>     * fence-virt-510	(stonith:fence_xvm):	 Stopped
>     * dummy	(ocf::pacemaker:Dummy):	 Stopping virt-509

>   Daemon Status:
>     corosync: active/enabled
>     pacemaker: inactive/enabled
>     pcsd: active/enabled
>   0


Result: crm_mon does not fail; it returns 0 and shows the resource stopping.


marking VERIFIED in pacemaker-2.1.5-2.el8

Comment 17 errata-xmlrpc 2023-05-16 08:35:22 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (pacemaker bug fix and enhancement update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:2818