Bug 1519379 - Pacemaker does not immediately recover from failed demote actions
Summary: Pacemaker does not immediately recover from failed demote actions
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: pacemaker
Version: 7.4
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: 7.6
Assignee: Ken Gaillot
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-30 16:26 UTC by Ondrej Faměra
Modified: 2021-03-11 16:28 UTC
CC List: 8 users

Fixed In Version: pacemaker-1.1.18-12.el7
Doc Type: Bug Fix
Doc Text:
Cause: If a demote action failed, Pacemaker would always bring the resource to a full stop, even if the configuration specified a restart. Consequence: Failed demote actions would lead to the resource being stopped until the next natural recalculation (external event or cluster-recheck-interval). Fix: Pacemaker follows the configured policy for recovery from failed demote actions. Result: Failed demote actions lead to immediate recovery as specified by the configuration.
Clone Of:
Environment:
Last Closed: 2018-10-30 07:57:39 UTC
Target Upstream Version:
Embargoed:


Attachments
Collected data mentioned in BZ description (450.03 KB, application/x-gzip), attached 2017-11-30 16:26 UTC by Ondrej Faměra


Links
Red Hat Knowledge Base (Solution) 3321711, last updated 2018-01-16 12:19:44 UTC
Red Hat Product Errata RHBA-2018:3055, last updated 2018-10-30 07:59:13 UTC

Description Ondrej Faměra 2017-11-30 16:26:20 UTC
Created attachment 1360990
Collected data mentioned in BZ description

=== Description of problem:
Pacemaker seems to wait for the 'cluster-recheck-interval' time (by default 15 minutes)
before continuing with operations that should happen immediately.
While nothing is happening, 'crm_simulate -LR' shows outstanding transition actions
that pacemaker plans to perform, but they do not happen until "some action" occurs in the cluster.
"Some action" can be reaching the 'cluster-recheck-interval' or the creation/deletion of cluster resources.
Only then do the outstanding actions continue.
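
For illustration (not part of the original report), the pending actions and the current
status can be inspected with:
  # crm_simulate -LR
  # crm_mon -1
where 'crm_simulate -LR' reads the live CIB and shows the actions pacemaker still intends
to run, and 'crm_mon -1' prints a one-shot snapshot of the cluster status.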

=== Version-Release number of selected component (if applicable):
pacemaker-1.1.16-12.el7_4.4

=== How reproducible:
Always, with the DB2 Master/Slave resource, after a failure of the Master instance.

=== Steps to Reproduce:
1. Have the cluster running the Master/Slave DB2 resource from bz1516180 (so it can be promoted to Master after being killed)
2. Run 'kill -9 21448' against the processes run by the DB2 Master instance
  (on the node running the Master instance, grep for the 'db2wdog' process to find the PID)
  # ps aux|grep db2wdog
  root     21448  0.0  5.4 1191508 48220 ?       Sl   16:47   0:00 db2wdog 0 [db2inst1]
3. Observe the actions in the cluster (one way to do this is shown below).
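
An illustrative way to watch the resource operations as they are executed (a sketch, not
taken from the original report):
  # crm_mon -o
('-o/--operations' includes the resource operation history in the crm_mon output.)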

=== Actual results:
Master DB2 instance:
- fails the 'monitor' operation
- fails the 'demote' operation
- performs the 'stop' operation
- [!!] waits for 15 minutes (cluster-recheck-interval)
- performs the 'start' operation
- performs the 'promote' operation if possible

=== Expected results:
Master DB2 instance:
- fails the 'monitor' operation
- fails the 'demote' operation
- performs the 'stop' operation
- no unnecessary waiting
- performs the 'start' operation
- performs the 'promote' operation if possible

=== Additional info:
Cluster behaves as expected when 'record-pending=true' is set on all resources.
  # pcs resource op defaults record-pending=true
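To confirm the default took effect, the current operation defaults can be listed
(illustrative; the exact output format may differ):
  # pcs resource op defaults
  record-pending: true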

This behaviour reproduces very consistently with the DB2 resource agent.

Decreasing 'cluster-recheck-interval' decreases the time it takes for the actions to
continue, so this setting is suspected to be relevant here.
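
As an illustration only (the value below is an arbitrary example, not a recommendation),
the interval can be lowered and checked with:
  # pcs property set cluster-recheck-interval=60s
  # pcs property show cluster-recheck-interval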

Attached to this bugzilla (attachment.tar.gz) are logs and tests that were done in 2 scenarios:
- with_long_delay - original 15 minute issue
- with_record_pending - the improvement with 'record-pending=true'
(included are: crm_report, corosync.log, test-* with steps and timings, PCMK_DEBUG=yes in all tests)

If needed, I can set up dedicated systems running the above configuration for tests.

Comment 2 Ken Gaillot 2017-12-15 22:17:46 UTC
Looking at the logs, at 13:37:36 everything is fine -- fastvm-rhel-7-4-96 is running the master, and fastvm-rhel-7-4-95 is running the slave (pe-input-233). Node 95 cannot be the master, according to the agent:

Nov 30 13:37:36  db2(DB2_HADR)[21237]:    WARNING: DB2 database db2inst1(0)/sample in status STANDBY/DISCONNECTED_PEER/DISCONNECTED can never be promoted


At 13:37:49 (presumably when the DB2 processes were killed), the monitor fails, and the first thing that happens is the resource agent takes away node 96's ability to be master:

Nov 30 13:37:49 [10170] fastvm-rhel-7-4-96      attrd:     info: attrd_peer_update:     Setting master-DB2_HADR[fastvm-rhel-7-4-96]: 10000 -> (null) from fastvm-rhel-7-4-96

The cluster responds appropriately, by wanting to demote the instance (pe-input-234). Before it can try, the resource's fail count gets updated, and the failed operation is recorded. The cluster responds appropriately again, by wanting to demote, stop, and start the instance as a slave (pe-input-235 and pe-input-236).

Now it tries the demote, and the resource agent returns failure. As a side note, the agent returns 1 (generic error), not 7 (not running). 7 may not have been appropriate in this case (it should be returned only if it's cleanly stopped), but if it had returned 7, pacemaker would have done the right thing from this point.
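
For context, a minimal sketch of how an OCF agent's demote action can distinguish these two
return codes, using the standard constants from ocf-shellfuncs; this is not the actual db2
agent code, and the helper functions are hypothetical:

  #!/bin/sh
  : ${OCF_ROOT=/usr/lib/ocf}
  . ${OCF_ROOT}/lib/heartbeat/ocf-shellfuncs     # defines OCF_SUCCESS=0, OCF_ERR_GENERIC=1, OCF_NOT_RUNNING=7

  db2_demote() {
      if ! db2_instance_running; then            # hypothetical helper: is the instance up at all?
          # already cleanly stopped: report "not running"
          return $OCF_NOT_RUNNING                # 7
      fi
      db2_do_demote || return $OCF_ERR_GENERIC   # hypothetical helper; 1 = generic error
      return $OCF_SUCCESS                        # 0
  }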

The failed operation is recorded, and the fail count is updated again. Now, pacemaker makes a mistake: it decides to stop the instance, without a subsequent start (pe-input-237). Currently, pacemaker responds to all failed demotes with a full stop. But in cases such as this, we should continue with a start.

At the next decision point, Pacemaker correctly determines that the start is needed. This is why it works with record-pending=true or at the cluster-recheck-interval -- both of those trigger a new decision.

I will investigate a fix.

Comment 3 Ken Gaillot 2017-12-19 23:38:19 UTC
Bumping to 7.6 due to time constraints. We can consider a 7.5.z if necessary.

Comment 4 Ken Gaillot 2018-01-15 17:59:33 UTC
Fixed upstream as of commit a962eb7

Comment 12 michal novacek 2018-04-17 12:47:02 UTC
qa_ack+: Ondrej is able to test it in the original environment

Comment 17 michal novacek 2018-08-27 11:16:12 UTC
The reporter confirms the problem is fixed in version
pacemaker-cli-1.1.18-11.el7_5.3.x86_64.
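
For reference, an illustrative way to confirm which build is installed on a cluster node:
  # rpm -q pacemaker pacemaker-cli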

Comment 19 errata-xmlrpc 2018-10-30 07:57:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3055

