Bug 1519379
| Summary: | Pacemaker does not immediately recover from failed demote actions |
|---|---|
| Product: | Red Hat Enterprise Linux 7 |
| Component: | pacemaker |
| Version: | 7.4 |
| Status: | CLOSED ERRATA |
| Severity: | medium |
| Priority: | medium |
| Reporter: | Ondrej Faměra <ofamera> |
| Assignee: | Ken Gaillot <kgaillot> |
| QA Contact: | cluster-qe <cluster-qe> |
| CC: | abeekhof, cfeist, cluster-maint, dkinkead, jenander, mnovacek, ofamera, qguo |
| Target Milestone: | rc |
| Target Release: | 7.6 |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Fixed In Version: | pacemaker-1.1.18-12.el7 |
| Doc Type: | Bug Fix |
| Type: | Bug |
| Last Closed: | 2018-10-30 07:57:39 UTC |

Doc Text:

Cause: If a demote action failed, Pacemaker would always bring the resource to a full stop, even if the configuration specified a restart.

Consequence: Failed demote actions would leave the resource stopped until the next natural recalculation (an external event or the cluster-recheck-interval).

Fix: Pacemaker now follows the configured recovery policy for failed demote actions.

Result: Failed demote actions lead to immediate recovery as specified by the configuration.
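To illustrate the fixed behavior described in the Doc Text: the recovery policy for a failed operation comes from that operation's on-fail setting, so it can be made explicit on the demote operation itself. A minimal sketch, assuming a promotable resource named DB2_HADR as in the logs below; the exact command form is illustrative, not taken from this bug:

```sh
# Illustrative only: spell out the recovery policy for a failed demote.
# With the fix, Pacemaker honors this policy immediately instead of
# forcing a full stop with no subsequent start.
pcs resource update DB2_HADR op demote interval=0s on-fail=restart
```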
Description
Ondrej Faměra
2017-11-30 16:26:20 UTC
Looking at the logs, at 13:37:36 everything is fine -- fastvm-rhel-7-4-96 is running the master, and fastvm-rhel-7-4-95 is running the slave (pe-input-233). Node 95 cannot be the master, according to the agent:

Nov 30 13:37:36 db2(DB2_HADR)[21237]: WARNING: DB2 database db2inst1(0)/sample in status STANDBY/DISCONNECTED_PEER/DISCONNECTED can never be promoted

At 13:37:49 (presumably when the DB2 processes were killed), the monitor fails, and the first thing that happens is the resource agent takes away node 96's ability to be master:

Nov 30 13:37:49 [10170] fastvm-rhel-7-4-96 attrd: info: attrd_peer_update: Setting master-DB2_HADR[fastvm-rhel-7-4-96]: 10000 -> (null) from fastvm-rhel-7-4-96

The cluster responds appropriately, by wanting to demote the instance (pe-input-234). Before it can try, the resource's fail count gets updated, and the failed operation is recorded. The cluster responds appropriately again, by wanting to demote, stop, and start the instance as a slave (pe-input-235 and pe-input-236).

Now it tries the demote, and the resource agent returns failure. As a side note, the agent returns 1 (generic error), not 7 (not running). 7 may not have been appropriate in this case (it should be returned only if the resource is cleanly stopped), but if the agent had returned 7, Pacemaker would have done the right thing from this point.

The failed operation is recorded, and the fail count is updated again. Now, Pacemaker makes a mistake: it decides to stop the instance, without a subsequent start (pe-input-237). Currently, Pacemaker responds to all failed demotes with a full stop, but in cases such as this, it should continue with a start.

At the next decision point, Pacemaker correctly determines that the start is needed. This is why recovery works with record-pending=true or at the cluster-recheck-interval -- both of those trigger a new decision. I will investigate a fix.

Bumping to 7.6 due to time constraints. We can consider a 7.5.z if necessary.

Fixed upstream as of commit a962eb7

qa_ack+: Ondrej is able to test it in the original environment

The reporter confirmed the problem fixed in version pacemaker-cli-1.1.18-11.el7_5.3.x86_64.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:3055
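As context for the return-code note in the analysis above: OCF resource agents signal a cleanly stopped resource with 7 (OCF_NOT_RUNNING) rather than 1 (OCF_ERR_GENERIC), and that distinction is what lets the cluster skip a redundant stop. A hedged sketch of what such a demote handler could look like -- this is not the actual db2 agent, and db2_instance_running and demote_to_standby are hypothetical helpers:

```sh
#!/bin/sh
# Illustrative OCF demote handler (not the real db2 agent).
# Standard OCF exit codes come from ocf-shellfuncs:
#   0 = OCF_SUCCESS, 1 = OCF_ERR_GENERIC, 7 = OCF_NOT_RUNNING
. "${OCF_FUNCTIONS_DIR:-/usr/lib/ocf/lib/heartbeat}/ocf-shellfuncs"

db2_demote() {
    if ! db2_instance_running; then
        # Resource is already cleanly stopped: report 7 so the cluster
        # knows no further stop is needed before recovery.
        return $OCF_NOT_RUNNING
    fi
    if ! demote_to_standby; then
        # Demotion genuinely failed while the instance is still up.
        return $OCF_ERR_GENERIC
    fi
    return $OCF_SUCCESS
}
```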
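The two mitigations mentioned in the analysis (useful before the fixed package is installed) can both be set cluster-wide, since each one triggers a fresh scheduling decision in which the missing start is computed. A sketch using the pcs CLI shipped with RHEL 7; the 60s interval is an arbitrary example value:

```sh
# Illustrative workarounds: either setting causes a new transition soon
# after the failed demote, so the stopped resource is started again.
pcs property set cluster-recheck-interval=60s
pcs resource op defaults record-pending=true
```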