Bug 1464068
| Summary: | [GANESHA] pcs status shows all nodes in started state for ~15 mins even when hit "partition WITHOUT quorum" with IO's still resuming | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Manisha Saini <msaini> |
| Component: | pacemaker | Assignee: | Ken Gaillot <kgaillot> |
| Status: | CLOSED ERRATA | QA Contact: | cluster-qe <cluster-qe> |
| Severity: | urgent | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 7.4 | CC: | abeekhof, aherr, cfeist, cluster-maint, fwestpha, jruemker, jthottan, kgaillot, kkeithle, mnovacek, nbarcet, phagara, rhs-bugs, rkhan, skoduri, storage-qa-internal |
| Target Milestone: | rc | Keywords: | ZStream |
| Target Release: | 7.5 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | pacemaker-1.1.18-1.el7 | Doc Type: | If docs needed, set a value |
| Doc Text: | Previously, quorum loss did not trigger Pacemaker to recheck resource placement. As a consequence, in certain situations Pacemaker required a long time, up to the cluster recheck interval, before stopping resources after quorum loss. This happened only when several conditions were met: a node that was correctly shutting down dropped the cluster below quorum; that node was not running any resources at the time; and a cluster transition was already in progress. With this update, Pacemaker always cancels the current transition when quorum is lost and recalculates resource placement immediately. As a result, the long delay no longer occurs. | | |
| Story Points: | --- | | |
| Clone Of: | 1463992 | | |
| | 1481140 (view as bug list) | Environment: | |
| Last Closed: | 2018-04-10 15:30:29 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1463992, 1481140 | | |
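For context, the "~15 mins" in the summary matches Pacemaker's default cluster-recheck-interval of 15 minutes, which is how long the resource stop could be delayed before this fix. A minimal sketch, assuming a RHEL 7 cluster managed with pcs, for checking whether a system carries the fixed build and which recheck interval is in effect:

```
# Is the installed pacemaker at or past the fixed build (pacemaker-1.1.18-1.el7)?
rpm -q pacemaker

# Show the effective cluster-recheck-interval (--all includes defaults for
# properties that have not been set explicitly; the default is 15 minutes)
pcs property list --all | grep cluster-recheck-interval
```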
Comment 4
Ken Gaillot
2017-06-23 17:07:42 UTC
The issue is also observed on RHEL 7.3 and RHGS 3.2:

```
# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.3 (Maipo)

# rpm -qa | grep ganesha
nfs-ganesha-2.4.1-11.el7rhgs.x86_64
nfs-ganesha-gluster-2.4.1-11.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-18.4.el7rhgs.x86_64
```

Preliminary investigation suggests that quorum loss due to a clean node shutdown has never triggered an immediate recheck of resource placement, though it clearly should. I'm not sure why this hasn't been more of an issue before, so I'm still investigating whether anything changed recently to make this more likely to have an effect. We're past the deadlines to make it into 7.4 GA, but I will ask for a z-stream.

Fix is upstream as of commit 0b68905.

The issue only occurs under a fairly narrow set of circumstances:

- A node cleanly shutting down drops the cluster below quorum
- The node was not running any resources at the time (e.g. it was in standby mode)
- A transition was in progress

Testing procedure:

1. Configure a cluster of at least three nodes, one dummy resource that takes a long time to stop, and at least one other resource.
2. Stop enough nodes so that the cluster is one node away from losing quorum.
3. Put one of the remaining nodes in standby, and wait until it has no resources running on it.
4. Disable the dummy resource so that it initiates a stop, and before it completes the stop, shut down the standby node.

Before the change, the cluster will not stop the remaining resource(s) on the active node(s) until the next cluster-recheck-interval. After the change, the cluster will immediately stop all remaining resources.

Unable to reproduce the issue using the provided procedure (1.1.16-12.el7). I've set the cluster-recheck-interval attribute to 3600 seconds and created two ocf:pacemaker:Dummy resources, one of them with the op_sleep attribute set to 60 seconds (and all operation timeouts adjusted to 90s). Both resources stop immediately upon quorum loss no matter how the standby node gets shut down (panic, clean system shutdown, pcs cluster stop). Is there any other condition required to trigger this bug?

(In reply to Patrik Hagara from comment #20)
> Unable to reproduce the issue using provided procedure (1.1.16-12.el7). I've
> set the cluster-recheck-interval attribute to 3600 seconds and created two
> ocf:pacemaker:Dummy resources, one of them has op_sleep attribute set to 60
> seconds (and all operation timeouts adjusted to 90s). Both resources stop
> immediately upon quorum loss no matter how the standby node gets shut down
> (panic, clean system shutdown, pcs cluster stop). Is there any other
> condition required to trigger this bug?

That's surprising, I thought this one was pretty reliable. I'm guessing something else must be happening in your cluster at the same time as quorum loss. Can you attach logs?

The only thing I can think of is to make sure record-pending=false (the default).

BTW, the proper behavior is expected if the standby node is panicked and fenced. It's only a clean quorum loss that triggers the behavior.

I think I may have confused two bugs when describing the reproducer. Try it again without the slow resource.

Managed to reproduce with a 7-node cluster on 1.1.16-8.el7-94ff4df (stonith disabled, cluster-recheck-interval set to a high value and a single ocf:heartbeat:Dummy resource).
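A minimal command sketch of the setup and trigger described in the testing procedure above, assuming a pcs-managed cluster; the resource names (slow-dummy, other-dummy) and node name (node3) are illustrative, not taken from the bug:

```
# Make the recheck window long enough that a delayed stop is easy to observe
pcs property set cluster-recheck-interval=3600s

# A dummy resource whose actions take a long time (op_sleep is a parameter of
# the ocf:pacemaker:Dummy agent), plus at least one other resource
pcs resource create slow-dummy ocf:pacemaker:Dummy op_sleep=60 \
    op start timeout=90s op stop timeout=90s op monitor interval=30s timeout=90s
pcs resource create other-dummy ocf:pacemaker:Dummy

# With the cluster already one node away from losing quorum:
pcs cluster standby node3         # drain a remaining node of its resources
pcs resource disable slow-dummy   # starts a slow stop, so a transition is in progress
pcs cluster stop node3            # clean shutdown of the standby node drops quorum
```

Per the description above, the unfixed packages would then leave other-dummy running until the next cluster recheck, while the fixed packages stop it immediately.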
The first clean node shutdown did not trigger the bug; the trick was to:

1) "pcs cluster stop" 3 out of 7 nodes
2) put another one into standby
3) cleanly shut down the standby node
4) "pcs cluster start" one of the stopped nodes
5) put that one into standby
6) and then cleanly shut it down

The dummy resource will stay in the "Started" role until the next cluster recheck.

Same steps on 1.1.18-9.el7-2b07d5c5a9 result in the dummy resource getting stopped immediately after quorum loss. Marking verified.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:0860
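For reference, the reproduction sequence above expressed as pcs commands. The node names (node1..node7) and the resource name (dummy) are assumed for illustration; the setup matches the description (stonith disabled, a high cluster-recheck-interval, one ocf:heartbeat:Dummy resource):

```
# Setup, per the reproduction notes above
pcs property set stonith-enabled=false cluster-recheck-interval=3600s
pcs resource create dummy ocf:heartbeat:Dummy

pcs cluster stop node5 node6 node7   # 1) stop 3 of the 7 nodes (quorum still held by 4)
pcs cluster standby node4            # 2) put another node into standby
pcs cluster stop node4               # 3) cleanly shut down the standby node
pcs cluster start node5              # 4) start one of the stopped nodes again
pcs cluster standby node5            # 5) put that node into standby
pcs cluster stop node5               # 6) cleanly shut it down, losing quorum again

# On a surviving node: with the unfixed packages, "dummy" remains in the
# Started role until the next cluster recheck
pcs status resources
```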