Bug 880249 - Deleting Master/slave set results in node fence
Summary: Deleting Master/slave set results in node fence
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: pacemaker
Version: 6.4
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Andrew Beekhof
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On: 893221
Blocks: 768522 895654
 
Reported: 2012-11-26 15:07 UTC by Jaroslav Kortus
Modified: 2013-02-21 09:51 UTC
CC List: 5 users

Fixed In Version: pacemaker-1.1.8-7.el6
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-02-21 09:51:27 UTC
Target Upstream Version:
Embargoed:


Attachments
crm_report output (65.37 KB, application/x-bzip2) - 2012-12-11 16:12 UTC, Jaroslav Kortus
crm_report output of fence on dummystateful resource (128.05 KB, application/x-bzip2) - 2013-01-11 16:37 UTC, Jaroslav Kortus
crm_report for issue in comment 21 (296.25 KB, application/x-bzip2) - 2013-01-22 14:16 UTC, Jaroslav Kortus


Links
Red Hat Product Errata RHBA-2013:0375 (priority: normal, status: SHIPPED_LIVE): pacemaker bug fix and enhancement update - last updated 2013-02-20 20:52:23 UTC

Description Jaroslav Kortus 2012-11-26 15:07:15 UTC
Description of problem:
Deleting a master/slave resource results in one of the nodes (the elected master for the set) being fenced.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. pcs resource create dummystateful ocf:pacemaker:Stateful
2. pcs resource master MasterResource dummystateful
3. pcs resource delete MasterResource
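
For convenience, the same steps as a single script (a sketch only; the sleeps are arbitrary pauses to let each step settle):

#!/bin/bash
pcs resource create dummystateful ocf:pacemaker:Stateful
sleep 10   # let the resource start on the nodes (arbitrary pause)
pcs resource master MasterResource dummystateful
sleep 10   # wait for one instance to be promoted to Master (arbitrary pause)
pcs resource delete MasterResource
pcs status # failed stop/demote actions show up here and one node gets fenced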
  
Actual results:
* resource deleted
* failures in status report
* one of the nodes is fenced

Expected results:
* resource deleted as expected
* no node fenced


Additional info:
pcs status info at the time of the fence action:
Failed actions:
    dummystateful_stop_0 (node=marathon-03c1-node01, call=33, rc=8, status=complete): master
    dummystateful_demote_0 (node=marathon-03c1-node03, call=23, rc=7, status=complete): not running
    dummystateful_demote_0 (node=marathon-03c1-node02, call=34, rc=7, status=complete): not running

Comment 3 Andrew Beekhof 2012-12-04 03:03:49 UTC
Ok, we'll look into it.
If you still have the cluster set up, could you run crm_report for the time when the test was run?
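
For example, something along these lines (the timespec and node names are placeholders; the -f and --nodes options are the ones used later in this report):

crm_report -f "2012-11-26 14:30" --nodes "node1 node2 node3"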

Comment 5 Jaroslav Kortus 2012-12-11 16:12:15 UTC
Created attachment 661533 [details]
crm_report output

The output may be incomplete due to https://bugzilla.redhat.com/show_bug.cgi?id=886153.

Comment 6 David Vossel 2012-12-13 18:52:53 UTC
The logs are incomplete. We really need to see what is happening on node2; the logs only show node1. In this case node2 is the DC, so it will have all the pengine and transition information that gives us visibility into why the fencing operation occurred.

Comment 7 Andrew Beekhof 2012-12-14 01:03:01 UTC
David managed to reproduce this but it took him a few goes - it doesn't happen every time.

It's pretty clear from the output below (and I've confirmed by looking at the .dot file) that there is no ordering between the stop and demote operations. This can lead to the stop actions failing, for which the correct recovery is fencing.

I should be able to fix this shortly.  Definitely a blocker.



# tools/crm_simulate -Sx ~/rhbz880249/pe-error-0.bz2 -D foo.dot

Current cluster status:
Online: [ 18node1 18node2 18node3 ]

 shoot1	(stonith:fence_xvm):	Started 18node1
 shoot2	(stonith:fence_xvm):	Started 18node2
 dummystateful	(ocf::pacemaker:Stateful ORPHANED):	Master [ 18node2 18node1 18node3 ]

Transition Summary:
 * Demote  dummystateful	(Master -> Stopped 18node2)

Executing cluster transition:
 * Resource action: dummystateful   stop on 18node3
 * Resource action: dummystateful   stop on 18node1
 * Resource action: dummystateful   stop on 18node2
 * Resource action: dummystateful   demote on 18node3
 * Resource action: dummystateful   demote on 18node1
 * Resource action: dummystateful   demote on 18node2
 * Pseudo action:   all_stopped

Revised cluster status:
Online: [ 18node1 18node2 18node3 ]

 shoot1	(stonith:fence_xvm):	Started 18node1
 shoot2	(stonith:fence_xvm):	Started 18node2
 dummystateful	(ocf::pacemaker:Stateful ORPHANED):	Slave [ 18node2 18node1 18node3 ]
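
As an aside, the transition graph written with -D above can be rendered for inspection, assuming Graphviz is available (the output filename is just an example):

dot -Tsvg foo.dot -o foo.svg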

Comment 8 Andrew Beekhof 2012-12-14 01:04:14 UTC
(In reply to comment #5)
> Created attachment 661533 [details]
> crm_report output

How did you run crm_report for this?
It only contains the details for one of the nodes.

Comment 9 Andrew Beekhof 2012-12-14 01:14:50 UTC
A related patch has been committed upstream:
  https://github.com/beekhof/pacemaker/commit/c20ad90

with subject:

   High: PE: Bug rhbz#880249 - Ensure orphan masters are demoted before being stopped

Further details (if any):

Comment 10 Andrew Beekhof 2012-12-14 01:55:03 UTC
A related patch has been committed upstream:
  https://github.com/beekhof/pacemaker/commit/19484a4

with subject:

   High: PE: Bug rhbz#880249 - Teach the PE how to recover masters into primitives

Further details (if any):

 If a master/slave is replaced with a primitive before the old status
entries are cleaned up, the PE needs to be able to get resources from
the Master state to the Started state sanely.

Comment 11 Andrew Beekhof 2012-12-14 02:04:43 UTC
All good now.  Regression test added:
+do_test bug-rh-880249 "Handle replacement of an m/s resource with a primitive"


# tools/crm_simulate -Sx ~/rhbz880249/pe-error-1.bz2 -D foo.dot

Current cluster status:
Online: [ 18node1 18node2 18node3 ]

 shoot1	(stonith:fence_xvm):	Started 18node1
 shoot2	(stonith:fence_xvm):	Started 18node2
 dummystateful	(ocf::pacemaker:Stateful):	Master [ 18node2 18node1 18node3 ]

Transition Summary:
 * Demote  dummystateful	(Master -> Started 18node2)
 * Restart dummystateful	(Master 18node3)
 * Move    dummystateful	(Started 18node2 -> 18node3)

Executing cluster transition:
 * Resource action: dummystateful   demote on 18node3
 * Resource action: dummystateful   demote on 18node1
 * Resource action: dummystateful   demote on 18node2
 * Resource action: dummystateful   stop on 18node3
 * Resource action: dummystateful   stop on 18node1
 * Resource action: dummystateful   stop on 18node2
 * Pseudo action:   all_stopped
 * Resource action: dummystateful   start on 18node3

Revised cluster status:
Online: [ 18node1 18node2 18node3 ]

 shoot1	(stonith:fence_xvm):	Started 18node1
 shoot2	(stonith:fence_xvm):	Started 18node2
 dummystateful	(ocf::pacemaker:Stateful):	Started 18node3

Comment 12 Jaroslav Kortus 2012-12-14 12:01:58 UTC
Regarding comment 8: I ran crm_report -f "<timespec>" --nodes "<space-separated node names>" and supplied the ssh password when required.

Comment 13 David Vossel 2012-12-14 15:35:23 UTC
(In reply to comment #12)
> Regarding comment 8: I ran crm_report -f "<timespec>" --nodes
> "<space-separated node names>" and supplied the ssh password when required.

I believe this is the command I ran.

crm_report --cluster corosync --nodes '18node1 18node2 18node3' -f "2012-12-13 12:30:00"

Minus the cluster type, it is the same as yours.  I do have ssh keys on all my nodes, so I'm never prompted for a password.  That might help.  We need to get this worked out for you though.  Let me know if you need help debugging what's going on.  I believe we have a few hours that overlap on irc.
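
For reference, a minimal sketch of setting up passwordless ssh so crm_report can collect from every node without prompting (node names are the ones from my command above; root ssh access is assumed):

# on the node that runs crm_report
ssh-keygen -t rsa          # accept the defaults, empty passphrase
ssh-copy-id root@18node1   # repeat for each cluster node
ssh-copy-id root@18node2
ssh-copy-id root@18node3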

-- Vossel

Comment 14 Andrew Beekhof 2012-12-20 01:38:47 UTC
(In reply to comment #13)
> We need to get this worked out for you though. 

Agreed.  Jaroslav, are you running from within the cluster or from another machine?

Comment 15 Jaroslav Kortus 2012-12-20 09:54:04 UTC
The missing bits are due to bug 886151 (I had installed the dependency on the first node only; the rest did not visibly complain).

Comment 17 Jaroslav Kortus 2013-01-11 16:35:56 UTC
I'm still seeing the unwanted fencing behaviour with a plain dummystateful resource.

Scenario is as follows:
1. setup 3-node pacemaker cluster
2. on node2: pcs resource create dummystateful ocf:pacemaker:Stateful; sleep 10; pcs resource delete dummystateful; sleep 60; pcs resource create dummystateful ocf:pacemaker:Stateful
3. node2 gets fenced in 1-2 minutes
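
Step 2 above, broken out for readability (same commands and pauses, as a sketch):

#!/bin/bash
# run on node2
pcs resource create dummystateful ocf:pacemaker:Stateful
sleep 10
pcs resource delete dummystateful
sleep 60
pcs resource create dummystateful ocf:pacemaker:Stateful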

pacemaker-1.1.8-7.el6.x86_64.

Moving back to ASSIGNED, please kill this bug as well :).

Comment 18 Jaroslav Kortus 2013-01-11 16:37:30 UTC
Created attachment 676988 [details]
crm_report output of fence on dummystateful resource

crm_report collected during test in comment 17.

Comment 19 David Vossel 2013-01-11 21:03:54 UTC
(In reply to comment #17)
> I'm still seeing the unwanted fencing behaviour with plain dummystateful
> resource.
> 
> Scenario is as follows:
> 1. setup 3-node pacemaker cluster
> 2. on node2: pcs resource create dummystateful ocf:pacemaker:Stateful; sleep
> 10; pcs resource delete dummystateful; sleep 60; pcs resource create
> dummystateful ocf:pacemaker:Stateful
> 3. node2 gets fenced in 1-2 minutes

The above commands don't make a master/slave resource; they create an instance of the Stateful resource that is treated like a normal resource (no promote/demote actions take place).
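
For clarity, the distinction in pcs terms (a sketch using the same names as elsewhere in this report):

# plain primitive: only start/stop/monitor actions are scheduled
pcs resource create dummystateful ocf:pacemaker:Stateful

# master/slave set: wrapping the primitive is what adds promote/demote actions
pcs resource master MasterResource dummystateful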

Looking at the crm_report, this looks very similar to the problem I experienced recently in issue 893221.

Take a look at this comment: https://bugzilla.redhat.com/show_bug.cgi?id=893221#c3

My results show the stop action failing with nearly the exact same pcs commands you used.  Removing the pcs call to clear the resource from the lrmd on deletion fixed this.

Can you try the current upstream version of pcs, or any version of pcs with the patch given in issue 893221 to verify these two issues are related?

-- Vossel

Comment 20 Fabio Massimo Di Nitto 2013-01-15 08:02:26 UTC
Moving ON_QA. This bug will require pcs fixes that are already targeted for SNAP4, but otherwise there are no code changes for pacemaker.

Comment 21 Jaroslav Kortus 2013-01-22 14:15:33 UTC
I had this small script:
#!/bin/bash
pcs resource create dummystateful ocf:pacemaker:Stateful
sleep 5
pcs resource master MasterResource dummystateful
sleep 5
pcs resource delete MasterResource
pcs resource create dummystateful ocf:pacemaker:Stateful

After the last step finishes, about a minute later (while the newly created resource is still Stopped instead of Started), the following appears:

Failed actions:
    dummystateful_demote_0 (node=marathon-03c2-node03, call=-1, rc=1, status=Timed Out): unknown error
    dummystateful_demote_0 (node=marathon-03c2-node02, call=-1, rc=1, status=Timed Out): unknown error
    dummystateful_demote_0 (node=marathon-03c2-node01, call=-1, rc=1, status=Timed Out): unknown error

Can you please confirm whether this is related to this bug or to bug 902459? The good news is that it's no longer fencing the node and the resource eventually starts (although it increases the failcount on the nodes).

Comment 22 Jaroslav Kortus 2013-01-22 14:16:25 UTC
Created attachment 685178 [details]
crm_report for issue in comment 21

Comment 23 David Vossel 2013-01-22 16:30:01 UTC
This appears related to https://bugzilla.redhat.com/show_bug.cgi?id=893221

In bug 893221, pcs was calling crm_resource -C -r immediately after deleting the resource from the CIB. This causes some problems, which are outlined in this comment: https://bugzilla.redhat.com/show_bug.cgi?id=893221#c3.

Issue 893221 fixed the problem for all resource types except Master/Slave, which is what you are encountering here. Apparently there is a separate code path used in pcs to delete Master/Slave resources compared to everything else.

To verify this issue with the current version of pcs, you can bypass it by working against a CIB file, which avoids calling 'crm_resource -C' during the deletion. The script below should work.

---------
#!/bin/bash
pcs resource create dummystateful ocf:pacemaker:Stateful
sleep 5
pcs resource master MasterResource dummystateful
sleep 5
pcs cluster cib cib_file.xml
pcs -f cib_file.xml resource delete MasterResource
pcs -f cib_file.xml resource create dummystateful ocf:pacemaker:Stateful
pcs cluster push cib cib_file.xml



-- Vossel

Comment 24 Jaroslav Kortus 2013-01-23 10:50:43 UTC
I'm no longer able to reproduce the issue. The issue in comment 21 was indeed caused by the missing pcs patch (included in 0.9.26-10).

Thank you for fixing this bug.

Marking as verified with:
pcs-0.9.26-10.el6.noarch
pacemaker-1.1.8-7.el6.x86_64

Comment 26 errata-xmlrpc 2013-02-21 09:51:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-0375.html

