Bug 1324240 - Rebase to sync sbd with upstream
Summary: Rebase to sync sbd with upstream
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: sbd
Version: 7.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: 7.3
Assignee: Klaus Wenninger
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1337236
 
Reported: 2016-04-05 23:03 UTC by Klaus Wenninger
Modified: 2019-03-06 00:56 UTC (History)
5 users

Fixed In Version: sbd-1.2.1-20.el7
Doc Type: Rebase: Bug Fixes and Enhancements
Doc Text:
Clone Of:
: 1337236
Environment:
Last Closed: 2016-11-04 03:04:53 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2016:2306 0 normal SHIPPED_LIVE sbd bug fix update 2016-11-03 13:40:15 UTC

Description Klaus Wenninger 2016-04-05 23:03:09 UTC
Description of problem:

SBD is not a very active upstream project, but recently there have been
a couple of interesting changes that touch a lot of files, such as
support for pacemaker-remote and a clarification of the licensing in
most of the files.
It is therefore much cleaner, and easier to support, to base on the
upstream version and patch what we want to do differently (e.g. not
supporting block devices for now), instead of doing it the other
way around.

Comment 3 michal novacek 2016-09-27 12:59:34 UTC
In our internal testing, the following scenarios work:

  * sbd as the only means of fencing
    (internally: pacemaker,recovery,recovery-all,sbd,sbd-only,kill_sysrq_panic)
  * sbd as the secondary fencing, where the primary fencing works
    (internally: pacemaker,recovery,recovery-all,sbd,sbd-with-other-fencing,kill_sysrq_panic)
  * sbd as the secondary fencing, where the primary fencing never works (/bin/false)
    (internally: pacemaker,recovery,recovery-all,sbd,sbd-with-fake-fencing,kill_sysrq_panic)
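For context, the "sbd as the only means of fencing" case above corresponds to a watchdog-only setup (no shared block device, in line with the rebase's scope). A rough sketch on RHEL 7 might look like the following; the device path and timeout values are illustrative assumptions, not taken from the actual test environment:

```shell
# Sketch of a watchdog-only sbd setup (illustrative values, not
# the tested configuration).

# /etc/sysconfig/sbd on each node (no shared block device):
#   SBD_WATCHDOG_DEV=/dev/watchdog
#   SBD_WATCHDOG_TIMEOUT=5
#   SBD_DELAY_START=no

# Enable sbd in the cluster, then restart it so the change takes effect:
pcs stonith sbd enable
pcs cluster stop --all
pcs cluster start --all

# With no stonith devices configured, pacemaker falls back to watchdog
# self-fencing; stonith-watchdog-timeout must exceed the sbd watchdog
# timeout above:
pcs property set stonith-watchdog-timeout=10
```

The secondary-fencing scenarios additionally configure a regular stonith device, with sbd's watchdog acting as the fallback when that device fails.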

This means that the expected recovery behaviour of the cluster occurs
in the following scenarios:
    * killing node where the resource is active
    * causing one node to fall off the network
    * causing switch failure (nodes do not see each other)
    * killing random (one or more) nodes:
        * less than quorum
        * more than quorum
    * killing pacemaker on one or more nodes


Marking verified.

Comment 5 errata-xmlrpc 2016-11-04 03:04:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2306.html

