Bug 1336948

Summary: [NFS-Ganesha]: stonith-enabled option not set with new versions of cman, pacemaker, corosync, and pcs
Product: [Community] GlusterFS
Reporter: Kaleb KEITHLEY <kkeithle>
Component: common-ha
Assignee: Kaleb KEITHLEY <kkeithle>
Status: CLOSED CURRENTRELEASE
QA Contact:
Severity: high
Docs Contact:
Priority: unspecified
Version: 3.7.11
CC: asoman, bugs, jthottan, kgaillot, kkeithle, ndevos, nlevinki, skoduri, sraj, storage-qa-internal
Target Milestone: ---
Keywords: Triaged
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: glusterfs-3.7.12
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1336947
Environment:
Last Closed: 2016-06-28 11:35:21 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1334092, 1336945, 1336947
Bug Blocks:

Comment 1 Vijay Bellur 2016-05-17 21:43:29 UTC
REVIEW: http://review.gluster.org/14406 (common-ha: stonith-enabled option set error in new pacemaker) posted (#1) for review on release-3.7 by Kaleb KEITHLEY (kkeithle)

Comment 2 Vijay Bellur 2016-05-19 10:17:33 UTC
COMMIT: http://review.gluster.org/14406 committed in release-3.7 by Kaleb KEITHLEY (kkeithle) 
------
commit 4aa4eee079487061b620393e5058a84021259ad7
Author: Kaleb S KEITHLEY <kkeithle>
Date:   Tue May 17 17:40:36 2016 -0400

    common-ha: stonith-enabled option set error in new pacemaker
    
    Setting the option too early results in an error in newer versions
    of pacemaker. Postpone setting the option in order for it to succeed.
    
    N.B. We do not use a fencing agent. Yes, we know this is "not supported."
    
    Backport of mainline
    >> http://review.gluster.org/#/c/14404/
    >> BUG: 1336945
    >> Change-Id: I86953fdd67e6736294dbd2d0795611837188bd9d
    release-3.8
    > http://review.gluster.org/#/c/14405/
    > BUG: 1336947
    > Change-Id: I402992bcb90a92dbcc915a75fe03b25221625e98
    
    Change-Id: I6f75a4d67618b41a4b30c341f5b7e9ea976b553e
    BUG: 1336948
    Signed-off-by: Kaleb S KEITHLEY <kkeithle>
    Reviewed-on: http://review.gluster.org/14406
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
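
For readers unfamiliar with the failure mode, the following is a minimal sketch of the ordering change the commit message describes: bring the cluster up first, then set stonith-enabled once the cluster is ready to accept CIB updates. It uses the pcs 0.9 command syntax current at the time of this bug; the cluster name, node names, and the sleep-based settle step are illustrative placeholders, not code taken from ganesha-ha.sh.

    #!/bin/bash
    # Illustrative sketch only -- not the ganesha-ha.sh change itself.

    # Bring the cluster up first (pcs 0.9 syntax; names are placeholders).
    pcs cluster auth node1 node2
    pcs cluster setup --name ganesha-ha-demo node1 node2
    pcs cluster start --all

    # Setting cluster-wide properties immediately after "cluster start" fails
    # with newer pacemaker because the CIB is not yet ready to accept updates.
    # The fix postpones the call until the cluster has settled; a plain sleep
    # stands in here for the real readiness check (see comment 4).
    sleep 30
    pcs property set stonith-enabled=false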

Comment 3 Vijay Bellur 2016-05-19 17:09:43 UTC
REVIEW: http://review.gluster.org/14428 (common-ha: wait for cluster to elect DC before accessing CIB) posted (#1) for review on release-3.7 by Kaleb KEITHLEY (kkeithle)

Comment 4 Vijay Bellur 2016-05-24 09:36:38 UTC
COMMIT: http://review.gluster.org/14428 committed in release-3.7 by Kaleb KEITHLEY (kkeithle) 
------
commit df931b2c6b2755e57b9d49e3fb045646e6e892fd
Author: Kaleb S KEITHLEY <kkeithle>
Date:   Thu May 19 13:08:38 2016 -0400

    common-ha: wait for cluster to elect DC before accessing CIB
    
    Access attempts, e.g. `pcs property set stonith-enabled=false`,
    will fail (or time out) if attempted "too early", i.e. before
    the cluster has elected its DC.
    
    see https://bugzilla.redhat.com/show_bug.cgi?id=1336947#c3 and
    https://bugzilla.redhat.com/show_bug.cgi?id=1320740
    
    Change-Id: Ifc0aa7ce652c1da339b9eb8fe17e40e8a09b1096
    BUG: 1336948
    Signed-off-by: Kaleb S KEITHLEY <kkeithle>
    Reviewed-on: http://review.gluster.org/14428
    Smoke: Gluster Build System <jenkins.com>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.com>
    Reviewed-by: soumya k <skoduri>
    Reviewed-by: jiffin tony Thottan <jthottan>
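
A minimal sketch of the wait-for-DC idea this commit describes, assuming that `pcs status` prints a "Current DC:" line reading "NONE" until an election has completed; the helper name wait_for_dc and the 60-second timeout are illustrative, not the ganesha-ha.sh implementation.

    #!/bin/bash
    # Illustrative sketch only -- not the committed ganesha-ha.sh code.

    wait_for_dc()
    {
        local timeout=${1:-60}   # seconds to wait before giving up (illustrative)
        local elapsed=0
        local dc=""

        while :; do
            # Assumes "pcs status" prints "Current DC: NONE" until the cluster
            # has elected its Designated Controller.
            dc=$(pcs status 2>/dev/null | grep "Current DC:" | grep -v "NONE")
            if [ -n "${dc}" ]; then
                return 0
            fi
            if [ ${elapsed} -ge ${timeout} ]; then
                echo "no DC elected after ${timeout}s, giving up" >&2
                return 1
            fi
            sleep 1
            elapsed=$((elapsed + 1))
        done
    }

    # Only touch cluster-wide properties once a DC exists; earlier attempts
    # fail or time out with newer pacemaker.
    wait_for_dc 60 && pcs property set stonith-enabled=false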

Comment 5 Kaushal 2016-06-28 12:18:31 UTC
This bug is being closed because a release that should address the reported issue has been made available. If the problem persists with glusterfs-3.7.12, please open a new bug report.

glusterfs-3.7.12 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://www.gluster.org/pipermail/gluster-devel/2016-June/049918.html
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user