Bug 1184344

Summary: [SNAPSHOT]: In an n-way replica volume, a snapshot should not be taken if even one brick is down.
Product: [Community] GlusterFS
Reporter: Avra Sengupta <asengupt>
Component: snapshot
Assignee: Avra Sengupta <asengupt>
Status: CLOSED CURRENTRELEASE
Severity: unspecified
Priority: unspecified
Version: pre-release
CC: bugs, gluster-bugs, rjoseph, senaik, storage-qa-internal
Keywords: ZStream
Whiteboard: SNAPSHOT
Fixed In Version: glusterfs-3.7.0
Doc Type: Bug Fix
Clone Of: 1182554
Last Closed: 2015-05-14 17:29:02 UTC
Type: Bug
Bug Depends On: 1182554

Description Avra Sengupta 2015-01-21 07:06:55 UTC
+++ This bug was initially created as a clone of Bug #1182554 +++

Description of problem:
In an n-way replica volume, snapshot create should fail if even one brick is down.


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
snapshot create checks for quorum; if quorum is met, the snapshot is taken even when some bricks are down.


Expected results:
snapshot create should fail if even one brick is down.

Additional info:
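The expected change in the precheck can be sketched in Python. This is purely illustrative: glusterd is written in C, and the function and field names below are hypothetical, not taken from the actual glusterd source.

```python
# Illustrative sketch of the snapshot precheck change (hypothetical names,
# not glusterd's actual code).

def can_create_snapshot_old(bricks_up: int, bricks_total: int) -> bool:
    """Pre-fix behavior: allow the snapshot when a simple majority
    (quorum) of bricks is up."""
    return bricks_up > bricks_total // 2

def can_create_snapshot_new(bricks_up: int, bricks_total: int) -> bool:
    """Post-fix behavior: require every brick to be up, even in an
    n-way (n >= 3) replica."""
    return bricks_up == bricks_total

# A 3-way replica with one brick down: the old check wrongly allows
# the snapshot, the new check correctly refuses it.
three_way_one_down_old = can_create_snapshot_old(2, 3)
three_way_one_down_new = can_create_snapshot_new(2, 3)
```

The key difference is the comparison: majority quorum (`>`, against half the brick count) versus strict equality with the total brick count.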

Comment 1 Anand Avati 2015-01-21 09:54:07 UTC
REVIEW: http://review.gluster.org/9470 (glusterd/snapshot: Fail snap create even if one brick is down.) posted (#1) for review on master by Avra Sengupta (asengupt)

Comment 2 Anand Avati 2015-01-22 08:39:48 UTC
REVIEW: http://review.gluster.org/9470 (glusterd/snapshot: Fail snap create even if one brick is down.) posted (#2) for review on master by Avra Sengupta (asengupt)

Comment 3 Anand Avati 2015-01-22 09:00:14 UTC
REVIEW: http://review.gluster.org/9470 (glusterd/snapshot: Fail snap create even if one brick is down.) posted (#3) for review on master by Avra Sengupta (asengupt)

Comment 4 Anand Avati 2015-01-23 05:48:15 UTC
REVIEW: http://review.gluster.org/9470 (glusterd/snapshot: Fail snap create even if one brick is down.) posted (#4) for review on master by Avra Sengupta (asengupt)

Comment 5 Anand Avati 2015-01-27 10:23:39 UTC
REVIEW: http://review.gluster.org/9470 (glusterd/snapshot: Fail snap create even if one brick is down.) posted (#5) for review on master by Avra Sengupta (asengupt)

Comment 6 Anand Avati 2015-01-28 06:41:03 UTC
REVIEW: http://review.gluster.org/9470 (glusterd/snapshot: Fail snap create even if one brick is down.) posted (#6) for review on master by Avra Sengupta (asengupt)

Comment 7 Anand Avati 2015-01-29 07:19:13 UTC
COMMIT: http://review.gluster.org/9470 committed in master by Kaushal M (kaushal) 
------
commit 4493bfd8421116b5f45638b2f839874921f73fb3
Author: Avra Sengupta <asengupt>
Date:   Wed Jan 21 08:25:23 2015 +0000

    glusterd/snapshot: Fail snap create even if one brick is down.
    
    In an n-way replication, where n >= 3, fail snapshot creation
    if even one brick is down.
    
    Also check for glusterd quorum, irrespective of the force option.
    
    Modified testcase tests/bugs/snapshot/bug-1090042.t because it
    tested the successful creation of a snapshot with the force
    command.
    
    Change-Id: I72666f8f1484bd1766b9d6799c20766e4547f6c5
    BUG: 1184344
    Signed-off-by: Avra Sengupta <asengupt>
    Reviewed-on: http://review.gluster.org/9470
    Reviewed-by: Rajesh Joseph <rjoseph>
    Reviewed-by: Atin Mukherjee <amukherj>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Kaushal M <kaushal>
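The commit's second point, enforcing glusterd quorum regardless of the force option, can be sketched as follows. Again, all names are hypothetical illustrations, not glusterd code.

```python
# Illustrative sketch: post-fix, glusterd quorum is checked even when
# snapshot create is invoked with force (hypothetical names).

def glusterd_quorum_met(peers_up: int, peers_total: int) -> bool:
    """Simple-majority quorum over glusterd peers."""
    return peers_up > peers_total // 2

def snap_create_allowed(peers_up: int, peers_total: int,
                        bricks_up: int, bricks_total: int,
                        force: bool = False) -> bool:
    # Glusterd quorum is enforced unconditionally; force no longer
    # bypasses it.
    if not glusterd_quorum_met(peers_up, peers_total):
        return False
    # Every brick must be up; force does not bypass this either.
    return bricks_up == bricks_total
```

For example, with only 1 of 3 glusterd peers up, the creation is refused even with `force=True`; likewise with all peers up but only 2 of 3 bricks up.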

Comment 8 Niels de Vos 2015-05-14 17:29:02 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.0, please open a new bug report.

glusterfs-3.7.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/10939
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
