Description of problem:
=======================
As per my knowledge, we decided to fail snapshot create when any of the bricks is down, unless force is applied via the CLI. If the CLI is executed with force, then we check the quorum and decide whether to create or fail the snapshot based on quorum. But in the first place, when a brick is down and we fail the snapshot, a proper message should be logged, along with the usage hint suggesting force.

Currently:
==========
When one of the brick processes in vol0 is offline, as shown below:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

[root@snapshot-09 ~]# gluster volume status vol0
Status of volume: vol0
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.42.220:/brick0/b0                   49152   Y       14735
Brick 10.70.43.20:/brick0/b0                    N/A     N       10685
Brick 10.70.43.186:/brick0/b0                   49152   Y       1277
Brick 10.70.43.70:/brick0/b0                    49152   Y       13938
NFS Server on localhost                         2049    Y       14916
Self-heal Daemon on localhost                   N/A     Y       14923
NFS Server on 10.70.43.20                       2049    Y       10819
Self-heal Daemon on 10.70.43.20                 N/A     Y       10826
NFS Server on 10.70.43.186                      2049    Y       1423
Self-heal Daemon on 10.70.43.186                N/A     Y       1430
NFS Server on 10.70.43.70                       2049    Y       14075
Self-heal Daemon on 10.70.43.70                 N/A     Y       14082

Task Status of Volume vol0
------------------------------------------------------------------------------
There are no active volume tasks

[root@snapshot-09 ~]#

Creation of snapshot fails as expected:
+++++++++++++++++++++++++++++++++++++++
[root@snapshot-09 ~]# gluster snapshot create snap1 vol0
snapshot create: failed: Commit failed on 10.70.43.20. Please check log file for details.
Snapshot command failed
[root@snapshot-09 ~]#

But the output is ambiguous. It could be something similar to the output below (open for discussion):
===============================================================
[root@snapshot-09 ~]# gluster snapshot create snap1 vol0
Cannot create snapshot of a volume when bricks are offline.
(If you are certain you need snapshot create, then confirm by using force.)
Usage: snapshot create <snapname> <volname(s)> [description <description>] [force]
[root@snapshot-09 ~]#

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.4.1.7.snap.mar27.2014git-1.el6.x86_64

How reproducible:
=================
1/1

Steps to Reproduce:
===================
1. Take brick(s) of a volume offline.
2. Create a snapshot of the volume.

Actual results:
===============
[root@snapshot-09 ~]# gluster snapshot create snap1 vol0
snapshot create: failed: Commit failed on 10.70.43.20. Please check log file for details.
Snapshot command failed
[root@snapshot-09 ~]#

Expected results:
=================
Something like the below:

[root@snapshot-09 ~]# gluster snapshot create snap1 vol0
Cannot create snapshot of a volume when bricks are offline.
(If you are certain you need snapshot create, then confirm by using force.)
Usage: snapshot create <snapname> <volname(s)> [description <description>] [force]
[root@snapshot-09 ~]#
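For reference, a minimal shell sketch of the reproduction steps above. This is a hedged sketch, not an official procedure: the volume name, brick path, and the awk field positions are assumptions taken from the status output in this report and may differ on another setup.

# Reproduction sketch (assumes volume vol0 with a brick at /brick0/b0,
# as in the status output above).

# Find the PID of a brick process for vol0; the PID is the last column
# of the matching "Brick" line in the status output.
BRICK_PID=$(gluster volume status vol0 | awk '/\/brick0\/b0/ {print $NF; exit}')

# Kill that brick process so the brick shows as offline (Online = N).
# NOTE: this only works for a brick hosted on the node where this runs.
kill -9 "$BRICK_PID"

# Confirm the brick is now offline.
gluster volume status vol0

# Snapshot create should now fail; the ask of this bug is that the
# failure message clearly suggests the [force] option.
gluster snapshot create snap1 vol0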
Marking snapshot BZs to RHS 3.0.
Fixed with http://review.gluster.org/#/c/7520/
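Until the fix is consumed downstream, a possible pre-check sketch in shell. This is purely illustrative and not part of the fix itself; it assumes the "gluster volume status" column layout shown in this report, where an offline brick's Online column reads "N".

# Pre-check sketch: refuse to snapshot if any brick of the volume is offline.
VOL=vol0
if gluster volume status "$VOL" | awk '/^Brick/ && $(NF-1) == "N" {found=1} END {exit !found}'; then
    echo "Cannot create snapshot of $VOL: one or more bricks are offline." >&2
    echo "Use 'gluster snapshot create <snapname> $VOL force' to override." >&2
else
    gluster snapshot create snap1 "$VOL"
fi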
This bug depends on bug 1089527, as the fix for both is the same. However, it is not a duplicate, as the two bugs deal with different issues.
Setting flags required to add BZs to RHS 3.0 Errata
Version: glusterfs-server-3.6.0.3-1
========
Creating a snapshot when a brick is down now gives the following message:

gluster v status vol1
Status of volume: vol1
Gluster process                                 Port    Online  Pid
------------------------------------------------------------------------------
Brick 10.70.44.54:/brick1/b1                    N/A     N       15876
Brick 10.70.44.54:/brick5/b5                    49155   Y       15887
Brick 10.70.44.55:/brick1/b1                    49159   Y       11164
Brick 10.70.44.55:/brick5/b5                    49160   Y       11175
NFS Server on localhost                         2049    Y       16439
Self-heal Daemon on localhost                   N/A     Y       16446
NFS Server on 10.70.44.55                       2049    Y       11659
Self-heal Daemon on 10.70.44.55                 N/A     Y       11666

Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks

[root@snapshot01 ~]# gluster snapshot create snap_new vol1
snapshot create: failed: brick 10.70.44.54:/brick1/b1 is not started. Please start the stopped brick and then issue snapshot create command or use [force] option in snapshot create to override this behavior.
Snapshot command failed

Marking the bug as 'Verified'.
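For completeness, the override path that the new message points at, as a usage sketch (the snapshot and volume names are the ones from the verified output above):

# The [force] keyword overrides the offline-brick check, as suggested
# by the error message above.
gluster snapshot create snap_new vol1 force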
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html