Bug 1083502 - [SNAPSHOT]: snapshot create when one of the brick is down should output the proper message
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: snapshot
Version: rhgs-3.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Joseph Elwin Fernandes
QA Contact: Rahul Hinduja
URL:
Whiteboard: SNAPSHOT
Depends On: 1089527
Blocks:
 
Reported: 2014-04-02 11:05 UTC by Rahul Hinduja
Modified: 2016-09-17 13:05 UTC
CC: 6 users

Fixed In Version: glusterfs-3.6.0-4.0.el6rhs
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-09-22 19:33:25 UTC
Embargoed:




Links
System ID: Red Hat Product Errata RHEA-2014:1278
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat Storage Server 3.0 bug fix and enhancement update
Last Updated: 2014-09-22 23:26:55 UTC

Description Rahul Hinduja 2014-04-02 11:05:29 UTC
Description of problem:
=======================

As far as I know, we decided that snapshot create should fail when any of the bricks is down, unless the force option is passed on the CLI. If the command is run with force, quorum is checked and the decision to create or fail the snapshot is based on that quorum.

But in the first place, when a brick is down and the snapshot creation fails, a proper message should be displayed, along with usage information pointing to the force option, as illustrated below.
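
For illustration, the intended behaviour would roughly be (a sketch based on the usage string quoted later in this report, not output from this build):

gluster snapshot create snap1 vol0          <-- should fail with a clear message while a brick is down
gluster snapshot create snap1 vol0 force    <-- should proceed only if quorum is satisfied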

Currently:
==========

When one of the brick processes in vol0 is offline, as shown below:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

[root@snapshot-09 ~]# gluster volume status vol0
Status of volume: vol0
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.42.220:/brick0/b0				49152	Y	14735
Brick 10.70.43.20:/brick0/b0				N/A	N	10685
Brick 10.70.43.186:/brick0/b0				49152	Y	1277
Brick 10.70.43.70:/brick0/b0				49152	Y	13938
NFS Server on localhost					2049	Y	14916
Self-heal Daemon on localhost				N/A	Y	14923
NFS Server on 10.70.43.20				2049	Y	10819
Self-heal Daemon on 10.70.43.20				N/A	Y	10826
NFS Server on 10.70.43.186				2049	Y	1423
Self-heal Daemon on 10.70.43.186			N/A	Y	1430
NFS Server on 10.70.43.70				2049	Y	14075
Self-heal Daemon on 10.70.43.70				N/A	Y	14082
 
Task Status of Volume vol0
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@snapshot-09 ~]# 


Creation of snapshot fails as expected:
+++++++++++++++++++++++++++++++++++++++

[root@snapshot-09 ~]# gluster snapshot create snap1 vol0
snapshot create: failed: Commit failed on 10.70.43.20. Please check log file for details.
Snapshot command failed
[root@snapshot-09 ~]#


But the output is ambiguous: it points at a commit failure rather than at the offline brick.

It could be something similar to the output below (open for discussion):
===============================================================

[root@snapshot-09 ~]# gluster snapshot create snap1 vol0
Can not create snapshot of a volume when bricks are offline. (If you are certain you need snapshot create, then confirm by using force.)
Usage: snapshot create <snapname> <volname(s)> [description <description>] [force]
[root@snapshot-09 ~]#

Version-Release number of selected component (if applicable):
=============================================================

glusterfs-3.4.1.7.snap.mar27.2014git-1.el6.x86_64


How reproducible:
=================
1/1


Steps to Reproduce:
===================
1. Take one or more brick processes of a volume offline (see the sketch below)
2. Create a snapshot of the volume.
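
For example, one way to take a brick offline is to kill its brick process (a sketch; the PID comes from the volume status output, and any other method of stopping a brick works as well):

gluster volume status vol0          # note the PID of one brick process
kill <brick-pid>                    # take that brick offline
gluster snapshot create snap1 vol0  # observe the failure message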


Actual results:
===============


[root@snapshot-09 ~]# gluster snapshot create snap1 vol0
snapshot create: failed: Commit failed on 10.70.43.20. Please check log file for details.
Snapshot command failed
[root@snapshot-09 ~]#


Expected results:
=================

Something like below:


[root@snapshot-09 ~]# gluster snapshot create snap1 vol0
Can not create snapshot of a volume when bricks are offline. (If you are certain you need snapshot create, then confirm by using force.)
Usage: snapshot create <snapname> <volname(s)> [description <description>] [force]
[root@snapshot-09 ~]#

Comment 3 Nagaprasad Sathyanarayana 2014-04-21 06:18:13 UTC
Marking snapshot BZs to RHS 3.0.

Comment 4 Joseph Elwin Fernandes 2014-04-22 03:56:10 UTC
Fixed with http://review.gluster.org/#/c/7520/

Comment 5 Joseph Elwin Fernandes 2014-04-22 13:23:42 UTC
This bug depends on bug 1089527, as the fix for both is the same. It is not a duplicate, though, since the two deal with different issues.

Comment 6 Nagaprasad Sathyanarayana 2014-05-19 10:56:32 UTC
Setting flags required to add BZs to RHS 3.0 Errata

Comment 7 senaik 2014-05-20 07:01:33 UTC
Version: glusterfs-server-3.6.0.3-1
========

Creating a snapshot when a brick is down gives the following message:

gluster v status vol1
Status of volume: vol1
Gluster process						Port	Online	Pid
------------------------------------------------------------------------------
Brick 10.70.44.54:/brick1/b1				N/A	N	15876
Brick 10.70.44.54:/brick5/b5				49155	Y	15887
Brick 10.70.44.55:/brick1/b1				49159	Y	11164
Brick 10.70.44.55:/brick5/b5				49160	Y	11175
NFS Server on localhost					2049	Y	16439
Self-heal Daemon on localhost				N/A	Y	16446
NFS Server on 10.70.44.55				2049	Y	11659
Self-heal Daemon on 10.70.44.55				N/A	Y	11666
 
Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks

 
[root@snapshot01 ~]# gluster snapshot create snap_new vol1
snapshot create: failed: brick 10.70.44.54:/brick1/b1 is not started. Please start the stopped brick and then issue snapshot create command or use [force] option in snapshot create to override this behavior.
Snapshot command failed

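For completeness, the override pointed to by the new message would be invoked as below (not exercised as part of this verification; whether it proceeds still depends on quorum):

gluster snapshot create snap_new vol1 force
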

Marking the bug as 'Verified'

Comment 9 errata-xmlrpc 2014-09-22 19:33:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html

