Bug 801610

Summary: [FEAT] Brick status not available in gluster volume status.
Product: [Community] GlusterFS Reporter: Darrel O'Pry <darrel.opry>
Component: cli    Assignee: Kaushal <kaushal>
Status: CLOSED CURRENTRELEASE QA Contact: Vijaykumar Koppad <vkoppad>
Severity: low Docs Contact:
Priority: medium    
Version: 3.2.5    CC: bbandari, gluster-bugs, joe, psharma, rwheeler, vagarwal
Target Milestone: ---    Keywords: FutureFeature
Target Release: ---   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: glusterfs-3.5.0 Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of: Environment:
Last Closed: 2014-04-17 11:38:08 UTC Type: ---
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:

Description Darrel O'Pry 2012-03-08 23:58:31 UTC
I have a gluster server on which the drive for a brick is not mounted. If I run the command 'gluster volume info <volume>', it tells me the volume is started and lists all the bricks...

I expect to see some sort of status next to each brick, such as (ok) or (failed), to indicate the state of the brick.


If you want to reproduce my exact use case: create a two-node replicated gluster volume with one HD for the OS and one HD for gluster. Create the volume and add some files. Shut down one of the nodes and replace its gluster HD with a new, unformatted HD (we're doing an HD upgrade). Boot the node and run 'gluster volume info all'. It will show that all is well. You can run 'gluster peer status' from both nodes and everything looks okay. There is nothing to indicate that the brick on one of the nodes is not accessible or present.
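
A rough reproduction sketch of the above, assuming two hypothetical nodes (node1, node2), a hypothetical volume name testvol, and a dedicated data disk mounted at /export/brick on each node:

# on node1
gluster peer probe node2
gluster volume create testvol replica 2 node1:/export/brick node2:/export/brick
gluster volume start testvol
# mount the volume from a client and copy some files onto it

# power off node2, replace its data disk with a new unformatted one, boot it again

gluster volume info all    # still reports the volume as Started and lists both bricks
gluster peer status        # both peers appear connected
# nothing indicates that node2's brick directory is empty / no longer backed by the data disk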

Comment 1 Rajesh 2012-03-09 04:21:32 UTC
In the 3.3 mainline, one can always issue the "gluster volume status <volname> detail" command to see information about the back-end file system of each brick. If it is an ext2/3/4 or XFS file system, its details are shown (subject to the availability of the respective tools, such as xfsprogs). If you don't see the back-end file system info, then there's probably something wrong. This is not available in 3.2.5, though.
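
For example, with a hypothetical volume named testvol on a 3.3 or later installation:

gluster volume status testvol detail

# the back-end file system fields are only populated if the matching tools are
# installed, e.g. xfsprogs for XFS bricks or e2fsprogs for ext2/3/4 bricks
rpm -q xfsprogs e2fsprogs    # assumes an RPM-based distribution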

Comment 2 Amar Tumballi 2012-03-09 05:02:35 UTC
The issue here is that a new, empty directory is allowed as a backend in place of the already existing directory that holds the data.

We should ideally control this behavior. The fix is to make sure 'start' doesn't write the 'volume-id' xattr; instead, it is written only during 'create' and 'add-brick'. If the xattr is not found at 'start' time, don't create it, just exit. That way 'volume status' will report the brick as not available, which fixes the issue for the user.
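
A minimal sketch of what this means at the brick level, assuming a hypothetical brick path /export/brick1 (the volume-id lives in the trusted.glusterfs.volume-id extended attribute on the brick root):

# written at 'volume create' / 'add-brick' time on the brick directory
getfattr -n trusted.glusterfs.volume-id -e hex /export/brick1

# If the backing disk is not mounted, the bare mount point carries no such
# xattr; with the proposed fix 'volume start' no longer (re)creates it, the
# brick process exits, and 'gluster volume status' shows that brick as offline.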

Comment 3 Anand Avati 2012-03-18 08:09:32 UTC
CHANGE: http://review.gluster.com/2921 (mgmt/glusterd: don't create the brick path in 'volume start') merged in master by Anand Avati (avati)

Comment 4 pushpesh sharma 2012-06-01 07:25:31 UTC
As the description of the problem suggests, the details and state of each brick of a volume are now available as follows:

[root@dhcp201-121 ~]# gluster volume status test1 detail
Status of volume: test1
------------------------------------------------------------------------------
Brick                : Brick dhcp201-171.englab.pnq.redhat.com:/vol/test1
Port                 : 24009               
Online               : Y                   
Pid                  : 4385                
File System          : ext4                
Device               : /dev/mapper/vg_dhcp201171-lv_root
Mount Options        : rw                  
Inode Size           : 256                 
Disk Space Free      : 9.2GB               
Total Disk Space     : 10.4GB              
Inode Count          : 693600              
Free Inodes          : 664118              
------------------------------------------------------------------------------
Brick                : Brick dhcp201-147.englab.pnq.redhat.com:/vol/test1
Port                 : 24009               
Online               : Y                   
Pid                  : 1483                
File System          : ext4                
Device               : /dev/mapper/vg_dhcp201147-lv_root
Mount Options        : rw                  
Inode Size           : 256                 
Disk Space Free      : 9.1GB               
Total Disk Space     : 10.4GB              
Inode Count          : 693600              
Free Inodes          : 644646              
 
Although I am not very sure how to verify the suggestions given in comment #2 at the user level. If the use case tried above is enough to verify this, can we mark this issue as verified?

Comment 5 Vijaykumar Koppad 2012-06-08 06:09:08 UTC
The patch which was sent doesn't fix the issue.

Comment 6 Amar Tumballi 2012-06-12 05:29:26 UTC
Will be working on this for the next release. This is not a blocker, as this behavior has existed since the 3.1.x releases.

Comment 7 Niels de Vos 2014-04-17 11:38:08 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report.

glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user