Bug 886865 - Volume status detail of brick fails to display output if the brick is down.
Summary: Volume status detail of brick fails to display output if the brick is down.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: unspecified
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Kaushal
QA Contact: Ben Turner
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-12-13 12:17 UTC by Vijaykumar Koppad
Modified: 2014-08-25 00:50 UTC
CC: 5 users

Fixed In Version: 3.4.0.12rhs.beta3
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-09-23 22:39:23 UTC
Embargoed:


Attachments:

Description Vijaykumar Koppad 2012-12-13 12:17:23 UTC
Description of problem: If a brick is down, gluster volume status <VOLNAME> <BRICK> detail does not display any output for that brick.


Version-Release number of selected component (if applicable):
rpm -qa | grep gluster
glusterfs-rdma-3.4.0qa4-1.el6rhs.x86_64
glusterfs-server-3.4.0qa4-1.el6rhs.x86_64
glusterfs-fuse-3.4.0qa4-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0qa4-1.el6rhs.x86_64
glusterfs-debuginfo-3.4.0qa4-1.el6rhs.x86_64
glusterfs-devel-3.4.0qa4-1.el6rhs.x86_64
glusterfs-3.4.0qa4-1.el6rhs.x86_64



How reproducible: Consistently 


Steps to Reproduce:
1. Create a volume.
2. Kill one of the brick processes.
3. Get the volume status with gluster volume status <VOLNAME> <BRICK> detail, where <BRICK> is the killed brick (see the sketch below).
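
The steps above can be scripted end to end. The following is a minimal reproduction sketch, not the reporter's exact setup: the volume name (testvol), hostnames (node1, node2), and brick path (/bricks/b1) are assumptions chosen for illustration.

# Reproduction sketch; testvol, node1/node2, and /bricks/b1 are
# hypothetical names, not the reporter's environment.
gluster volume create testvol replica 2 node1:/bricks/b1 node2:/bricks/b1
gluster volume start testvol

# Note the Pid column for node1's brick in the status output, then
# kill that process to simulate the brick going down.
gluster volume status testvol
kill -9 <PID>

# On the affected build (3.4.0qa4) this printed nothing at all:
gluster volume status testvol node1:/bricks/b1 detail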

Actual results: The command fails to display any output.


Expected results: The command should display the brick's detail output as usual, with the Online field showing N.


Additional info:

Comment 2 Scott Haines 2013-03-08 21:02:26 UTC
Per 03/05 email exchange w/ PM, targeting for Big Bend.

Comment 3 Kaushal 2013-07-10 10:13:52 UTC
As of glusterfs-v3.4.0.12rhs.beta3, this doesn't happen anymore. Moving to ON_QA.

Comment 4 Ben Turner 2013-08-13 19:11:30 UTC
Verified on glusterfs-3.4.0.18rhs-1.el6rhs.x86_64:

[root@storage-qe08 ~]# gluster volume status
Status of volume: testvol
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick storage-qe08.lab.eng.rdu2.redhat.com:/brick1      49152   Y       23369
Brick storage-qe09.lab.eng.rdu2.redhat.com:/brick1      49152   Y       14485
Brick storage-qe10.lab.eng.rdu2.redhat.com:/brick1      49152   Y       17768
Brick storage-qe11.lab.eng.rdu2.redhat.com:/brick1      49152   Y       14587
NFS Server on localhost                                 2049    Y       23382
Self-heal Daemon on localhost                           N/A     Y       23389
NFS Server on storage-qe09.lab.eng.rdu2.redhat.com      2049    Y       14497
Self-heal Daemon on storage-qe09.lab.eng.rdu2.redhat.com N/A   Y       14505
NFS Server on storage-qe11.lab.eng.rdu2.redhat.com      2049    Y       14599
Self-heal Daemon on storage-qe11.lab.eng.rdu2.redhat.com N/A   Y       14606
NFS Server on storage-qe10.lab.eng.rdu2.redhat.com      2049    Y       17780
Self-heal Daemon on storage-qe10.lab.eng.rdu2.redhat.com N/A   Y       17788
 
There are no active volume tasks
[root@storage-qe08 ~]# gluster volume status testvol storage-qe08.lab.eng.rdu2.redhat.com:/brick1 detail
Status of volume: testvol
------------------------------------------------------------------------------
Brick                : Brick storage-qe08.lab.eng.rdu2.redhat.com:/brick1
Port                 : 49152               
Online               : Y                   
Pid                  : 23369               
File System          : xfs                 
Device               : /dev/mapper/TestVolume001-mybrick
Mount Options        : rw                  
Inode Size           : 512                 
Disk Space Free      : 99.9GB              
Total Disk Space     : 100.0GB             
Inode Count          : 52428800            
Free Inodes          : 52428791            
 
[root@storage-qe08 ~]# kill -9 23369
[root@storage-qe08 ~]# gluster volume status testvol storage-qe08.lab.eng.rdu2.redhat.com:/brick1 detail
Status of volume: testvol
------------------------------------------------------------------------------
Brick                : Brick storage-qe08.lab.eng.rdu2.redhat.com:/brick1
Port                 : N/A                 
Online               : N                   
Pid                  : 23369               
File System          : xfs                 
Device               : /dev/mapper/TestVolume001-mybrick
Mount Options        : rw                  
Inode Size           : 512                 
Disk Space Free      : 99.9GB              
Total Disk Space     : 100.0GB             
Inode Count          : 52428800            
Free Inodes          : 52428791
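
The verification above can also be expressed as a small script for regression runs. A hedged sketch follows, reusing the volume and brick names from the transcript above; it checks the two behaviours this comment demonstrates, namely that the command still produces output for a downed brick and that the Online field reads N.

# Verification sketch; reuses the names from the transcript above.
out=$(gluster volume status testvol storage-qe08.lab.eng.rdu2.redhat.com:/brick1 detail)
if [ -z "$out" ]; then
    echo "FAIL: no output for downed brick (bug 886865 behaviour)"
elif echo "$out" | grep -Eq '^Online[[:space:]]*:[[:space:]]*N'; then
    echo "PASS: downed brick reported offline (Online: N)"
fi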

Comment 5 Scott Haines 2013-09-23 22:39:23 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. 

For information on the advisory, and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
