Description of problem:
If a brick is down, "gluster volume status <VOLNAME> <BRICK> detail" doesn't give any output.

Version-Release number of selected component (if applicable):
rpm -qa | grep gluster
glusterfs-rdma-3.4.0qa4-1.el6rhs.x86_64
glusterfs-server-3.4.0qa4-1.el6rhs.x86_64
glusterfs-fuse-3.4.0qa4-1.el6rhs.x86_64
glusterfs-geo-replication-3.4.0qa4-1.el6rhs.x86_64
glusterfs-debuginfo-3.4.0qa4-1.el6rhs.x86_64
glusterfs-devel-3.4.0qa4-1.el6rhs.x86_64
glusterfs-3.4.0qa4-1.el6rhs.x86_64

How reproducible:
Consistently

Steps to Reproduce:
1. Create a volume.
2. Kill one of the bricks.
3. Get the volume status with: gluster volume status <VOLNAME> <BRICK> detail
4. <BRICK> should be the killed brick.

Actual results:
The command fails to display any output.

Expected results:
It should display the detailed brick status as usual.

Additional info:
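A minimal shell sketch of the reproduction steps, assuming a two-brick distribute volume; the volume name, hostnames, and brick paths below are placeholders, not taken from the original report:

    # Create and start a test volume (hostnames and brick paths are hypothetical)
    gluster volume create testvol server1:/bricks/brick1 server2:/bricks/brick1
    gluster volume start testvol

    # Note the Pid of one brick process from the status table, then kill it
    # to simulate the brick going down
    gluster volume status testvol
    kill -9 <BRICK_PID>

    # Query detailed status for the killed brick; on glusterfs-3.4.0qa4 this
    # is the command that returned no output
    gluster volume status testvol server1:/bricks/brick1 detail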
Per 03/05 email exchange w/ PM, targeting for Big Bend.
As of glusterfs-v3.4.0.12rhs.beta3, this doesn't happen anymore. Moving to ON_QA.
Verified on glusterfs-3.4.0.18rhs-1.el6rhs.x86_64:

[root@storage-qe08 ~]# gluster volume status
Status of volume: testvol
Gluster process                                            Port    Online  Pid
------------------------------------------------------------------------------
Brick storage-qe08.lab.eng.rdu2.redhat.com:/brick1         49152   Y       23369
Brick storage-qe09.lab.eng.rdu2.redhat.com:/brick1         49152   Y       14485
Brick storage-qe10.lab.eng.rdu2.redhat.com:/brick1         49152   Y       17768
Brick storage-qe11.lab.eng.rdu2.redhat.com:/brick1         49152   Y       14587
NFS Server on localhost                                    2049    Y       23382
Self-heal Daemon on localhost                              N/A     Y       23389
NFS Server on storage-qe09.lab.eng.rdu2.redhat.com         2049    Y       14497
Self-heal Daemon on storage-qe09.lab.eng.rdu2.redhat.com   N/A     Y       14505
NFS Server on storage-qe11.lab.eng.rdu2.redhat.com         2049    Y       14599
Self-heal Daemon on storage-qe11.lab.eng.rdu2.redhat.com   N/A     Y       14606
NFS Server on storage-qe10.lab.eng.rdu2.redhat.com         2049    Y       17780
Self-heal Daemon on storage-qe10.lab.eng.rdu2.redhat.com   N/A     Y       17788

There are no active volume tasks

[root@storage-qe08 ~]# gluster volume status testvol storage-qe08.lab.eng.rdu2.redhat.com:/brick1 detail
Status of volume: testvol
------------------------------------------------------------------------------
Brick                : Brick storage-qe08.lab.eng.rdu2.redhat.com:/brick1
Port                 : 49152
Online               : Y
Pid                  : 23369
File System          : xfs
Device               : /dev/mapper/TestVolume001-mybrick
Mount Options        : rw
Inode Size           : 512
Disk Space Free      : 99.9GB
Total Disk Space     : 100.0GB
Inode Count          : 52428800
Free Inodes          : 52428791

[root@storage-qe08 ~]# kill -9 23369
[root@storage-qe08 ~]# gluster volume status testvol storage-qe08.lab.eng.rdu2.redhat.com:/brick1 detail
Status of volume: testvol
------------------------------------------------------------------------------
Brick                : Brick storage-qe08.lab.eng.rdu2.redhat.com:/brick1
Port                 : N/A
Online               : N
Pid                  : 23369
File System          : xfs
Device               : /dev/mapper/TestVolume001-mybrick
Mount Options        : rw
Inode Size           : 512
Disk Space Free      : 99.9GB
Total Disk Space     : 100.0GB
Inode Count          : 52428800
Free Inodes          : 52428791
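A small sketch of how this verification could be scripted for regression checks; it reuses the volume, host, and brick path from the output above and assumes the "Online : N" field format shown there:

    # After killing the brick process, confirm that the detail query still
    # produces output and reports the brick as offline
    BRICK=storage-qe08.lab.eng.rdu2.redhat.com:/brick1
    gluster volume status testvol $BRICK detail | grep -E 'Online|Port' \
        || echo "BUG: no output for down brick"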
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHBA-2013-1262.html