Bug 886865
Summary: | Volume status detail of brick fails to display output if the brick is down. | |
---|---|---|---
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Vijaykumar Koppad <vkoppad>
Component: | glusterd | Assignee: | Kaushal <kaushal>
Status: | CLOSED ERRATA | QA Contact: | Ben Turner <bturner>
Severity: | high | Docs Contact: |
Priority: | medium | |
Version: | unspecified | CC: | bbandari, bturner, rhs-bugs, shaines, vbellur
Target Milestone: | --- | |
Target Release: | --- | |
Hardware: | x86_64 | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | 3.4.0.12rhs.beta3 | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2013-09-23 22:39:23 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description

Vijaykumar Koppad 2012-12-13 12:17:23 UTC
Per 03/05 email exchange w/ PM, targeting for Big Bend.

As of glusterfs-v3.4.0.12rhs.beta3, this doesn't happen anymore. Moving to ON_QA.

Verified on glusterfs-3.4.0.18rhs-1.el6rhs.x86_64:

```
[root@storage-qe08 ~]# gluster volume status
Status of volume: testvol
Gluster process                                           Port    Online  Pid
------------------------------------------------------------------------------
Brick storage-qe08.lab.eng.rdu2.redhat.com:/brick1        49152   Y       23369
Brick storage-qe09.lab.eng.rdu2.redhat.com:/brick1        49152   Y       14485
Brick storage-qe10.lab.eng.rdu2.redhat.com:/brick1        49152   Y       17768
Brick storage-qe11.lab.eng.rdu2.redhat.com:/brick1        49152   Y       14587
NFS Server on localhost                                   2049    Y       23382
Self-heal Daemon on localhost                             N/A     Y       23389
NFS Server on storage-qe09.lab.eng.rdu2.redhat.com        2049    Y       14497
Self-heal Daemon on storage-qe09.lab.eng.rdu2.redhat.com  N/A     Y       14505
NFS Server on storage-qe11.lab.eng.rdu2.redhat.com        2049    Y       14599
Self-heal Daemon on storage-qe11.lab.eng.rdu2.redhat.com  N/A     Y       14606
NFS Server on storage-qe10.lab.eng.rdu2.redhat.com        2049    Y       17780
Self-heal Daemon on storage-qe10.lab.eng.rdu2.redhat.com  N/A     Y       17788

There are no active volume tasks

[root@storage-qe08 ~]# gluster volume status testvol storage-qe08.lab.eng.rdu2.redhat.com:/brick1 detail
Status of volume: testvol
------------------------------------------------------------------------------
Brick                : Brick storage-qe08.lab.eng.rdu2.redhat.com:/brick1
Port                 : 49152
Online               : Y
Pid                  : 23369
File System          : xfs
Device               : /dev/mapper/TestVolume001-mybrick
Mount Options        : rw
Inode Size           : 512
Disk Space Free      : 99.9GB
Total Disk Space     : 100.0GB
Inode Count          : 52428800
Free Inodes          : 52428791

[root@storage-qe08 ~]# kill -9 23369

[root@storage-qe08 ~]# gluster volume status testvol storage-qe08.lab.eng.rdu2.redhat.com:/brick1 detail
Status of volume: testvol
------------------------------------------------------------------------------
Brick                : Brick storage-qe08.lab.eng.rdu2.redhat.com:/brick1
Port                 : N/A
Online               : N
Pid                  : 23369
File System          : xfs
Device               : /dev/mapper/TestVolume001-mybrick
Mount Options        : rw
Inode Size           : 512
Disk Space Free      : 99.9GB
Total Disk Space     : 100.0GB
Inode Count          : 52428800
Free Inodes          : 52428791
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2013-1262.html
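With the fix, the detail query reports brick status (Online: N, Port: N/A) even when the brick process is down, so the verification above can also be scripted. Below is a minimal sketch of such a check, assuming the volume name `testvol` and the brick path from this report (both are placeholders for your own deployment); it parses the plain-text `detail` output shown above, whose exact spacing may vary between glusterfs releases:

```bash
#!/bin/bash
# Minimal sketch: report whether a given brick is online, using the same
# "gluster volume status <vol> <brick> detail" command verified above.
# VOL and BRICK are example values taken from this bug report; adjust both.
VOL=testvol
BRICK=storage-qe08.lab.eng.rdu2.redhat.com:/brick1

# The detail output contains a line of the form "Online : Y" or "Online : N".
state=$(gluster volume status "$VOL" "$BRICK" detail |
        awk -F: '/^Online/ {gsub(/ /, "", $2); print $2}')

if [ "$state" = "Y" ]; then
    echo "Brick $BRICK is online"
else
    echo "Brick $BRICK is down (Online=$state)"
fi
```

Run before and after killing the brick process, this should print the online and down messages respectively, matching the verified behavior above.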