Bug 801610
| Summary: | [FEAT] Brick status not available in gluster volume status. | | |
| --- | --- | --- | --- |
| Product: | [Community] GlusterFS | Reporter: | Darrel O'Pry <darrel.opry> |
| Component: | cli | Assignee: | Kaushal <kaushal> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | Vijaykumar Koppad <vkoppad> |
| Severity: | low | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.2.5 | CC: | bbandari, gluster-bugs, joe, psharma, rwheeler, vagarwal |
| Target Milestone: | --- | Keywords: | FutureFeature |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.5.0 | Doc Type: | Enhancement |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2014-04-17 11:38:08 UTC | Type: | --- |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Darrel O'Pry
2012-03-08 23:58:31 UTC
In the 3.3 mainline, one can always issue the `gluster volume status <volname> detail` command to see information about the back-end file system of each brick. If it is an ext2/3/4 or xfs volume, its details are shown (subject to the availability of the respective tools, such as xfsprogs). If you don't see the back-end fs info, then there is probably something wrong. This is not available in 3.2.5, though.

The issue here is that a new directory is allowed as a backend instead of an already existing directory with data. We should ideally control this behavior. The fix is to make sure 'start' doesn't write the 'volume-id' xattr; instead it is written only in 'create' and 'add-brick'. If it is not found by 'start' time, don't even create it, but exit. That way 'volume status' will say the brick is not available, which fixes the issue for the user.

CHANGE: http://review.gluster.com/2921 (mgmt/glusterd: don't create the brick path in 'volume start') merged in master by Anand Avati (avati)

As the description of the problem suggests, the details and state of each brick of a volume are now available as follows:

```
[root@dhcp201-121 ~]# gluster volume status test1 detail
Status of volume: test1
------------------------------------------------------------------------------
Brick            : Brick dhcp201-171.englab.pnq.redhat.com:/vol/test1
Port             : 24009
Online           : Y
Pid              : 4385
File System      : ext4
Device           : /dev/mapper/vg_dhcp201171-lv_root
Mount Options    : rw
Inode Size       : 256
Disk Space Free  : 9.2GB
Total Disk Space : 10.4GB
Inode Count      : 693600
Free Inodes      : 664118
------------------------------------------------------------------------------
Brick            : Brick dhcp201-147.englab.pnq.redhat.com:/vol/test1
Port             : 24009
Online           : Y
Pid              : 1483
File System      : ext4
Device           : /dev/mapper/vg_dhcp201147-lv_root
Mount Options    : rw
Inode Size       : 256
Disk Space Free  : 9.1GB
Total Disk Space : 10.4GB
Inode Count      : 693600
Free Inodes      : 644646
```

Although I am not very sure how to verify the suggestions given in comment #2 at the user level. If just the use case tried above can verify this, then can we mark this issue as verified?

The patch which was sent doesn't fix the issue. Will be working on this for the next release. This is not blocker behavior, as this issue has existed since the 3.1.x releases.

This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report. glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user
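The fix described above (stamp the brick only in 'create'/'add-brick', and have 'start' refuse a brick path that lacks the stamp) can be sketched roughly as follows. This is a simplified stand-alone simulation, not actual glusterd code: a plain marker file stands in for the `trusted.glusterfs.volume-id` extended attribute, and the function names are hypothetical.

```python
import os
import uuid

# Stand-in for the trusted.glusterfs.volume-id xattr (simulation only).
VOLUME_ID_MARKER = ".volume-id"

def create_brick(path: str, volume_id: str) -> None:
    """'volume create' / 'add-brick': create the brick path and stamp it."""
    os.makedirs(path, exist_ok=True)
    with open(os.path.join(path, VOLUME_ID_MARKER), "w") as f:
        f.write(volume_id)

def start_brick(path: str, volume_id: str) -> bool:
    """'volume start': never create the path or the stamp. If either is
    missing, refuse to start, so 'volume status' reports the brick as
    not available."""
    marker = os.path.join(path, VOLUME_ID_MARKER)
    if not os.path.isdir(path) or not os.path.exists(marker):
        return False  # brick not available
    with open(marker) as f:
        return f.read() == volume_id  # also reject a stale/foreign brick

# Usage sketch:
vol_id = str(uuid.uuid4())
create_brick("/tmp/brick-demo", vol_id)
print(start_brick("/tmp/brick-demo", vol_id))     # True: stamped at create time
print(start_brick("/tmp/brick-missing", vol_id))  # False: path never created
```

The point of the design is that 'start' is purely a consumer of the stamp, never a producer, so an accidentally empty or wrong directory can no longer be silently adopted as a brick.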