I have a gluster server on which the drive for a brick is not mounted. If I run "gluster volume info <volume>", it tells me the volume is started and lists all the bricks. I expect to see some sort of status next to each brick, such as (ok), (failed), etc., to indicate the state of the brick.

To reproduce my exact use case: create a two-node replicated gluster volume with one disk for the OS and one disk for gluster. Create the volume and add some files. Shut down one of the nodes and replace its gluster disk with a new, unformatted disk (we're doing a disk upgrade). Boot that node and run "gluster volume info all"; it will show that all is well. You can run peer status from both nodes and everything looks okay. There is nothing to indicate that the brick on one of the nodes is not accessible or present.
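A rough reproduction sketch from the CLI might look like the following (hostnames, volume name and brick paths are examples only, not taken from the report):

    # On node1, with the dedicated gluster disk mounted at /export on both nodes:
    gluster peer probe node2
    gluster volume create repvol replica 2 node1:/export/brick node2:/export/brick
    gluster volume start repvol
    # ... write some files through a client mount ...
    # Shut down node2, swap its gluster disk for a new, unformatted one, boot it.
    gluster volume info all      # still reports the volume as Started, all bricks listed
    gluster peer status          # peers look healthy; nothing flags the missing brick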
In the 3.3 mainline, one can always issue the "gluster volume status <volname> detail" command to see information about the back-end file system of each brick. If it is an ext2/3/4 or xfs volume, its details are shown (subject to availability of the respective tools, e.g. xfsprogs). If you don't see the back-end fs info, then there's probably something wrong. This is not available in 3.2.5, though.
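As a rough illustration (the volume name is just an example), one way to spot this from the CLI is to query the detail output and check that the back-end fields are present for every brick:

    gluster volume status myvol detail
    # Narrow the output to the relevant per-brick fields; a brick whose back-end
    # file system is missing will not report File System / Device details:
    gluster volume status myvol detail | grep -E 'Brick|Online|File System|Device'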
The issue here is that we allow a new, empty directory as a backend instead of an already existing directory with data. We should control this behavior. The fix is to make sure 'start' doesn't write the 'volume-id' xattr; it is written only in 'create' and 'add-brick'. So if the xattr is not found at 'start' time, don't create it, just exit. That way 'volume status' will say the brick is not available, which fixes the issue for the user.
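For reference, the marker in question can be inspected from the shell roughly as follows (the brick path is hypothetical; trusted.glusterfs.volume-id is the xattr referred to above):

    # Dump the volume-id xattr on the brick directory. On a freshly swapped,
    # unformatted disk this will be absent, so with the fix 'start' refuses the
    # brick instead of recreating the path.
    getfattr -n trusted.glusterfs.volume-id -e hex /export/brick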
CHANGE: http://review.gluster.com/2921 (mgmt/glusterd: don't create the brick path in 'volume start') merged in master by Anand Avati (avati)
As the description of the problem suggests, the details and state of each brick of the volume are now available as follows:

[root@dhcp201-121 ~]# gluster volume status test1 detail
Status of volume: test1
------------------------------------------------------------------------------
Brick             : Brick dhcp201-171.englab.pnq.redhat.com:/vol/test1
Port              : 24009
Online            : Y
Pid               : 4385
File System       : ext4
Device            : /dev/mapper/vg_dhcp201171-lv_root
Mount Options     : rw
Inode Size        : 256
Disk Space Free   : 9.2GB
Total Disk Space  : 10.4GB
Inode Count       : 693600
Free Inodes       : 664118
------------------------------------------------------------------------------
Brick             : Brick dhcp201-147.englab.pnq.redhat.com:/vol/test1
Port              : 24009
Online            : Y
Pid               : 1483
File System       : ext4
Device            : /dev/mapper/vg_dhcp201147-lv_root
Mount Options     : rw
Inode Size        : 256
Disk Space Free   : 9.1GB
Total Disk Space  : 10.4GB
Inode Count       : 693600
Free Inodes       : 644646

However, I am not very sure how to verify the suggestions given in comment#2 at the user level. If the use case tried above is enough to verify this, can we mark this issue as verified?
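One possible user-level check, assuming the fix described above (paths and volume name taken from the output above; the steps themselves are only a sketch): take away a brick's backing store on one node, restart the brick, and confirm it is reported as offline instead of silently looking healthy:

    # On one node, unmount the brick's backing file system.
    umount /vol
    # Restart the brick process; with the fix, glusterd should refuse to start a
    # brick whose trusted.glusterfs.volume-id xattr is missing instead of
    # recreating the path on the root file system.
    gluster volume start test1 force
    # The affected brick should now show Online: N and no back-end fs details.
    gluster volume status test1 detail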
The patch which was sent doesn't fix the issue.
Will be working on this for the next release. This is not a blocker, as this issue has existed since the 3.1.x releases.
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.5.0, please reopen this bug report. glusterfs-3.5.0 has been announced on the Gluster Developers mailing list [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/6137
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user