Description of problem:
The CLI is losing information about attached remote bricks: the port suddenly shows 0. It has also lost the information that rebalancing is running and shows a status on only a few of the nodes.

Status of volume: content1
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-001:/vol/content1
Port                 : 0          <<<<<<< Issue
Online               : Y
Pid                  : 1810
File System          : xfs
Device               : /dev/mapper/vg_hqdfs001-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 202.9GB
Total Disk Space     : 959.6GB
Inode Count          : 886386816
Free Inodes          : 851724430
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-002:/vol/content1
Port                 : 49153
Online               : Y
Pid                  : 1840
File System          : xfs
Device               : /dev/mapper/vg_hqdfs002-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 204.7GB
Total Disk Space     : 959.6GB
Inode Count          : 893892352
Free Inodes          : 859568425
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-003:/vol/content1
Port                 : 49153
Online               : Y
Pid                  : 2070
File System          : xfs
Device               : /dev/mapper/vg_hqdfs003-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 179.3GB
Total Disk Space     : 959.6GB
Inode Count          : 786525456
Free Inodes          : 752185359
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-004:/vol/content1
Port                 : 49153
Online               : Y
Pid                  : 4872
File System          : xfs
Device               : /dev/mapper/vg_hqdfs004-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 264.0GB
Total Disk Space     : 959.6GB
Inode Count          : 1006632960
Free Inodes          : 973513263
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-005:/vol/content1
Port                 : 0          <<<<<<< Issue
Online               : Y
Pid                  : 1890
File System          : xfs
Device               : /dev/mapper/vg_hqdfs005-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 505.0GB
Total Disk Space     : 959.6GB
Inode Count          : 1006632960
Free Inodes          : 985217518
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-006:/vol/content1
Port                 : 0          <<<<<<< Issue
Online               : Y
Pid                  : 1897
File System          : xfs
Device               : /dev/mapper/vg_hqdfs006-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 593.4GB
Total Disk Space     : 959.6GB
Inode Count          : 1006632960
Free Inodes          : 985238778

Version-Release number of selected component (if applicable):
3.4.0

How reproducible:
It appears within a few hours after I start the volume.

Steps to Reproduce:
1.
2.
3.

Actual results:
See above.

Expected results:
3.x behavior, which always showed the port.

Additional info:
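As a quick way to spot the affected bricks, the status output above can be filtered for entries reporting port 0. This is only a minimal sketch that scans a pasted excerpt (the here-doc stands in for the real `gluster volume status content1 detail` output, which would normally be piped in):

```shell
# Print the brick path for every brick whose reported port is 0.
# Feed it the output of: gluster volume status content1 detail
awk '/^Brick /  {brick=$NF}              # remember the last-seen brick path
     /^Port /   {if ($NF == "0") print brick}' <<'EOF'
Brick                : Brick hq-dfs-001:/vol/content1
Port                 : 0
Brick                : Brick hq-dfs-002:/vol/content1
Port                 : 49153
EOF
```

With the sample excerpt, only `hq-dfs-001:/vol/content1` is printed.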
Created attachment 792485 [details] glusterd config
Created attachment 792486 [details] brick-log
Created attachment 792487 [details] logs
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases: the last two releases before 3.7 are still maintained, which at the moment are 3.6 and 3.5. This bug has been filed against the 3.4 release and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. If updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs". If there is no response by the end of the month, this bug will get closed automatically.
GlusterFS 3.4.x has reached end-of-life. If this bug still exists in a later release, please reopen it and change the version, or open a new bug.