Bug 1003216 - CLI glitch: loses info in volume status
Status: CLOSED EOL
Product: GlusterFS
Classification: Community
Component: cli
Version: 3.4.0
Hardware: x86_64 Linux
Priority: unspecified, Severity: high
Assigned To: bugs@gluster.org
Depends On:
Blocks:
Reported: 2013-08-31 18:38 EDT by Bjoern Teipel
Modified: 2015-10-07 08:20 EDT (History)
2 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-10-07 08:20:16 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments
glusterd config (44.75 KB, application/x-zip-compressed)
2013-08-31 18:52 EDT, Bjoern Teipel
brick-log (15.32 MB, application/x-zip-compressed)
2013-08-31 18:58 EDT, Bjoern Teipel
logs (10.20 MB, application/x-zip-compressed)
2013-08-31 19:09 EDT, Bjoern Teipel

Description Bjoern Teipel 2013-08-31 18:38:04 EDT
Description of problem:

The CLI is losing information about attached remote bricks; the port is suddenly reported as 0.
It also loses the information that rebalancing is running, and shows a status for only a few of the nodes.

Status of volume: content1
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-001:/vol/content1
Port                 : 0 <<<<<<< Issue 
Online               : Y
Pid                  : 1810
File System          : xfs
Device               : /dev/mapper/vg_hqdfs001-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 202.9GB
Total Disk Space     : 959.6GB
Inode Count          : 886386816
Free Inodes          : 851724430
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-002:/vol/content1
Port                 : 49153
Online               : Y
Pid                  : 1840
File System          : xfs
Device               : /dev/mapper/vg_hqdfs002-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 204.7GB
Total Disk Space     : 959.6GB
Inode Count          : 893892352
Free Inodes          : 859568425
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-003:/vol/content1
Port                 : 49153
Online               : Y
Pid                  : 2070
File System          : xfs
Device               : /dev/mapper/vg_hqdfs003-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 179.3GB
Total Disk Space     : 959.6GB
Inode Count          : 786525456
Free Inodes          : 752185359
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-004:/vol/content1
Port                 : 49153
Online               : Y
Pid                  : 4872
File System          : xfs
Device               : /dev/mapper/vg_hqdfs004-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 264.0GB
Total Disk Space     : 959.6GB
Inode Count          : 1006632960
Free Inodes          : 973513263
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-005:/vol/content1
Port                 : 0 <<<<<<< Issue
Online               : Y
Pid                  : 1890
File System          : xfs
Device               : /dev/mapper/vg_hqdfs005-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 505.0GB
Total Disk Space     : 959.6GB
Inode Count          : 1006632960
Free Inodes          : 985217518
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-006:/vol/content1
Port                 : 0 <<<<<<< Issue
Online               : Y
Pid                  : 1897
File System          : xfs
Device               : /dev/mapper/vg_hqdfs006-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 593.4GB
Total Disk Space     : 959.6GB
Inode Count          : 1006632960
Free Inodes          : 985238778

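For anyone triaging similar reports, affected bricks can be flagged mechanically by scanning `gluster volume status <VOLNAME> detail` output for Port 0 entries. A minimal sketch (the parsing assumes the per-brick layout shown above; `find_zero_port_bricks` is a hypothetical helper, not part of the Gluster CLI):

```python
import re

def find_zero_port_bricks(status_text):
    """Return brick names whose reported Port is 0 in
    'gluster volume status <VOLNAME> detail' style output."""
    affected = []
    current_brick = None
    for line in status_text.splitlines():
        # Brick lines look like: "Brick : Brick host:/path"
        m = re.match(r"Brick\s*:\s*Brick\s+(\S+)", line)
        if m:
            current_brick = m.group(1)
            continue
        # Port lines look like: "Port : 49153"
        m = re.match(r"Port\s*:\s*(\d+)", line)
        if m and current_brick and int(m.group(1)) == 0:
            affected.append(current_brick)
    return affected

# Trimmed sample mirroring the output in this report:
sample = """\
Brick                : Brick hq-dfs-001:/vol/content1
Port                 : 0
Brick                : Brick hq-dfs-002:/vol/content1
Port                 : 49153
"""
print(find_zero_port_bricks(sample))  # -> ['hq-dfs-001:/vol/content1']
```

This only inspects the CLI text; the underlying bug (glusterd reporting port 0 for a brick that is online) would still need the attached glusterd and brick logs to diagnose.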

Version-Release number of selected component (if applicable):

3.4.0

How reproducible:

It appears within a few hours after the volume is started.

Steps to Reproduce:
1.
2.
3.

Actual results:

See above

Expected results:

The earlier 3.x behavior, which always showed the port.

Additional info:
Comment 1 Bjoern Teipel 2013-08-31 18:52:34 EDT
Created attachment 792485 [details]
glusterd config
Comment 2 Bjoern Teipel 2013-08-31 18:58:27 EDT
Created attachment 792486 [details]
brick-log
Comment 3 Bjoern Teipel 2013-08-31 19:09:50 EDT
Created attachment 792487 [details]
logs
Comment 4 Niels de Vos 2015-05-17 18:00:55 EDT
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained, at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs@gluster.org".

If there is no response by the end of the month, this bug will get automatically closed.
Comment 5 Kaleb KEITHLEY 2015-10-07 08:20:16 EDT
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release please reopen this and change the version or open a new bug.
