Bug 1003216 - CLI glitch: loses info in volume status
Summary: CLI glitch: loses info in volume status
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: cli
Version: 3.4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-08-31 22:38 UTC by Bjoern Teipel
Modified: 2015-10-07 12:20 UTC (History)
2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-10-07 12:20:16 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments
glusterd config (44.75 KB, application/x-zip-compressed), 2013-08-31 22:52 UTC, Bjoern Teipel
brick-log (15.32 MB, application/x-zip-compressed), 2013-08-31 22:58 UTC, Bjoern Teipel
logs (10.20 MB, application/x-zip-compressed), 2013-08-31 23:09 UTC, Bjoern Teipel

Description Bjoern Teipel 2013-08-31 22:38:04 UTC
Description of problem:

The CLI is losing information about attached remote bricks: the port is suddenly reported as 0.
It also lost the information that rebalancing is running, and shows a status for only a few of the nodes.

Status of volume: content1
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-001:/vol/content1
Port                 : 0 <<<<<<< Issue 
Online               : Y
Pid                  : 1810
File System          : xfs
Device               : /dev/mapper/vg_hqdfs001-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 202.9GB
Total Disk Space     : 959.6GB
Inode Count          : 886386816
Free Inodes          : 851724430
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-002:/vol/content1
Port                 : 49153
Online               : Y
Pid                  : 1840
File System          : xfs
Device               : /dev/mapper/vg_hqdfs002-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 204.7GB
Total Disk Space     : 959.6GB
Inode Count          : 893892352
Free Inodes          : 859568425
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-003:/vol/content1
Port                 : 49153
Online               : Y
Pid                  : 2070
File System          : xfs
Device               : /dev/mapper/vg_hqdfs003-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 179.3GB
Total Disk Space     : 959.6GB
Inode Count          : 786525456
Free Inodes          : 752185359
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-004:/vol/content1
Port                 : 49153
Online               : Y
Pid                  : 4872
File System          : xfs
Device               : /dev/mapper/vg_hqdfs004-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 264.0GB
Total Disk Space     : 959.6GB
Inode Count          : 1006632960
Free Inodes          : 973513263
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-005:/vol/content1
Port                 : 0 <<<<<<< Issue
Online               : Y
Pid                  : 1890
File System          : xfs
Device               : /dev/mapper/vg_hqdfs005-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 505.0GB
Total Disk Space     : 959.6GB
Inode Count          : 1006632960
Free Inodes          : 985217518
------------------------------------------------------------------------------
Brick                : Brick hq-dfs-006:/vol/content1
Port                 : 0 <<<<<<< Issue
Online               : Y
Pid                  : 1897
File System          : xfs
Device               : /dev/mapper/vg_hqdfs006-content1
Mount Options        : rw,noatime,nodiratime,nobarrier,logbufs=8
Inode Size           : 256
Disk Space Free      : 593.4GB
Total Disk Space     : 959.6GB
Inode Count          : 1006632960
Free Inodes          : 985238778
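
For the bricks that report Port 0, one way to cross-check is to compare what the CLI prints with the socket the brick process is actually bound to. A minimal sketch, run on the affected host, using the hostname and PID of the first brick above (hq-dfs-001, PID 1810) as an example:

# Port the brick process is actually listening on (PID taken from the status output above)
ss -tlnp | grep 1810
# CLI views of the same brick, for comparison with the detailed output
gluster volume status content1
gluster volume status content1 detail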


Version-Release number of selected component (if applicable):

3.4.0

How reproducible:

It appears within a few hours after the volume is started.
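
Since the symptom only shows up some hours after the volume is started, a simple poll loop can record the moment the reported port flips to 0. A minimal sketch, assuming the volume name content1 from above; the log path is just an example:

# Poll the detailed status every 5 minutes and keep the Brick/Port lines, with a timestamp
while true; do
    date
    gluster volume status content1 detail | grep -E '^(Brick|Port)'
    sleep 300
done >> /tmp/content1-port-watch.log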

Steps to Reproduce:
1.
2.
3.

Actual results:

See above

Expected results:

The earlier 3.x behavior, which always showed the port.

Additional info:

Comment 1 Bjoern Teipel 2013-08-31 22:52:34 UTC
Created attachment 792485 [details]
glusterd config

Comment 2 Bjoern Teipel 2013-08-31 22:58:27 UTC
Created attachment 792486 [details]
brick-log

Comment 3 Bjoern Teipel 2013-08-31 23:09:50 UTC
Created attachment 792487 [details]
logs

Comment 4 Niels de Vos 2015-05-17 22:00:55 UTC
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release and will no longer be fixed in a 3.4 version. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs".

If there is no response by the end of the month, this bug will get automatically closed.

Comment 5 Kaleb KEITHLEY 2015-10-07 12:20:16 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release, please reopen it and change the version, or open a new bug.

