Bug 1415131

Summary:        Improve output of "gluster volume status detail"
Product:        [Community] GlusterFS
Component:      glusterd
Version:        3.9
Hardware:       Unspecified
OS:             Unspecified
Status:         CLOSED EOL
Severity:       unspecified
Priority:       unspecified
Reporter:       Xavi Hernandez <jahernan>
Assignee:       Xavi Hernandez <jahernan>
CC:             amukherj, bugs, kjohnson
Clone Of:       1411334
Bug Depends On: 1411334, 1416416
Last Closed:    2017-03-08 12:31:51 UTC
Type:           Bug

Description Xavi Hernandez 2017-01-20 11:17:17 UTC
+++ This bug was initially created as a clone of Bug #1411334 +++

Description of problem:

Currently "gluster volume status detail" only gives all available information for linux hosts. Additionally, if the command is executed from a FreeBSD node, some brick information is hidden, even if it comes from a linux brick.


Version-Release number of selected component (if applicable): mainline


How reproducible:

Always

Steps to Reproduce:
1. Create a distributed volume with one brick on CentOS and another one on FreeBSD
2. Run gluster volume status <volname> detail on CentOS
3. Run gluster volume status <volname> detail on FreeBSD

Actual results:

On CentOS, some information for the brick hosted on FreeBSD appears as "N/A". On FreeBSD, some information is missing altogether, even for the CentOS brick.

Expected results:

Both commands should return the same output, and every field that can be retrieved should be filled in instead of showing "N/A".

Additional info:

Comment 1 Worker Ant 2017-01-20 11:20:25 UTC
REVIEW: http://review.gluster.org/16441 (cli: keep 'gluster volume status detail' consistent) posted (#1) for review on release-3.9 by Xavier Hernandez (xhernandez)

Comment 2 Worker Ant 2017-01-25 04:50:44 UTC
COMMIT: https://review.gluster.org/16441 committed in release-3.9 by Atin Mukherjee (amukherj) 
------
commit c2ff6cfb35edbc3ec42e43272f816bd22874e332
Author: Xavier Hernandez <xhernandez>
Date:   Tue Jan 10 11:21:06 2017 +0100

    cli: keep 'gluster volume status detail' consistent
    
    The output of the command 'gluster volume status <volname> detail' is
    not consistent between operating systems. On Linux hosts it shows the
    file system type, the device name, mount options and inode size of each
    brick. However, the same command executed on a FreeBSD host doesn't show
    all this information, even for bricks stored on a Linux host.
    
    Additionally, for hosts other than Linux, this information is shown as
    'N/A' many times. This has been fixed to show as much information as
    can be retrieved from the operating system.
    
    The file contrib/mount/mntent.c has been mostly rewritten because it
    contained many errors that prevented mount information from being
    retrieved on some operating systems.
    
    > Change-Id: Icb6e19e8af6ec82255e7792ad71914ef679fc316
    > BUG: 1411334
    > Signed-off-by: Xavier Hernandez <xhernandez>
    > Reviewed-on: http://review.gluster.org/16371
    > Smoke: Gluster Build System <jenkins.org>
    > NetBSD-regression: NetBSD Build System <jenkins.org>
    > CentOS-regression: Gluster Build System <jenkins.org>
    > Reviewed-by: Atin Mukherjee <amukherj>
    > Reviewed-by: Kaleb KEITHLEY <kkeithle>
    
    Change-Id: I1976e4765df9204aea6ca923a6d2b2bed06ed3b9
    BUG: 1415131
    Signed-off-by: Xavier Hernandez <xhernandez>
    Reviewed-on: https://review.gluster.org/16441
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Kaleb KEITHLEY <kkeithle>
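
As background on the contrib/mount/mntent.c rewrite, the portability gap it has to bridge can be illustrated with a minimal, self-contained sketch (an illustration only, not the actual GlusterFS code; print_mounts() is a hypothetical helper): Linux exposes mount entries through getmntent(3) reading /proc/mounts, while FreeBSD and macOS return an array of struct statfs from getmntinfo(3). Any field that exists in one interface but not the other has to be derived explicitly, otherwise it surfaces as "N/A" in 'gluster volume status detail'.

/* Sketch only: shows the two mount-entry APIs that a portable
 * implementation has to reconcile. Build with: cc -o mounts mounts.c */
#include <stdio.h>

#if defined(__linux__)
#include <mntent.h>

static void print_mounts(void)
{
        FILE *fp = setmntent("/proc/mounts", "r");
        struct mntent *m;

        if (!fp)
                return;
        /* getmntent(3) yields device, mount point, fs type and options. */
        while ((m = getmntent(fp)) != NULL)
                printf("%s on %s type %s (%s)\n",
                       m->mnt_fsname, m->mnt_dir, m->mnt_type, m->mnt_opts);
        endmntent(fp);
}

#else /* FreeBSD / macOS; other BSDs differ slightly (NetBSD uses statvfs) */
#include <sys/param.h>
#include <sys/ucred.h>
#include <sys/mount.h>

static void print_mounts(void)
{
        struct statfs *m;
        int i, n;

        /* getmntinfo(3) returns a statically allocated array; do not free it.
         * There is no separate mount-options string here (only f_flags), which
         * is one reason naive code ends up printing "N/A" for such fields. */
        n = getmntinfo(&m, MNT_NOWAIT);
        for (i = 0; i < n; i++)
                printf("%s on %s type %s\n",
                       m[i].f_mntfromname, m[i].f_mntonname, m[i].f_fstypename);
}
#endif

int main(void)
{
        print_mounts();
        return 0;
}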

Comment 3 Kaushal 2017-03-08 12:31:51 UTC
This bug is being closed because GlusterFS-3.9 has reached its end-of-life [1].

Note: This bug is being closed using a script. No verification has been performed to check if it still exists on newer releases of GlusterFS.
If this bug still exists in newer GlusterFS releases, please open a new bug against the newer release.

[1]: https://www.gluster.org/community/release-schedule/