Bug 1351732 - "gluster volume status <volume> client" isn't showing any information when one of the nodes in a 3-way Distributed-Replicate volume is shut down
Status: CLOSED ERRATA
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Version: 3.1
Hardware: Unspecified
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: RHGS 3.2.0
Assigned To: Atin Mukherjee
QA Contact: Byreddy
Docs Contact:
Depends On:
Blocks: 1351515 1351530 1351880 1352926
 
Reported: 2016-06-30 12:55 EDT by Cal Calhoun
Modified: 2017-10-02 08:05 EDT
CC List: 6 users

See Also:
Fixed In Version: glusterfs-3.8.4-1
Doc Type: Bug Fix
Doc Text:
Previously, when a server node was unavailable, the client details of the bricks on that node were not displayed when the 'gluster volume status VOLNAME clients' command was run. This has been corrected and client details are now displayed as expected.
Story Points: ---
Clone Of:
Cloned To: 1351880
Environment:
Last Closed: 2017-03-23 01:38:23 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None

External Trackers:
  Tracker ID: Red Hat Product Errata RHSA-2017:0486
  Priority: normal
  Status: SHIPPED_LIVE
  Summary: Moderate: Red Hat Gluster Storage 3.2.0 security, bug fix, and enhancement update
  Last Updated: 2017-03-23 05:18:45 EDT
Description Cal Calhoun 2016-06-30 12:55:20 EDT
Description of problem:

3-way Distributed-Replicate volume...

[root@node1 ~]# gluster volume status vol1
Status of volume: vol1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
node1:brick1                                49165     0          Y       3665
node2:brick2                                49164     0          Y       17094
node3:brick3                                49154     0          Y       19888
node1:brick4                                49172     0          Y       3670
node2:brick5                                49171     0          Y       17099
node3:brick6                                49155     0          Y       19907
NFS Server on localhost                     2049      0          Y       3679
Self-heal Daemon on localhost               N/A       N/A        Y       3689
NFS Server on node3                         2049      0          Y       19927
Self-heal Daemon on node3                   N/A       N/A        Y       19935
NFS Server on node2                         2049      0          Y       20374
Self-heal Daemon on node2                   N/A       N/A        Y       20382

Task Status of Volume vol1
------------------------------------------------------------------------------
There are no active volume tasks

[root@node1 ~]#

If the first node in the cluster (node1, the one the volume was created and started from) is shut down, and I then try to display information about the clients connected to the volume from the two remaining nodes, I don't get any information back.

[root@node2 glusterfs]# gluster volume status vol1 client
Client connections for volume vol1
----------------------------------------------
----------------------------------------------

[root@node3 glusterfs]# gluster volume status vol1 clients
Client connections for volume vol1
----------------------------------------------
----------------------------------------------

As soon as the node (node1) is powered back up and everything is up and running again, the clients are visible.

==========
Version-Release number of selected component (if applicable):

All nodes:

gluster-nagios-addons-0.2.5-1.el7rhgs.x86_64
gluster-nagios-common-0.2.3-1.el7rhgs.noarch
glusterfs-3.7.5-19.el7rhgs.x86_64
glusterfs-api-3.7.5-19.el7rhgs.x86_64
glusterfs-cli-3.7.5-19.el7rhgs.x86_64
glusterfs-client-xlators-3.7.5-19.el7rhgs.x86_64
glusterfs-fuse-3.7.5-19.el7rhgs.x86_64
glusterfs-geo-replication-3.7.5-19.el7rhgs.x86_64
glusterfs-libs-3.7.5-19.el7rhgs.x86_64
glusterfs-rdma-3.7.5-19.el7rhgs.x86_64
glusterfs-server-3.7.5-19.el7rhgs.x86_64
python-gluster-3.7.5-19.el7rhgs.noarch
samba-vfs-glusterfs-4.2.4-13.el7rhgs.x86_64
vdsm-gluster-4.16.30-1.3.el7rhgs.noarch

==========
How reproducible:

Completely

==========
Steps to Reproduce:

See description: create and start a 3-way Distributed-Replicate volume, shut down the first node (the one the volume was created and started from), then run 'gluster volume status <volname> clients' on one of the remaining nodes.
Comment 2 Atin Mukherjee 2016-07-01 01:43:09 EDT
RCA:

On the CLI side, the response dictionary is parsed assuming all bricks are up. Since in this case one of the nodes was brought down, client details for the bricks hosted by that node were not available in the dictionary, resulting in blank output for 'gluster volume status <volname> clients'.
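
For illustration, here is a minimal, self-contained C sketch of the pattern described in this RCA. The struct and function names are invented for the example and are not the actual glusterfs CLI code: a parser that stops at the first brick whose client details are absent from the response dictionary prints nothing, while one that skips the missing brick and continues prints the details that are available.

/* Illustrative sketch only -- not the glusterfs source. Models how the
 * CLI-side parsing of per-brick client details can yield blank output
 * when one node's bricks have no entries in the response dictionary.
 */
#include <stdio.h>

struct brick_clients {
    const char *brick;        /* "host:/brick" identifier                 */
    int         present;      /* 0 if the hosting node is down, so no     */
                              /* client details exist for this brick      */
    int         client_count; /* number of connected clients              */
};

/* Old behaviour: stop parsing entirely when one brick's details are missing. */
static void print_clients_buggy(const struct brick_clients *b, int n)
{
    for (int i = 0; i < n; i++) {
        if (!b[i].present)
            return;                      /* abort, output stays blank */
        printf("%s : %d clients\n", b[i].brick, b[i].client_count);
    }
}

/* Fixed behaviour: skip bricks whose details are absent and keep going. */
static void print_clients_fixed(const struct brick_clients *b, int n)
{
    for (int i = 0; i < n; i++) {
        if (!b[i].present)
            continue;                    /* move on to the next brick */
        printf("%s : %d clients\n", b[i].brick, b[i].client_count);
    }
}

int main(void)
{
    /* node1 is down, so its brick has no client details in the response. */
    struct brick_clients bricks[] = {
        { "node1:brick1", 0, 0 },
        { "node2:brick2", 1, 2 },
        { "node3:brick3", 1, 2 },
    };
    int n = (int)(sizeof(bricks) / sizeof(bricks[0]));

    printf("buggy parser:\n");
    print_clients_buggy(bricks, n);      /* prints nothing */

    printf("fixed parser:\n");
    print_clients_fixed(bricks, n);      /* prints node2 and node3 */
    return 0;
}

Compiling and running this prints nothing under "buggy parser" and the node2/node3 entries under "fixed parser", which mirrors the blank output shown in the description.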
Comment 3 Atin Mukherjee 2016-07-01 02:23:18 EDT
Upstream patch http://review.gluster.org/#/c/14842 posted for review
Comment 10 Byreddy 2016-10-05 01:29:34 EDT
Verified this bug using the build glusterfs-3.8.4-2.

I am able to see output from the "gluster volume status clients" command on the active cluster nodes when one of the cluster nodes is down.


Moving to verified state.
Comment 13 Atin Mukherjee 2017-03-06 00:16:19 EST
The reason is explained in the patch http://review.gluster.org/#/c/14842

"In cli the response dictionary is parsed assuming all the bricks to be up. If in a given cluster one of the node is down client details for the bricks hosted by the same node are not available in the dictionary resulting into a blank output for 'gluster volume status <volname> clients'"
Comment 18 errata-xmlrpc 2017-03-23 01:38:23 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2017-0486.html
