Bug 1566067 - Volume status inode is broken with brickmux
Summary: Volume status inode is broken with brickmux
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: mainline
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: hari gowtham
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1559452 1569336 1569346
 
Reported: 2018-04-11 13:01 UTC by hari gowtham
Modified: 2018-10-23 14:21 UTC
CC List: 8 users

Fixed In Version: glusterfs-3.12.15
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1559452
Clones: 1569336 1569346
Environment:
Last Closed: 2018-06-20 18:03:42 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Comment 1 hari gowtham 2018-04-11 13:03:08 UTC
Description of problem:

The 'gluster volume status <volname> inode' command fails on subsequently created volumes when brick multiplexing is enabled.


Version-Release number of selected component (if applicable):

3.8.4-54-2

How reproducible:

Every time

Steps to Reproduce:
1. Create a 3-node cluster (n1, n2, n3).
2. Create 2 replica-3 volumes (v1, v2).
3. Mount the 2 volumes on two different clients (c1, c2).
4. Start running I/O in parallel on the two mount points.
5. While the I/O is running, execute 'gluster volume status v1 inode' and
'gluster volume status v1 fd' frequently, with some time gap between runs.
6. In the same way, run the volume status inode command for v2.
7. Create a new volume v3 (distributed-replicated).
8. Run "gluster volume status v3 inode" and "gluster volume status v3 fd" on node n1.
9. The 'gluster volume status inode' and 'gluster volume status fd' commands fail for the newly created volume.
10. The bricks of volume v3 on node n1 go offline. (A scripted sketch of these steps follows.)
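
A scripted sketch of the steps above. It assumes brick multiplexing has already been enabled cluster-wide ('gluster volume set all cluster.brick-multiplex on'); node names (n1, n2, n3), brick paths, and the client-side I/O are placeholders that must be adapted to the actual environment:

# Step 2: two replica-3 volumes
gluster volume create v1 replica 3 n1:/bricks/b1/v1 n2:/bricks/b1/v1 n3:/bricks/b1/v1 force
gluster volume create v2 replica 3 n1:/bricks/b2/v2 n2:/bricks/b2/v2 n3:/bricks/b2/v2 force
gluster volume start v1
gluster volume start v2

# Steps 3-4: mount v1 on client c1 and v2 on client c2, then start I/O
# on both mount points (not shown here)

# Steps 5-6: query inode/fd status repeatedly while the I/O runs
for i in $(seq 1 10); do
    gluster volume status v1 inode
    gluster volume status v1 fd
    gluster volume status v2 inode
    gluster volume status v2 fd
    sleep 5
done

# Step 7: a new distributed-replicated volume (2x3)
gluster volume create v3 replica 3 \
    n1:/bricks/b3/v3 n2:/bricks/b3/v3 n3:/bricks/b3/v3 \
    n1:/bricks/b4/v3 n2:/bricks/b4/v3 n3:/bricks/b4/v3 force
gluster volume start v3

# Steps 8-10: on node n1, these commands time out and the n1 bricks of
# v3 are reported offline in 'gluster volume status v3'
gluster volume status v3 inode
gluster volume status v3 fd
gluster volume status v3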

Actual results:

[root@dhcp37-113 home]# gluster vol status rp1 fd
Error : Request timed out
[root@dhcp37-113 home]# gluster vol status drp1 inode
Error : Request timed out

gluster vol status drp1
Status of volume: drp1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.113:/bricks/brick1/drp1      N/A       N/A        N       N/A  
Brick 10.70.37.157:/bricks/brick1/drp1      49152     0          Y       2125 
Brick 10.70.37.174:/bricks/brick1/drp1      49152     0          Y       2306 
Brick 10.70.37.113:/bricks/brick2/drp1      N/A       N/A        N       N/A  
Brick 10.70.37.157:/bricks/brick2/drp1      49152     0          Y       2125 
Brick 10.70.37.174:/bricks/brick2/drp1      49152     0          Y       2306 
Self-heal Daemon on localhost               N/A       N/A        Y       4507 
Self-heal Daemon on 10.70.37.157            N/A       N/A        Y       4006 
Self-heal Daemon on 10.70.37.174            N/A       N/A        Y       4111 
 
Task Status of Volume drp1



Expected results:

Bricks should not go offline, and the 'gluster volume status inode' and 'gluster volume status fd' commands should execute successfully.

Comment 2 Worker Ant 2018-04-11 13:32:20 UTC
REVIEW: https://review.gluster.org/19846 (glusterd: volume inode/fd status broken with brick mux) posted (#1) for review on master by hari gowtham

Comment 3 Worker Ant 2018-04-19 02:55:20 UTC
COMMIT: https://review.gluster.org/19846 committed in master by "Atin Mukherjee" <amukherj> with a commit message- glusterd: volume inode/fd status broken with brick mux

Problem:
The values for inode/fd were populated from the ctx received
from the server xlator.
Without brick mux, every brick process served a single brick
from a single volume, so searching for the server xlator and
populating the values from it worked.

With brick mux, a number of bricks can be confined to a single
process. These bricks can be from different volumes too (if
we use the max-bricks-per-process option).
If they are from different volumes, using the server xlator
to populate the values causes problems.

Fix:
Use the brick to validate and populate the inode/fd status.

Signed-off-by: hari gowtham <hgowtham>

Change-Id: I2543fa5397ea095f8338b518460037bba3dfdbfd
fixes: bz#1566067
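
For context, the situation the patch addresses only arises when brick multiplexing packs bricks from different volumes into one glusterfsd process. A minimal sketch of that configuration, using the cluster-wide options referenced in the commit message (the specific value for max-bricks-per-process below is only illustrative):

# Enable brick multiplexing cluster-wide (set on the special 'all' scope)
gluster volume set all cluster.brick-multiplex on

# Optionally cap the number of bricks per process; any value other than 1
# still lets bricks from different volumes share a process
gluster volume set all cluster.max-bricks-per-process 3

# With multiplexing on, bricks from different volumes report the same PID
# in 'gluster volume status', and only a few glusterfsd processes exist
gluster volume status v1 | grep ^Brick
gluster volume status v3 | grep ^Brick
pgrep -a glusterfsd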

Comment 4 Worker Ant 2018-04-19 06:22:35 UTC
REVIEW: https://review.gluster.org/19903 (glusterd: volume inode/fd status broken with brick mux) posted (#1) for review on release-3.12 by hari gowtham

Comment 5 Worker Ant 2018-04-19 06:37:55 UTC
REVIEW: https://review.gluster.org/19904 (glusterd: volume inode/fd status broken with brick mux) posted (#1) for review on release-4.0 by hari gowtham

Comment 6 Shyamsundar 2018-06-20 18:03:42 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report.

glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html
[2] https://www.gluster.org/pipermail/gluster-users/

Comment 7 Shyamsundar 2018-10-23 14:21:35 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.15, please open a new bug report.

glusterfs-3.12.15 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000114.html
[2] https://www.gluster.org/pipermail/gluster-users/

