Bug 1569346 - Volume status inode is broken with brickmux
Summary: Volume status inode is broken with brickmux
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Assignee: hari gowtham
QA Contact:
Depends On: 1566067
Blocks: 1559452 1569336
Reported: 2018-04-19 06:23 UTC by hari gowtham
Modified: 2018-06-20 18:30 UTC (History)
8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1566067
Last Closed: 2018-06-20 18:30:10 UTC
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:


Comment 1 hari gowtham 2018-04-19 06:24:58 UTC
Description of problem:

The 'gluster volume status <vol> inode' command fails on volumes created after other volumes are already running, when brick multiplexing is enabled.

Version-Release number of selected component (if applicable):


How reproducible:

Every time

Steps to Reproduce:
1. Create a 3-node cluster (n1, n2, n3).
2. Create two replica-3 volumes (v1, v2).
3. Mount the two volumes on two different clients (c1, c2).
4. Start running I/O in parallel on the two mount points.
5. While the I/O is running, execute 'gluster volume status v1 inode' and 'gluster volume status v1 fd' frequently, with some time gap between runs.
6. In the same way, run the volume status inode and fd commands for v2.
7. Create a new distributed-replicated volume, v3.
8. Run "gluster volume status v3 inode" and "gluster volume status v3 fd" on node n1.
9. The 'gluster volume status inode' and 'gluster volume status fd' commands fail for the newly created volume.
10. The bricks of volume v3 on node n1 go offline.
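The steps above can be sketched as a script. This is a dry-run sketch only: it echoes the gluster commands instead of executing them, and the hostnames (n1, n2, n3), brick paths under /bricks, and volume names are placeholders, not taken from the reporter's actual setup. The brick-multiplexing step is inferred from the bug title ("with brickmux").

```shell
#!/bin/sh
# Dry-run sketch of the reproduction steps. Drop the 'echo' in run()
# to execute the commands for real on an actual 3-node cluster.
run() { echo "+ $*"; }

# Bug title says "with brickmux": enable brick multiplexing first
# (assumed step, not listed explicitly in the report).
run gluster volume set all cluster.brick-multiplex on

# Steps 1-2: create two replica-3 volumes across n1/n2/n3.
run gluster volume create v1 replica 3 n1:/bricks/v1 n2:/bricks/v1 n3:/bricks/v1
run gluster volume start v1
run gluster volume create v2 replica 3 n1:/bricks/v2 n2:/bricks/v2 n3:/bricks/v2
run gluster volume start v2

# Steps 3-4: mount each volume on a separate client and run I/O there
# (not shown here; done from the client machines).

# Steps 5-6: while I/O runs, poll inode/fd status on both volumes.
for vol in v1 v2; do
    run gluster volume status "$vol" inode
    run gluster volume status "$vol" fd
done

# Step 7: create a distributed-replicated (2x3) volume while the
# existing volumes stay busy.
run gluster volume create v3 replica 3 \
    n1:/bricks/v3a n2:/bricks/v3a n3:/bricks/v3a \
    n1:/bricks/v3b n2:/bricks/v3b n3:/bricks/v3b
run gluster volume start v3

# Steps 8-10: on n1, these time out for the new volume with brickmux
# on, and v3's bricks on n1 go offline.
run gluster volume status v3 inode
run gluster volume status v3 fd
```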

Actual results:

[root@dhcp37-113 home]# gluster vol status rp1 fd
Error : Request timed out
[root@dhcp37-113 home]# gluster vol status drp1 inode
Error : Request timed out

gluster vol status drp1
Status of volume: drp1
Gluster process                             TCP Port  RDMA Port  Online  Pid
Brick      N/A       N/A        N       N/A  
Brick      49152     0          Y       2125 
Brick      49152     0          Y       2306 
Brick      N/A       N/A        N       N/A  
Brick      49152     0          Y       2125 
Brick      49152     0          Y       2306 
Self-heal Daemon on localhost               N/A       N/A        Y       4507 
Self-heal Daemon on            N/A       N/A        Y       4006 
Self-heal Daemon on            N/A       N/A        Y       4111 
Task Status of Volume drp1

Expected results:

Bricks should not go offline, and the 'gluster volume status inode' and 'gluster volume status fd' commands should execute successfully.

Comment 2 Shyamsundar 2018-06-20 18:30:10 UTC
This bug is reported against a version of Gluster that is no longer maintained
(or has been EOL'd). See https://www.gluster.org/release-schedule/ for the
versions currently maintained.

As a result this bug is being closed.

If the bug persists on a maintained version of Gluster or against the mainline
Gluster repository, request that it be reopened and the Version field be marked
appropriately.