Description of problem:
'gluster volume status <vol> inode' and 'gluster volume status <vol> fd' fail on volumes created after these commands have been run against existing volumes.

Version-Release number of selected component (if applicable):
3.8.4-54-2

How reproducible:
Every time

Steps to Reproduce:
1. Create a 3-node cluster (n1, n2, n3).
2. Create two replica-3 volumes (v1, v2).
3. Mount the two volumes on two different clients (c1, c2).
4. Start running I/O in parallel on the two mount points.
5. While the I/O is running, execute 'gluster volume status v1 inode' and 'gluster volume status v1 fd' frequently, with some time gap between runs.
6. In the same way, run the volume status inode command for v2.
7. Create a new distributed-replicated volume v3.
8. Run "gluster volume status v3 inode" and "gluster volume status v3 fd" on node n1.
9. The 'gluster volume status inode' and 'gluster volume status fd' commands fail for the newly created volume.
10. The n1 bricks of volume v3 go offline.
(A consolidated shell sketch of these steps follows the expected results below.)

Actual results:
[root@dhcp37-113 home]# gluster vol status rp1 fd
Error : Request timed out
[root@dhcp37-113 home]# gluster vol status drp1 inode
Error : Request timed out
[root@dhcp37-113 home]# gluster vol status drp1
Status of volume: drp1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.113:/bricks/brick1/drp1      N/A       N/A        N       N/A
Brick 10.70.37.157:/bricks/brick1/drp1      49152     0          Y       2125
Brick 10.70.37.174:/bricks/brick1/drp1      49152     0          Y       2306
Brick 10.70.37.113:/bricks/brick2/drp1      N/A       N/A        N       N/A
Brick 10.70.37.157:/bricks/brick2/drp1      49152     0          Y       2125
Brick 10.70.37.174:/bricks/brick2/drp1      49152     0          Y       2306
Self-heal Daemon on localhost               N/A       N/A        Y       4507
Self-heal Daemon on 10.70.37.157            N/A       N/A        Y       4006
Self-heal Daemon on 10.70.37.174            N/A       N/A        Y       4111

Task Status of Volume drp1

Expected results:
Bricks should not go offline, and the 'gluster volume status inode' and 'gluster volume status fd' commands should execute successfully.
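To make the steps above easier to replay, here is a minimal shell sketch. It is an illustration, not the reporter's exact setup: the brick paths under /bricks, the mount point, the dd workload, and the polling loop are assumptions; only the node and volume names come from the steps.

    #!/bin/bash
    # Run on n1. Steps 1-2: form the cluster and create two replica-3 volumes.
    gluster peer probe n2
    gluster peer probe n3
    gluster volume create v1 replica 3 n1:/bricks/brick1/v1 n2:/bricks/brick1/v1 n3:/bricks/brick1/v1
    gluster volume start v1
    gluster volume create v2 replica 3 n1:/bricks/brick2/v2 n2:/bricks/brick2/v2 n3:/bricks/brick2/v2
    gluster volume start v2

    # Steps 3-4 happen on clients c1/c2, e.g.:
    #   mount -t glusterfs n1:/v1 /mnt/v1 && dd if=/dev/urandom of=/mnt/v1/f bs=1M count=1024

    # Steps 5-6: poll the status commands while the I/O runs.
    for i in $(seq 1 10); do
        gluster volume status v1 inode
        gluster volume status v1 fd
        gluster volume status v2 inode
        sleep 5
    done

    # Step 7: create a distributed-replicated (2 x 3) volume v3.
    gluster volume create v3 replica 3 \
        n1:/bricks/brick3/v3 n2:/bricks/brick3/v3 n3:/bricks/brick3/v3 \
        n1:/bricks/brick4/v3 n2:/bricks/brick4/v3 n3:/bricks/brick4/v3
    gluster volume start v3

    # Step 8: with the bug present, these time out (steps 9-10: the n1
    # bricks of v3 then show as offline in 'gluster volume status v3').
    gluster volume status v3 inode
    gluster volume status v3 fd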
REVIEW: https://review.gluster.org/19846 (glusterd: volume inode/fd status broken with brick mux) posted (#1) for review on master by hari gowtham
COMMIT: https://review.gluster.org/19846 committed in master by "Atin Mukherjee" <amukherj> with a commit message- glusterd: volume inode/fd status broken with brick mux

Problem: The values for inode/fd were populated from the ctx received from the server xlator. Without brick mux, every brick of a volume ran in its own process, so searching for the server xlator and populating from it worked. With brick mux, a number of bricks can be confined to a single process, and these bricks can be from different volumes too (if the max-bricks-per-process option is used). If they are from different volumes, using the server xlator to populate the status causes the problem.

Fix: Use the brick to validate and populate the inode/fd status.

Signed-off-by: hari gowtham <hgowtham>
Change-Id: I2543fa5397ea095f8338b518460037bba3dfdbfd
fixes: bz#1566067
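For context on the configuration the commit message describes: the broken code path is only reached when brick multiplexing is enabled, so that bricks from different volumes share one brick process. A minimal sketch of setting that up, assuming the cluster-wide options cluster.brick-multiplex and cluster.max-bricks-per-process (the volume name v3 is a placeholder):

    # Pack bricks, possibly from different volumes, into shared processes.
    gluster volume set all cluster.brick-multiplex on
    gluster volume set all cluster.max-bricks-per-process 3

    # Bricks across volumes should now report the same Pid here; querying
    # inode/fd status in this state exercises the code path the fix changes.
    gluster volume status
    gluster volume status v3 inode
    gluster volume status v3 fd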
REVIEW: https://review.gluster.org/19903 (glusterd: volume inode/fd status broken with brick mux) posted (#1) for review on release-3.12 by hari gowtham
REVIEW: https://review.gluster.org/19904 (glusterd: volume inode/fd status broken with brick mux) posted (#1) for review on release-4.0 by hari gowtham
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-v4.1.0, please open a new bug report. glusterfs-v4.1.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] http://lists.gluster.org/pipermail/announce/2018-June/000102.html [2] https://www.gluster.org/pipermail/gluster-users/
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.12.15, please open a new bug report. glusterfs-3.12.15 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution. [1] https://lists.gluster.org/pipermail/announce/2018-October/000114.html [2] https://www.gluster.org/pipermail/gluster-users/