Description of problem:
'gluster volume status <vol> inode' and 'gluster volume status <vol> fd' fail on a newly created volume after those commands have been run repeatedly against existing volumes, and the new volume's bricks on the node that issued the commands go offline.

Version-Release number of selected component (if applicable):
3.8.4-54-2

How reproducible:
Every time

Steps to Reproduce:
1. Create a 3-node cluster (n1, n2, n3).
2. Create two replica-3 volumes (v1, v2).
3. Mount the two volumes on two different clients (c1, c2).
4. Start running I/O in parallel on the two mount points.
5. While the I/O is running, execute 'gluster volume status v1 inode' and 'gluster volume status v1 fd' repeatedly, with some time gap between runs.
6. In the same way, run the volume status inode command for v2.
7. Create a new distributed-replicate volume v3.
8. Run "gluster volume status v3 inode" and "gluster volume status v3 fd" on node n1.
9. The 'gluster volume status inode' and 'gluster volume status fd' commands fail for the newly created volume.
10. The bricks of volume v3 on node n1 go offline.
(A consolidated command sketch of steps 1-8 follows the expected results below.)

Actual results:
[root@dhcp37-113 home]# gluster vol status rp1 fd
Error : Request timed out

[root@dhcp37-113 home]# gluster vol status drp1 inode
Error : Request timed out

[root@dhcp37-113 home]# gluster vol status drp1
Status of volume: drp1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.113:/bricks/brick1/drp1      N/A       N/A        N       N/A
Brick 10.70.37.157:/bricks/brick1/drp1      49152     0          Y       2125
Brick 10.70.37.174:/bricks/brick1/drp1      49152     0          Y       2306
Brick 10.70.37.113:/bricks/brick2/drp1      N/A       N/A        N       N/A
Brick 10.70.37.157:/bricks/brick2/drp1      49152     0          Y       2125
Brick 10.70.37.174:/bricks/brick2/drp1      49152     0          Y       2306
Self-heal Daemon on localhost               N/A       N/A        Y       4507
Self-heal Daemon on 10.70.37.157            N/A       N/A        Y       4006
Self-heal Daemon on 10.70.37.174            N/A       N/A        Y       4111

Task Status of Volume drp1
------------------------------------------------------------------------------

Expected results:
Bricks should not go offline, and the 'gluster volume status <vol> inode' and 'gluster volume status <vol> fd' commands should execute successfully.
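For convenience, here is a minimal command sketch of steps 1-8 above, assuming the three peers are already probed and the brick directories already exist. The hostnames, brick paths, mount points, and the dd-based I/O load are illustrative placeholders rather than the exact commands from the affected setup.

# On n1: create and start the two replica-3 volumes (steps 1-2)
gluster volume create v1 replica 3 n1:/bricks/brick1/v1 n2:/bricks/brick1/v1 n3:/bricks/brick1/v1
gluster volume start v1
gluster volume create v2 replica 3 n1:/bricks/brick2/v2 n2:/bricks/brick2/v2 n3:/bricks/brick2/v2
gluster volume start v2

# On client c1 (and likewise on c2 with v2): mount the volume and keep I/O running (steps 3-4)
mount -t glusterfs n1:/v1 /mnt/v1
while true; do dd if=/dev/urandom of=/mnt/v1/testfile bs=1M count=100; done &

# On n1: poll inode/fd status of v1 and v2 while the I/O runs (steps 5-6)
for i in $(seq 1 20); do
    gluster volume status v1 inode
    gluster volume status v1 fd
    gluster volume status v2 inode
    sleep 10
done

# Create a 2x3 distributed-replicate volume and query it (steps 7-8);
# on the affected build, these status commands return 'Error : Request timed out'
gluster volume create v3 replica 3 \
    n1:/bricks/brick3/v3 n2:/bricks/brick3/v3 n3:/bricks/brick3/v3 \
    n1:/bricks/brick4/v3 n2:/bricks/brick4/v3 n3:/bricks/brick4/v3
gluster volume start v3
gluster volume status v3 inode
gluster volume status v3 fd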
This bug is reported against a version of Gluster that is no longer maintained (or has been EOL'd). See https://www.gluster.org/release-schedule/ for the versions currently maintained. As a result, this bug is being closed. If the bug persists on a maintained version of Gluster or against the mainline Gluster repository, please request that it be reopened and mark the Version field appropriately.