Description of problem:
After glusterd is restarted on the same node, the brick goes offline.

Version-Release number of selected component (if applicable):
3.8.4-50

How reproducible:
3/3

Steps to Reproduce:
1. Create a distribute volume with 3 bricks, one from each node, and start it.
2. Stop glusterd on the other two nodes and check the volume status on the node where glusterd is still running.
3. Restart glusterd on the node where it is running and check the volume status again.

Actual results:

Before glusterd restart:

Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.52:/bricks/brick0/testvol    49160     0          Y       17734

Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks

After glusterd restart:

Status of volume: testvol
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.70.37.52:/bricks/brick0/testvol    N/A       N/A        N       N/A

Task Status of Volume testvol
------------------------------------------------------------------------------
There are no active volume tasks

Expected results:
The brick must be online after glusterd restarts.

Additional info:
glusterd is stopped on the other two nodes.
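For reference, a minimal command sequence for the reproduction above (hostnames and brick paths are illustrative; assumes a 3-node trusted pool where node1 is the node that keeps glusterd running):

On node1:
  gluster volume create testvol node1:/bricks/brick0/testvol \
      node2:/bricks/brick0/testvol node3:/bricks/brick0/testvol
  gluster volume start testvol
On node2 and node3:
  systemctl stop glusterd
On node1:
  gluster volume status testvol    # local brick shows Online = Y
  systemctl restart glusterd
  gluster volume status testvol    # local brick now shows Online = N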
Upstream patch: https://review.gluster.org/18669
There's an issue with this patch: it causes a regression in the brick-multiplexing node-reboot scenario. One more patch, https://review.gluster.org/19134, is required to fix this completely.
Verified this bug for a distribute volume and a replica 3 volume on a 6-node cluster.

Verified scenario:
-> Created a distribute volume with one brick from each node in the 6-node cluster.
-> Stopped the glusterd service on 5 of the nodes.
-> Verified gluster volume status from the node where glusterd is still running.
-> Volume status is shown correctly, and the brick on the node where glusterd is running is online.
-> Restarted the glusterd service and verified gluster volume status from that node.
-> Gluster volume status is shown correctly, and the brick is online.

Followed the same steps for a replica 3 volume and verified it as well. Moving this bug to verified state.

Verified version: glusterfs-3.12.2-4
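For the replica 3 case, the volume would be created along these lines (hostnames and paths are illustrative; 6 bricks with replica 3 yields a 2 x 3 distributed-replicate layout):

  gluster volume create repvol replica 3 \
      node1:/bricks/brick0/repvol node2:/bricks/brick0/repvol node3:/bricks/brick0/repvol \
      node4:/bricks/brick0/repvol node5:/bricks/brick0/repvol node6:/bricks/brick0/repvol
  gluster volume start repvol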
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607