Description of problem:
-----------------------
In the Commvault HyperScale-like setup, there are 3 volumes: engine (replica 3), commserve_vol (replica 3), and backupvol (disperse). The brick multiplexing feature is enabled for the disperse volume and, as a result, is enabled on all the volumes. The RHHI-V specific replica 3 volumes have server-side quorum and client-side quorum enabled. When server quorum is not met, the bricks are killed, but when quorum is regained, more brick processes than expected are running on that host.

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
RHGS 3.5.0 (glusterfs-6.0-13)

How reproducible:
-----------------
Always

Steps to Reproduce:
-------------------
1. Create a replica 3 volume on a 3-node gluster cluster and start it
2. Enable brick multiplexing on that volume
3. Enable server quorum on that volume
4. Stop glusterd on node2 and node3
5. On node1, server quorum is lost and the bricks are killed
6. Restart glusterd on the other 2 nodes
7. Check the glusterfsd (brick) processes running on node1
(A command-level sketch of these steps is given below.)

Actual results:
---------------
There are many brick (glusterfsd) processes running on that host for the same brick

Expected results:
-----------------
There should be only one glusterfsd (brick) process running per brick
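A minimal command-level sketch of the reproduction steps, assuming the standard gluster CLI; the volume name (engine), host names (node1..node3), and brick paths are placeholders, not the exact values from this setup:

# 1. Create and start a replica 3 volume on the 3-node cluster
gluster volume create engine replica 3 \
    node1:/gluster_bricks/engine/engine \
    node2:/gluster_bricks/engine/engine \
    node3:/gluster_bricks/engine/engine
gluster volume start engine

# 2. Enable brick multiplexing (a cluster-wide option)
gluster volume set all cluster.brick-multiplex on

# 3. Enable server-side quorum on the volume
gluster volume set engine cluster.server-quorum-type server

# 4. Stop glusterd on node2 and node3 (run on each of those nodes)
systemctl stop glusterd

# 5./6. On node1 the bricks are killed once quorum is lost; then restart
#       glusterd on node2 and node3
systemctl start glusterd

# 7. On node1, list the brick processes that are running
pgrep -a glusterfsd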
Created attachment 1622520 [details] glusterd.log from node1
Created attachment 1622521 [details] glusterd.log from node2
Created attachment 1622522 [details] glusterd.log from node3
At one point in time, I observed more than 21 glusterfsd (brick) processes running for the same brick, each consuming a different port. This is again a resource leak that wastes resources on that machine, but **no** functional impact was observed.
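One hedged way to quantify the leak is to group the running glusterfsd processes by the --brick-name argument that glusterd passes when it spawns a brick; this is a sketch, not part of the original report:

# Count glusterfsd processes per brick path
ps -C glusterfsd -o args= | \
    awk '{for (i = 1; i < NF; i++) if ($i == "--brick-name") print $(i+1)}' | \
    sort | uniq -c | sort -rn
# Expected: a count of 1 per brick path; with this bug the count is > 1
# (over 21 observed here), each process bound to a different port.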
Created attachment 1626284 [details] Recording that describes the issue
Created attachment 1626285 [details] glusterd log file from node1
Created attachment 1626286 [details] glusterd log file from node2
Created attachment 1626287 [details] glusterd log file from node3
*** Bug 1609450 has been marked as a duplicate of this bug. ***