Description of problem:
=========================
As part of the clean-up phase of the "gluster peer detach <hostname> force" command, all volume information is removed from the detached host, i.e. the command removes all directories under "/var/lib/glusterd/vols/*". However, it fails to stop the brick processes running on the detached host.

If the peer is later re-attached with "gluster peer probe <hostname>", the volume information is synced back to the host, but the host cannot start its brick process because the old brick process is still running.

Expected Result
=================
Brick processes running on the host should be stopped as part of the cleanup when "gluster peer detach <hostname> force" is executed.

Version-Release number of selected component (if applicable):
==============================================================
root@king [Jul-12-2013-11:40:46] >rpm -qa | grep glusterfs-server
glusterfs-server-3.4.0.12rhs.beta3-1.el6rhs.x86_64
root@king [Jul-12-2013-11:40:50] >gluster --version
glusterfs 3.4.0.12rhs.beta3 built on Jul 6 2013 14:35:18

How reproducible:
=================
Often

Steps to Reproduce:
====================
1. Create a 2 x 2 replicate volume on 4 storage nodes {node1, node2, node3, node4} and start the volume.
2. On node1 execute: "gluster peer detach <node4> force"
   The replicate volume information is deleted on node4, but the brick process keeps running.
3. From node1 execute: "gluster peer probe <node4>"

Actual results:
===============
The volume information is synced back to node4, but the brick process cannot be restarted because a brick process is already running (see the reproduction sketch below).
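For reference, a minimal reproduction sketch of the steps above. The volume name "repvol", the brick paths, and the node names are illustrative, not taken from the original report; the commands are standard gluster CLI:

# Names are illustrative: volume "repvol", bricks under /rhs/brick1, nodes node1..node4.
[node1]# gluster volume create repvol replica 2 \
            node1:/rhs/brick1/repvol node2:/rhs/brick1/repvol \
            node3:/rhs/brick1/repvol node4:/rhs/brick1/repvol
[node1]# gluster volume start repvol

# Detach node4; its /var/lib/glusterd/vols/* is cleaned up, but the brick process stays up:
[node1]# gluster peer detach node4 force
[node4]# ls /var/lib/glusterd/vols/            # volume directories are gone
[node4]# ps -ef | grep '[g]lusterfsd'          # stale brick process still running

# Re-probe node4; the volume info syncs back, but glusterd cannot start the brick
# because the stale glusterfsd from before the detach is still holding it:
[node1]# gluster peer probe node4
[node1]# gluster volume status repvol

The exact status output will vary by version; the point of the sketch is that the stale glusterfsd on node4 is what prevents the brick from coming back online after the re-probe.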
There is a bug logged upstream to remove "peer detach force": https://bugzilla.redhat.com/show_bug.cgi?id=983590
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you asked us to review is now End of Life; please see https://access.redhat.com/support/policy/updates/rhs/. If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.