Bug 983852 - "gluster peer detach <hostname> force" command not working as expected
"gluster peer detach <hostname> force" command not working as expected
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: glusterd
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Assigned To: Bug Updates Notification Mailing List
Reported: 2013-07-12 02:16 EDT by spandura
Modified: 2015-12-03 12:22 EST

Doc Type: Bug Fix
Last Closed: 2015-12-03 12:22:08 EST
Type: Bug

Description spandura 2013-07-12 02:16:07 EDT
Description of problem:
During the clean-up phase of "gluster peer detach <hostname> force", all volume information is removed from the detached host, i.e. the command removes all directories under "/var/lib/glusterd/vols/*". However, it fails to stop the brick processes running on the detached host.

If the peer is then re-attached using "gluster peer probe <hostname>", the volume information is synced back to the host, but the host is unable to start its brick processes because brick processes are already running there.
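A minimal command sequence that illustrates the problem (node names below are placeholders):

# On node1: force-detach node4; this wipes /var/lib/glusterd/vols/* on node4
gluster peer detach node4 force

# On node4: the brick daemon (glusterfsd) is still running
ps aux | grep '[g]lusterfsd'

# On node1: re-attach node4; the volume info is synced back, but glusterd on
# node4 cannot start its brick because the stale glusterfsd is still alive
gluster peer probe node4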

Expected results:
Brick processes running on the host should be stopped as part of the cleanup when "gluster peer detach <hostname> force" is executed.

Version-Release number of selected component (if applicable):
root@king [Jul-12-2013-11:40:46] >rpm -qa | grep glusterfs-server

root@king [Jul-12-2013-11:40:50] >gluster --version
glusterfs built on Jul  6 2013 14:35:18

How reproducible:

Steps to Reproduce:
1. Create a 2 x 2 distributed-replicate volume across 4 storage nodes { node1, node2, node3 and node4 }. Start the volume.

2. On node1, execute: "gluster peer detach <node4> force"

The volume information is deleted on node4, but the brick process is still running (see the check after these steps).

3. From node1, execute: "gluster peer probe <node4>"
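A quick way to confirm the state on node4 after step 2 (a sketch; exact output will vary):

# The vols directory has been emptied by the forced detach
ls /var/lib/glusterd/vols/

# ...but the brick daemon left behind is still running
ps aux | grep '[g]lusterfsd'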

Actual results:
The volume information is synced back to node4, but the brick process cannot be restarted because a brick process is already running.
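A possible manual workaround (an untested sketch; "vol0" is a placeholder for the volume name):

# On node4: kill the stale brick daemons left behind by the forced detach
pkill -f glusterfsd

# On node1: re-probe the peer, then force-start the volume so the missing
# brick processes are respawned
gluster peer probe node4
gluster volume start vol0 force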
Comment 2 M S Vishwanath Bhat 2013-07-12 04:59:40 EDT
A bug was logged upstream to remove "peer detach force": https://bugzilla.redhat.com/show_bug.cgi?id=983590
Comment 3 Vivek Agarwal 2015-12-03 12:22:08 EST
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release you requested us to review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.
