Bug 983852 - "gluster peer detach <hostname> force" command not working as expected
Summary: "gluster peer detach <hostname> force" command not working as expected
Keywords:
Status: CLOSED EOL
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.1
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: spandura
URL:
Whiteboard: glusterd
Depends On:
Blocks:
 
Reported: 2013-07-12 06:16 UTC by spandura
Modified: 2015-12-03 17:22 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-12-03 17:22:08 UTC
Embargoed:



Description spandura 2013-07-12 06:16:07 UTC
Description of problem:
=========================
As part of the clean-up phase of "gluster peer detach <hostname> force" command execution, all volume information is removed from the detached host, i.e. the command removes all directories under "/var/lib/glusterd/vols/*". However, it fails to stop the brick processes running on the detached host.

If the peer is later re-attached using "gluster peer probe <hostname>", the volume information is synced back to the host, but the host is unable to start its brick processes because brick processes are already running there.
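
A minimal sketch (hostname and checks are illustrative) of how the leftover brick process can be observed on the detached node:

# On the detached node (e.g. node4): the volume metadata is gone...
ls /var/lib/glusterd/vols/
# ...but the brick process (glusterfsd) is still running:
ps aux | grep '[g]lusterfsd'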

Expected Result
=================
Brick processes running on the host should be stopped as part of the cleanup when "gluster peer detach <hostname> force" is executed.
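
A possible interim workaround (an assumption, not a verified fix) is to stop the stale brick processes manually on the detached host before re-probing it:

# On the detached host: list leftover brick processes, then stop them
# (verify the PIDs before killing anything):
pgrep -fl glusterfsd
pkill glusterfsd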

Version-Release number of selected component (if applicable):
==============================================================
root@king [Jul-12-2013-11:40:46] >rpm -qa | grep glusterfs-server
glusterfs-server-3.4.0.12rhs.beta3-1.el6rhs.x86_64

root@king [Jul-12-2013-11:40:50] >gluster --version
glusterfs 3.4.0.12rhs.beta3 built on Jul  6 2013 14:35:18

How reproducible:
=================
Often

Steps to Reproduce:
====================
1. Create a 2 x 2 distributed-replicate volume on 4 storage nodes { node1, node2, node3 and node4 }. Start the volume.

2. On node1, execute: "gluster peer detach <node4> force"

The volume information is deleted on node4, but the brick process is still running.

3. From node1, execute: "gluster peer probe <node4>"

Actual results:
=============
The volume information is synced, but we are unable to restart the brick process because a brick process is already running on the host; the full sequence is sketched below.
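
A minimal sketch of the reproduction sequence (volume name, hostnames and brick paths are illustrative assumptions):

# Step 1: on node1, create and start a 2 x 2 distributed-replicate volume
gluster volume create repvol replica 2 \
    node1:/bricks/b1 node2:/bricks/b1 \
    node3:/bricks/b1 node4:/bricks/b1
gluster volume start repvol

# Step 2: on node1, force-detach node4
gluster peer detach node4 force

# On node4, /var/lib/glusterd/vols/ is now empty,
# but the glusterfsd brick process is still running:
ps aux | grep '[g]lusterfsd'

# Step 3: on node1, re-probe node4; the volume info syncs back,
# but node4 cannot start its brick because one is already running
gluster peer probe node4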

Comment 2 M S Vishwanath Bhat 2013-07-12 08:59:40 UTC
A bug was logged upstream to remove "peer detach force": https://bugzilla.redhat.com/show_bug.cgi?id=983590

Comment 3 Vivek Agarwal 2015-12-03 17:22:08 UTC
Thank you for submitting this issue for consideration in Red Hat Gluster Storage. The release against which you requested a review is now End of Life. Please see https://access.redhat.com/support/policy/updates/rhs/

If you can reproduce this bug against a currently maintained version of Red Hat Gluster Storage, please feel free to file a new report against the current release.

