Bug 1117509 - Gluster peer detach does not cleanup peer records causing peer to get added back
Summary: Gluster peer detach does not cleanup peer records causing peer to get added back
Keywords:
Status: CLOSED EOL
Alias: None
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.4.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-07-08 20:29 UTC by Anirban Ghoshal
Modified: 2015-10-07 13:50 UTC
3 users

Fixed In Version:
Clone Of:
Environment:
Last Closed:
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:


Attachments

Description Anirban Ghoshal 2014-07-08 20:29:45 UTC
Description of problem:

When you run gluster peer detach <peer-name>, you would expect all records for that peer to be deleted. Unfortunately, on 3.4.2 I can see that this does not happen. Even though glusterd temporarily reports that the peer has been detached, restarting glusterd causes the peer to be added back.

I could see that this had to do with the peer record not getting deleted from /var/lib/glusterd/peers/ by the peer detach command. To illustrate:

root@server1]$ gluster peer detach server2
peer detach: success

root@server1]$ cat /var/lib/glusterd/peers/5226a3d7-9a35-44b6-b9ca-d7
uuid=bd8c9f46-0fb7-4958-b0ca-9f5fab18e5ec
state=3
hostname1=192.168.24.81
 
root@server1]$ gluster p status
peer status: No peers present

root@server1]$ /etc/init.d/glusterd restart
[ OK ]

root@server1]$ gluster p status
uuid=bd8c9f46-0fb7-4958-b0ca-9f5fab18e5ec
state=3
hostname1=192.168.24.81

Now, if I follow the detach with an rm -f of the peer record under /var/lib/glusterd, then after the restart the record of the peer disappears (correctly). That is to say, the following sequence works:

a) gluster peer detach server2
b) rm -f /var/lib/glusterd/peers/5226a3d7-9a35-44b6-b9ca-d7

IMHO, peer detach needs to clean up the on-disk peer record itself so that the peer does not get added back automatically.
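
For reference, a scripted form of the manual workaround in steps a) and b) above (a sketch only; the UUID is the one from the transcript, and the paths assume the stock /var/lib/glusterd layout, so adjust both for your own cluster):

gluster peer detach server2
# find and remove the stale store file whose uuid= line matches the detached peer
grep -l 'uuid=bd8c9f46-0fb7-4958-b0ca-9f5fab18e5ec' /var/lib/glusterd/peers/* 2>/dev/null | xargs -r rm -f
# after a glusterd restart the peer should now stay gone
/etc/init.d/glusterd restart
gluster peer status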

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Create a two-node replicated cluster comprising server1 and server2.
2. Detach server2 from server1 using gluster peer detach server2. [Note: for my testing, I had also powered down server2.]
3. Restart glusterd on server1 (/etc/init.d/glusterd restart).
4. Run gluster peer status on server1 and check for server2 in the peer list. (A scripted form of these steps is sketched below.)
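
A rough scripted form of the steps above (a sketch; it assumes the two-node cluster from step 1 already exists and that everything is run on server1):

# step 2: detach the peer (server2 may even be powered down, as in the original test)
gluster peer detach server2
gluster peer status          # reports "No peers present" at this point
# step 3: restart glusterd
/etc/init.d/glusterd restart
# step 4: on 3.4.2 the detached peer reappears in this output
gluster peer status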

Actual results:

Before the restart of glusterd, gluster correctly reports that no peer is present. But after glusterd is restarted, it re-initializes with the record of the deleted peer, which causes that peer to show up in the peer list.

Expected results:

The detached peer should not appear in the gluster peer status output.
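
A quick way to check for the underlying problem (a sketch, assuming the stale-record explanation above): immediately after a successful detach, and before glusterd is restarted, nothing under /var/lib/glusterd/peers/ should still reference the detached peer.

# in the two-node setup above, the peers directory should be empty and the grep should match nothing
ls /var/lib/glusterd/peers/
grep -r 'hostname1=192.168.24.81' /var/lib/glusterd/peers/ 2>/dev/null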

Additional info:
None.

Comment 1 Atin Mukherjee 2014-10-31 01:42:40 UTC
Anirban,

The 3.4.2 codebase is quite old; can you please re-test this on 3.6? I don't think this issue is present any more.

-Atin

Comment 2 Anirban Ghoshal 2015-01-22 16:49:52 UTC
(In reply to Atin Mukherjee from comment #1)
> Anirban,
> 
> The 3.4.2 codebase is quite old; can you please re-test this on 3.6? I don't
> think this issue is present any more.
> 
> -Atin

We'll be moving to 3.6 in a week or two and will retest then.

Comment 3 Niels de Vos 2015-05-17 21:59:55 UTC
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release, and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs".

If there is no response by the end of the month, this bug will get automatically closed.

Comment 4 Kaleb KEITHLEY 2015-10-07 13:49:43 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release please reopen this and change the version or open a new bug.

Comment 6 Kaleb KEITHLEY 2015-10-07 13:50:53 UTC
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release please reopen this and change the version or open a new bug.

