Description of problem:
When you do a gluster peer detach <peer-name>, you would expect all records for that peer to be deleted. Unfortunately, on 3.4.2 this does not happen. Even though glusterd temporarily reports that the peer has been detached, restarting glusterd causes the peer to be added back. I could see that this is due to the peer record not getting deleted from /var/lib/glusterd/peers/ by the peer detach command. To illustrate:

[root@server1]$ gluster peer detach server2
peer detach: success
[root@server1]$ cat /var/lib/glusterd/peers/5226a3d7-9a35-44b6-b9ca-d7
uuid=bd8c9f46-0fb7-4958-b0ca-9f5fab18e5ec
state=3
hostname1=192.168.24.81
[root@server1]$ gluster p status
peer status: No peers present
[root@server1]$ /etc/init.d/glusterd restart
[ OK ]
[root@server1]$ gluster p status
uuid=bd8c9f46-0fb7-4958-b0ca-9f5fab18e5ec
state=3
hostname1=192.168.24.81

Now, if in addition to the detach I also rm -f the peer record from /var/lib/glusterd/peers/, then after a restart the record of the peer disappears (correctly). That is to say, the following:

a) gluster peer detach server2
b) rm -f /var/lib/glusterd/peers/5226a3d7-9a35-44b6-b9ca-d7

IMHO peer detach needs to be corrected so that the peer does not get added back automatically.

Version-Release number of selected component (if applicable):
3.4.2

How reproducible:
100%

Steps to Reproduce:
1. Create a two-node replicated cluster comprising server1 and server2.
2. On server1, detach server2 using gluster peer detach server2. [Note: for my testing, I had also powered down server2.]
3. Restart glusterd on server1 (/etc/init.d/glusterd restart).
4. Run gluster peer status on server1 and check for server2 in the peer list.

Actual results:
Before the restart of glusterd, gluster correctly reports that no peer is present. After the restart, however, glusterd re-initializes from the stale record of the deleted peer, which causes the latter to show up in the peer list.

Expected results:
The detached peer should not appear in the gluster peer status output.

Additional info:
See the consolidated workaround sketch below.
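For completeness, here is a minimal shell sketch of the manual cleanup described above, under a couple of assumptions from my setup: PEER and ADDR are placeholders (the name server2 and the address 192.168.24.81 recorded in the peer file's hostname1 field); substitute the values your cluster actually recorded, and verify the matched file before deleting anything.

  #!/bin/sh
  # Workaround sketch for 3.4.x: detach the peer, then remove its
  # persisted record so glusterd does not resurrect it on restart.
  PEER=server2          # placeholder: peer name as known to gluster
  ADDR=192.168.24.81    # placeholder: address stored in hostname1

  gluster peer detach "$PEER"

  # Remove any peer file under /var/lib/glusterd/peers/ whose
  # hostname1 entry matches the detached peer's address.
  for f in /var/lib/glusterd/peers/*; do
      grep -q "hostname1=$ADDR" "$f" 2>/dev/null && rm -f "$f"
  done

  /etc/init.d/glusterd restart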
Anirban,

The 3.4.2 codebase is quite old. Can you please re-test this on 3.6? I don't think this issue is present any more.

-Atin
(In reply to Atin Mukherjee from comment #1)
> Anirban,
>
> The 3.4.2 codebase is quite old. Can you please re-test this on 3.6? I don't
> think this issue is present any more.
>
> -Atin

We'll be moving to 3.6 in a week or two. We will retest then.
GlusterFS 3.7.0 has been released (http://www.gluster.org/pipermail/gluster-users/2015-May/021901.html), and the Gluster project maintains N-2 supported releases. The last two releases before 3.7 are still maintained; at the moment these are 3.6 and 3.5.

This bug has been filed against the 3.4 release and will not get fixed in a 3.4 version any more. Please verify whether newer versions are affected by the reported problem. If that is the case, update the bug with a note, and update the version if you can. In case updating the version is not possible, leave a comment in this bug report with the version you tested, and set the "Need additional information the selected bugs from" field below the comment box to "bugs".

If there is no response by the end of the month, this bug will be closed automatically.
GlusterFS 3.4.x has reached end-of-life.

If this bug still exists in a later release, please reopen this bug and change the version, or open a new bug.