Bug 763658 - (GLUSTER-1926) peer detach from one cluster and adding it to another cluster
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.1-alpha
Hardware: All
OS: Linux
Priority: low
Severity: low
Assigned To: Kaushal
Duplicates: 763517
Depends On:
Blocks:
 
Reported: 2010-10-12 01:21 EDT by Lakshmipathi G
Modified: 2011-10-03 00:56 EDT
CC List: 4 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed:
Type: ---
Regression: RTP
Mount Type: All
Documentation: DNR
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:


Attachments: None
Description Lakshmipathi G 2010-10-11 22:59:27 EDT
After the bricks were added to the afr cluster, untarring a kernel tree failed on both the dht and afr clusters.
Comment 1 Lakshmipathi G 2010-10-12 01:21:26 EDT
1. Created a dht cluster on brick1 and brick2.
2. Created an afr cluster on brick3 and brick4.
3. Started volumes on both clusters.

Now, by mistake, peer probe for the new brick5 and brick6 was run from the dht cluster instead of the afr cluster.

Peer detach was then done for brick5 and brick6 from the dht cluster, and peer probe was run from the afr cluster for these new bricks (see the command sketch below).
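For reference, a minimal sketch of the command sequence (brick1..brick6 and the dht-node/afr-node prompts are placeholder host names for the servers above; each command is run on an existing member of the named cluster):

# From the dht cluster: the accidental probes
dht-node# gluster peer probe brick5
dht-node# gluster peer probe brick6

# The attempted undo
dht-node# gluster peer detach brick5
dht-node# gluster peer detach brick6

# From the afr cluster: the intended probes
afr-node# gluster peer probe brick5
afr-node# gluster peer probe brick6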

Now volume info on the afr bricks shows the dht cluster's volume too, and showmount says:

afr-brick#showmount -e localhost
Export list for localhost:
/afr46 *
/dht46 *
========

#gluster volume info

Volume Name: afr46
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.214.231.112:/mnt/afr46
Brick2: 10.198.110.16:/mnt/afr46
Brick3: 10.240.94.228:/mnt/afr46
Brick4: 10.212.70.131:/mnt/afr46

Volume Name: dht46
Type: Distribute
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.192.134.144:/mnt/dht46
Brick2: 10.192.141.187:/mnt/dht46
Brick3: 10.202.151.207:/mnt/dht46
Comment 2 Amar Tumballi 2011-04-25 05:33:44 EDT
Please update the status of this bug, as it has been more than 6 months since it was filed (bug id < 2000).

Please resolve it with the proper resolution if it is no longer valid. If it is still valid but not critical, move it to 'enhancement' severity.
Comment 3 Amar Tumballi 2011-09-12 22:09:19 EDT
Let's see what needs to be done here. Check the behavior again, and we will come up with a solution for this. Work with Lakshmipathi on this.
Comment 4 Kaushal 2011-09-14 02:45:44 EDT
The problem here is that the volume files of the cluster are left over on the peer when peer detach happens. So when the peer is added to a new cluster, the volumes from the old cluster are imported into the new cluster, even though those volumes do not belong to it.
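To illustrate (a hypothetical sketch; the /etc/glusterd path is an assumption based on 3.1-era defaults and is not stated in this report): after the detach, the old cluster's volume definition is still present on the detached peer, e.g.

brick5# ls /etc/glusterd/vols    # path is an assumption (3.1-era default working directory)
dht46

and that stale dht46 tree is what gets imported into the new cluster on the next peer probe.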

A solution would be to purge the detached peer of any volumes belonging to the cluster from which it was detached. This could be done at detach time, or via a new command that cleans up unneeded volume configs; a sketch of the manual equivalent follows.
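As a stopgap, the manual equivalent might look like this on the detached peer (a sketch only; the /etc/glusterd paths and the init script invocation are assumptions based on 3.1-era defaults, not taken from this report):

brick5# /etc/init.d/glusterd stop
brick5# rm -rf /etc/glusterd/vols/*     # stale volume configs from the old cluster (assumed path)
brick5# rm -f /etc/glusterd/peers/*     # stale peer info (assumed path)
brick5# /etc/init.d/glusterd start

After this, the peer can be probed into the new cluster without dragging the old volumes along.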

Any other ideas?
Comment 5 Anand Avati 2011-10-01 05:54:59 EDT
CHANGE: http://review.gluster.com/431 (Performs cleanup on the detached peer and in the cluster after a) merged in master by Vijay Bellur (vijay@gluster.com)
Comment 6 Kaushal 2011-10-02 21:56:40 EDT
*** Bug 1785 has been marked as a duplicate of this bug. ***
