Bug 763658 (GLUSTER-1926) - peer detach from one cluster and adding it to another cluster
Summary: peer detach from one cluster and adding it to another cluster
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: GLUSTER-1926
Product: GlusterFS
Classification: Community
Component: glusterd
Version: 3.1-alpha
Hardware: All
OS: Linux
Priority: low
Severity: low
Target Milestone: ---
Assignee: Kaushal
QA Contact:
URL:
Whiteboard:
Duplicates: 763517
Depends On:
Blocks:
 
Reported: 2010-10-12 05:21 UTC by Lakshmipathi G
Modified: 2011-10-03 04:56 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: RTP
Mount Type: All
Documentation: DNR
CRM:
Verified Versions:



Description Lakshmipathi G 2010-10-12 02:59:27 UTC
After adding bricks to the afr volume, kernel untar started to fail on both the dht and afr clusters.
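
For context, the failing workload is a plain kernel-source untar on the client mount. A minimal version of it (the mount point and tarball name are assumptions, not from the report):

# kernel-untar sanity workload on the glusterfs client mount
# (mount point and tarball name are assumptions, not from the report)
cd /mnt/glusterfs
tar xf linux-2.6.35.tar.bz2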

Comment 1 Lakshmipathi G 2010-10-12 05:21:26 UTC
1. Created a dht cluster on brick1 and brick2.
2. Created an afr cluster on brick3 and brick4, and started both volumes.

Now, by mistake, peer probe for the new brick5 and brick6 was run from the dht cluster instead of the afr cluster.

Then brick5 and brick6 were peer-detached from the dht cluster and peer-probed from the afr cluster.
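
In gluster CLI terms, the sequence was roughly as follows (brick5/brick6 stand in for the actual hostnames; the report identifies machines only by IP):

# run on a dht-cluster node: the mistaken probes, then the detach
gluster peer probe brick5
gluster peer probe brick6
gluster peer detach brick5
gluster peer detach brick6

# run on an afr-cluster node: probe the same machines again
gluster peer probe brick5
gluster peer probe brick6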

Now volume info shows the dht volume on the afr bricks too, and showmount says:

afr-brick#showmount -e localhost
Export list for localhost:
/afr46 *
/dht46 *
========

#gluster volume info

Volume Name: afr46
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.214.231.112:/mnt/afr46
Brick2: 10.198.110.16:/mnt/afr46
Brick3: 10.240.94.228:/mnt/afr46
Brick4: 10.212.70.131:/mnt/afr46

Volume Name: dht46
Type: Distribute
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.192.134.144:/mnt/dht46
Brick2: 10.192.141.187:/mnt/dht46
Brick3: 10.202.151.207:/mnt/dht46

Comment 2 Amar Tumballi 2011-04-25 09:33:44 UTC
Please update the status of this bug, as it's been more than 6 months since it was filed (bug id < 2000).

Please resolve it with a proper resolution if it's not valid anymore. If it's still valid and not critical, move it to 'enhancement' severity.

Comment 3 Amar Tumballi 2011-09-13 02:09:19 UTC
Let's see what needs to be done here. Check the behavior again, and we will come up with a solution for this. Work with Lakshmipathi on this.

Comment 4 Kaushal 2011-09-14 06:45:44 UTC
The problem here is that the volume files of the cluster are left over on the peer when peer detach happens. So when the peer is added to a new cluster, the volumes from the old cluster are added to the new cluster, even though those volumes do not belong to it.

A solution would be to purge the detached peer of any volumes belonging to the cluster from which it was detached. This could be done at the time of detach, or via a new command that cleans up unneeded volume configs.
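
Until then, a manual workaround sketch: stop glusterd on the detached peer and delete the stale volume directories from its working directory before probing it into the new cluster. The paths below assume a 3.1-era /etc/glusterd working directory and that dht46 is the stale volume:

# on the detached peer: remove the leftover volume config by hand
# (workdir is an assumption; 3.1-era builds used /etc/glusterd,
# later releases use /var/lib/glusterd)
service glusterd stop
rm -rf /etc/glusterd/vols/dht46
service glusterd start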

Any other ideas?

Comment 5 Anand Avati 2011-10-01 09:54:59 UTC
CHANGE: http://review.gluster.com/431 (Performs cleanup on the detached peer and in the cluster after a) merged in master by Vijay Bellur (vijay)

Comment 6 Kaushal 2011-10-03 01:56:40 UTC
*** Bug 1785 has been marked as a duplicate of this bug. ***

