Bug 763658 (GLUSTER-1926)
Summary: | peer detach from one cluster and adding it to another cluster | | |
---|---|---|---|
Product: | [Community] GlusterFS | Reporter: | Lakshmipathi G <lakshmipathi> |
Component: | glusterd | Assignee: | Kaushal <kaushal> |
Status: | CLOSED CURRENTRELEASE | QA Contact: | |
Severity: | low | Docs Contact: | |
Priority: | low | | |
Version: | 3.1-alpha | CC: | amarts, gluster-bugs, rabhat, vijay |
Target Milestone: | --- | | |
Target Release: | --- | | |
Hardware: | All | | |
OS: | Linux | | |
Whiteboard: | | | |
Fixed In Version: | | Doc Type: | Bug Fix |
Doc Text: | | Story Points: | --- |
Clone Of: | | Environment: | |
Last Closed: | | Type: | --- |
Regression: | RTP | Mount Type: | All |
Documentation: | DNR | CRM: | |
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: | | | |
Description
Lakshmipathi G
2010-10-12 02:59:27 UTC
1. Created a dht volume on brick1 and brick2.
2. Created an afr volume on brick3 and brick4, and started both volumes.

Now, by mistake, peer probe for the new nodes brick5 and brick6 was issued from the dht cluster instead of the afr cluster. Then peer detach was done for brick5 and brick6 from the dht cluster, and peer probe was done for these new nodes from the afr cluster. Now volume info on the afr nodes shows the dht volume as well, and showmount confirms it:

```
afr-brick# showmount -e localhost
Export list for localhost:
/afr46 *
/dht46 *

# gluster volume info

Volume Name: afr46
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 10.214.231.112:/mnt/afr46
Brick2: 10.198.110.16:/mnt/afr46
Brick3: 10.240.94.228:/mnt/afr46
Brick4: 10.212.70.131:/mnt/afr46

Volume Name: dht46
Type: Distribute
Status: Started
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: 10.192.134.144:/mnt/dht46
Brick2: 10.192.141.187:/mnt/dht46
Brick3: 10.202.151.207:/mnt/dht46
```

Please update the status of this bug, as it has been more than 6 months since it was filed (bug id < 2000). Please resolve it with a proper resolution if it is no longer valid. If it is still valid but not critical, move it to 'enhancement' severity.

Let's see what needs to be done here. Check the behavior again, and we will come up with a solution for this. Work with Lakshmipathi on this.

The problem here is that volume files of the cluster are left over on the peer when peer detach happens. So when the peer is added to a new cluster, the volumes from the old cluster are added to the new cluster even though those volumes do not belong to it. A solution would be to purge the detached peer of any volumes belonging to the cluster from which it was detached. This could be done at the time of detach, or via a new command which could clean unneeded volume configs. Any other ideas?

CHANGE: http://review.gluster.com/431 (Performs cleanup on the detached peer and in the cluster after a) merged in master by Vijay Bellur (vijay)
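For reference, the reproduction steps above boil down to roughly the following CLI sequence. This is a sketch only: the hostnames brick1 through brick6 and the /mnt brick paths are placeholders standing in for the actual machines in the report.

```sh
# On a node of the dht cluster: build and start the distribute volume.
gluster peer probe brick2
gluster volume create dht46 transport tcp brick1:/mnt/dht46 brick2:/mnt/dht46
gluster volume start dht46

# On a node of the afr cluster: build and start the replicate volume.
gluster peer probe brick4
gluster volume create afr46 replica 2 transport tcp brick3:/mnt/afr46 brick4:/mnt/afr46
gluster volume start afr46

# Mistake: probe the two new nodes from the dht cluster ...
gluster peer probe brick5
gluster peer probe brick6

# ... detach them again ...
gluster peer detach brick5
gluster peer detach brick6

# ... and probe them from the afr cluster instead.
gluster peer probe brick5
gluster peer probe brick6

# On the afr nodes, volume info now also lists dht46,
# even though that volume belongs to the other cluster.
gluster volume info
```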
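Before the linked change, one possible manual workaround (an assumption on my part, not something stated in the thread) would be to purge the stale volume definition from the detached peer's glusterd store before probing it into the new cluster. The store lives under the glusterd working directory (/var/lib/glusterd on current builds; 3.1-era builds used /etc/glusterd), and dht46 below is just the volume from this example.

```sh
# Run on the detached peer (e.g. brick5) before probing it into the afr cluster.
# Assumes the default glusterd working directory; adjust to /etc/glusterd on
# 3.1-era installs.

# Stop the management daemon so it does not rewrite its store while we edit it.
service glusterd stop

# Remove the leftover volume definition carried over from the old dht cluster.
rm -rf /var/lib/glusterd/vols/dht46

# Restart glusterd; the peer can now be probed without dragging dht46 along.
service glusterd start
```

The merged change (review 431) performs this cleanup on the detached peer as part of peer detach, so the manual step should only matter on releases that predate it.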