Bug 764523 (GLUSTER-2791) - trying to set up geo-replication with a gluster slave on the LAN screws up geo-rep irrecoverably
Summary: trying to set up geo-replication with a gluster slave on the LAN screws up geo-rep irrecoverably
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: GLUSTER-2791
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: x86_64
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Csaba Henk
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-04-18 06:05 UTC by Csaba Henk
Modified: 2011-05-02 08:41 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed:
Regression: RTA
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:



Description Csaba Henk 2011-04-18 06:05:18 UTC
"gluster slave on the LAN": one of the form <host>:<vol>

So if you type

  > volume geo-replication start <myvol> <host>:<vol>

then you end up with a broken entry in the geo-replication status output, on which all operations fail.

By "irrecoverably" I mean that you can't get back the system to a consistent state by means of the cli, but have to hand-hack in glusterd workdir to fix up.

Comment 1 Anand Avati 2011-04-19 06:30:02 UTC
PATCH: http://patches.gluster.com/patch/6955 in master (syncdaemon: pythonize accidentally copied ruby line in code)

Comment 2 Lakshmipathi G 2011-04-29 08:05:21 UTC
Verified with 3.2, synced with a gluster slave on the LAN; files were gsynced from master to slave.

# gluster volume geo-replication masterdht 10.170.182.171:new1 start
Starting geo-replication session between masterdht & 10.170.182.171:new1 has been successful


# gluster volume geo-replication masterdht 10.170.182.171:new1 status
MASTER               SLAVE                                              STATUS    
--------------------------------------------------------------------------------
masterdht            10.170.182.171:new1                                OK

