Bug 820428 - [RFE] Geo-replication is not automatically restarted on remaining Masters
Summary: [RFE] Geo-replication is not automatically restarted on remaining Masters
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: All
OS: All
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: Csaba Henk
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2012-05-09 22:05 UTC by Andreas Kurz
Modified: 2015-04-09 11:05 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-04-09 11:05:58 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Andreas Kurz 2012-05-09 22:05:57 UTC
Description of problem:

Starting geo-replication on a replicated volume only ever starts the gsyncd processes on the node where the command was executed. There is no automatic restart of geo-replication if that node dies.
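
As a rough illustration (volume and slave names below are placeholders, not taken from this report), the behaviour can be observed like this on 3.2.x:

  # run on node1 to start the session
  gluster volume geo-replication <MASTER_VOL> <SLAVE> start

  # gsyncd shows up only on node1; on the other replica nodes this finds nothing
  ps ax | grep gsyncd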
 
Version-Release number of selected component (if applicable):

3.2.6

How reproducible:

Always

Steps to Reproduce:
1. start geo-replication on node1
2. reset node1
3. check geo-replication status on the remaining node (see the command sketch after this list)
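
The status check in step 3 corresponds to the following command, using the same placeholder session names as above:

  # run on the surviving master node
  gluster volume geo-replication <MASTER_VOL> <SLAVE> status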
  
Actual results:

Geo-replication is not running

Expected results:

Geo-replication is running

Comment 1 Niels de Vos 2014-11-27 14:45:10 UTC
Feature requests make most sense against the 'mainline' release: there is no ETA for an implementation, and requests might get forgotten when filed against a particular version.

Comment 2 Aravinda VK 2015-04-09 11:05:58 UTC
This is not applicable with Distributed Geo-replication in releases after 3.5. Closing this bug. Please reopen if the issue is found again.
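
For context: with distributed geo-replication (glusterfs >= 3.5) the session runs a worker per master brick rather than a single gsyncd on one node, and the per-node worker state can be checked with the status command (volume names below are placeholders):

  # reports one worker per master brick, marked Active or Passive
  gluster volume geo-replication <MASTER_VOL> <SLAVE_HOST>::<SLAVE_VOL> status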

