Bug 820428 - [RFE] Geo-replication is not automatically restarted on remaining Masters
Status: CLOSED CURRENTRELEASE
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: All
OS: All
Priority: medium
Severity: medium
Assigned To: Csaba Henk
Keywords: FutureFeature
Reported: 2012-05-09 18:05 EDT by Andreas Kurz
Modified: 2015-04-09 07:05 EDT (History)
CC: 5 users

Doc Type: Enhancement
Last Closed: 2015-04-09 07:05:58 EDT
Type: Bug


Attachments: None
Description Andreas Kurz 2012-05-09 18:05:57 EDT
Description of problem:

Starting geo-replication on a replicated volume only ever starts the gsyncd processes on the node where the command was executed. There is no automatic restart of geo-replication on the remaining master nodes if that node dies.
 
Version-Release number of selected component (if applicable):

3.2.6

How reproducible:

Always

Steps to Reproduce:
1. Start geo-replication on node1 (see the command sketch after this list).
2. Reset (power-cycle) node1.
3. Check the geo-replication status on the remaining node.
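
A minimal reproduction sketch against the 3.2-era CLI; the volume name master-vol and the slave URL slave-host:/data/slave are illustrative placeholders:

  # On node1: start the session (gsyncd is spawned only on this node)
  gluster volume geo-replication master-vol slave-host:/data/slave start

  # On node1: confirm it is running
  gluster volume geo-replication master-vol slave-host:/data/slave status

  # Power-cycle node1, then on node2 check the session again
  gluster volume geo-replication master-vol slave-host:/data/slave status
  # The session is not restarted on node2, so replication stops until
  # it is started manually on a surviving node.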
  
Actual results:

Geo-replication is no longer running; the remaining node does not take over the session.

Expected results:

Geo-replication is automatically restarted on the remaining master node.
Comment 1 Niels de Vos 2014-11-27 09:45:10 EST
Feature requests make the most sense against the 'mainline' release: there is no ETA for an implementation, and requests might get forgotten when filed against a particular version.
Comment 2 Aravinda VK 2015-04-09 07:05:58 EDT
This is no longer applicable with Distributed Geo-replication in releases after 3.5, where worker processes are managed on every master node. Closing this bug. Please reopen if the issue is seen again.
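
For reference, a hedged sketch of setting up a Distributed Geo-replication session on 3.5 or later, where workers run across the master nodes and a surviving node takes over if one fails; the names master-vol, slave-vol, and slave-host are illustrative:

  # Run from any master node; the session is managed cluster-wide
  gluster volume geo-replication master-vol slave-host::slave-vol create push-pem
  gluster volume geo-replication master-vol slave-host::slave-vol start

  # Status lists a worker per master brick, so the session survives the
  # loss of the node the commands were issued from
  gluster volume geo-replication master-vol slave-host::slave-vol status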
