Bug 1344312 - [geo-rep]: Failover/Failback sections needs additional steps
Summary: [geo-rep]: Failover/Failback sections needs additional steps
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: doc-Administration_Guide
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.1.3
Assignee: Divya
QA Contact: Rahul Hinduja
URL:
Whiteboard:
Depends On:
Blocks: 1311847
 
Reported: 2016-06-09 11:53 UTC by Rahul Hinduja
Modified: 2016-06-29 14:21 UTC (History)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-06-29 14:21:58 UTC
Target Upstream Version:



Description Rahul Hinduja 2016-06-09 11:53:20 UTC
Document URL: 
=============

https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3.1/html/Administration_Guide/index.html

Section Number and Name: 
========================

14.6. Disaster Recovery

Describe the issue and suggestion: 
==================================

Performing a Failover and Failback:
+++++++++++++++++++++++++++++++++++

When the original master comes back up, its existing geo-rep session will still be present and will become Online again. Hence the first step should be to stop that existing geo-rep session:

1. Stop the existing geo-rep session from the original master to the original slave using:

gluster volume geo-replication ORIGINAL_MASTER_VOL ORIGINAL_SLAVE_HOST::ORIGINAL_SLAVE_VOL stop <force>

2. Mention specifically that the force option must be used.

Create a new geo-replication session with the original slave as the new master, and the original master as the new slave, using the force option. For more information on setting up and creating a geo-replication session, see Section 14.3.4.1, “Setting Up your Environment for Geo-replication Session”.
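The two suggested steps can be sketched as a shell snippet. The volume and host names are the guide's placeholders, and the `create push-pem` form of the session-creation command is assumed from the standard geo-replication syntax; the commands are printed as a dry run (drop the `echo` to actually execute them):

```shell
# Placeholder names from the admin guide; adapt to your deployment.
MASTER_VOL=ORIGINAL_MASTER_VOL
MASTER_HOST=ORIGINAL_MASTER_HOST
SLAVE_VOL=ORIGINAL_SLAVE_VOL
SLAVE_HOST=ORIGINAL_SLAVE_HOST

# Step 1: stop the stale geo-rep session that came back Online when the
# original master recovered (force, per the reporter's suggestion).
echo gluster volume geo-replication "$MASTER_VOL" "$SLAVE_HOST::$SLAVE_VOL" stop force

# Step 2: create the reversed session - original slave as the new master,
# original master as the new slave - again with the force option.
echo gluster volume geo-replication "$SLAVE_VOL" "$MASTER_HOST::$MASTER_VOL" create push-pem force
```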

Comment 5 Rahul Hinduja 2016-06-13 15:53:35 UTC
The documentation changes mentioned in comment 2 look good to me. Moving the bug to the verified state.

