Bug 1344861

Summary: [geo-rep]: geo-rep config handshake doesn't happen
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Rahul Hinduja <rhinduja>
Component: geo-replication
Assignee: Bug Updates Notification Mailing List <rhs-bugs>
Status: CLOSED WONTFIX
QA Contact: storage-qa-internal <storage-qa-internal>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.1
CC: avishwan, bmohanra, csaba, rcyriac
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
If a Geo-replication configuration change is made while one or more nodes in the Master Cluster are down, those nodes retain the old configuration when they come back up. Workaround: Execute the Geo-replication config command again once all nodes are up. With this, all nodes in the Master Cluster will have the same Geo-replication config options.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-04-16 15:57:30 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On:
Bug Blocks: 1311843

Description Rahul Hinduja 2016-06-11 19:46:52 UTC
Description of problem:
=======================

If one of the master nodes is down and a config change is performed from another master node, the change succeeds. However, once the downed node comes back online, it is not notified of the change.

Example:
========

Node 1 (the node that was brought down) shows the old config value:

[root@dhcp37-88 ~]# gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes  
false
[root@dhcp37-88 ~]#

Node 2 (from where the config change was made while node 1 was down) shows the new value:

[root@dhcp37-52 scripts]# gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes  
true
[root@dhcp37-52 scripts]# 

Result: data inconsistency at the slave, depending on which option value is in effect on which node. In this example, deletes are not performed on the slave.
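The workaround described in the Doc Text can be sketched as follows. The volume name `red`, the slave `10.70.37.213::hat`, and the option value `true` are taken from the example above; this assumes all master nodes are back up before the command is re-run:

```shell
# Re-apply the config option once all master nodes are up, so that
# every node in the master cluster picks up the same value.
gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes true

# On each master node, verify that the value now matches.
gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes
```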


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.9-10


How reproducible:
=================
Always


Steps to Reproduce:
===================
1. Bring down one of the master nodes
2. Perform geo-replication config changes from another master node
3. Bring the downed master node back up
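A minimal sketch of these steps on a two-node master cluster, using the volume and slave names from the example above. The assumption that stopping glusterd is sufficient to simulate the node being down is mine; a full power-off of the node reproduces the issue as well:

```shell
# On node 1: simulate the node going down (assumption: stopping the
# glusterd service is enough to reproduce the problem).
systemctl stop glusterd

# On node 2: change a geo-replication config option while node 1 is down.
gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes true

# On node 1: bring the node back up.
systemctl start glusterd

# Run on both nodes and compare: node 1 still reports the old value,
# since it was never notified of the change.
gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes
```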

Actual results:
===============
Conflicting geo-replication config files across the master nodes


Additional info:

Comment 5 Aravinda VK 2018-02-06 08:08:36 UTC
Geo-replication support has been added to the Glusterd2 project, which will be available with the Gluster upstream 4.0 and 4.1 releases.

Most of the issues are already fixed under https://github.com/gluster/glusterd2/issues/271, and the remaining fixes are tracked in https://github.com/gluster/glusterd2/issues/557.

We can close this issue since no fixes are planned for the 3.x series.