Bug 1344861 - [geo-rep]: geo-rep config handshake doesn't happen
Summary: [geo-rep]: geo-rep config handshake doesn't happen
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: geo-replication
Version: rhgs-3.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Bug Updates Notification Mailing List
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1311843
 
Reported: 2016-06-11 19:46 UTC by Rahul Hinduja
Modified: 2018-04-16 15:57 UTC
4 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
If Geo-replication configuration is changed while one or more nodes in the Master cluster are down, the change is not propagated to those nodes; when they come back up, they still have the old configuration. Workaround: execute the Geo-replication config command again once all nodes are up. With this, all nodes in the Master cluster will have the same Geo-replication config options.
Clone Of:
Environment:
Last Closed: 2018-04-16 15:57:30 UTC
Target Upstream Version:



Description Rahul Hinduja 2016-06-11 19:46:52 UTC
Description of problem:
=======================

If one of the master nodes is down and a config change is performed from another master node, the command succeeds. However, once the down node comes back online, it is not notified of the change.

Example:
========

Node 1 (was down when the change was made) still shows the old config value:

[root@dhcp37-88 ~]# gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes  
false
[root@dhcp37-88 ~]#

Node 2 (from which the config change was made while node 1 was down):

[root@dhcp37-52 scripts]# gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes  
true
[root@dhcp37-52 scripts]# 

Result: data inconsistency at the slave, depending on which option value is in effect on the node driving the sync. In this example, files deleted on the master are not deleted at the slave.
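As stated in the Doc Text workaround, re-running the config command once all nodes are back up brings every master node to the same value. A sketch using the same hostnames, volume names, and option as the example above (requires a live geo-replication session, so outputs are illustrative):

[root@dhcp37-52 ~]# gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes true

Then verify on the node that was down (node 1):

[root@dhcp37-88 ~]# gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes
true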


Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.9-10


How reproducible:
=================
Always


Steps to Reproduce:
===================
1. Bring down one of the master nodes
2. Perform geo-replication config changes from another master node
3. Bring the downed master node back up
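The steps above can be sketched on a two-node master cluster, reusing the hostnames and the ignore_deletes option from the example (stopping glusterd is assumed here as one way to simulate a node being down):

Step 1, on node 1, simulate the node going down:

[root@dhcp37-88 ~]# systemctl stop glusterd

Step 2, on node 2, change the config while node 1 is down:

[root@dhcp37-52 ~]# gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes true

Step 3, bring node 1 back and compare the option value on both nodes:

[root@dhcp37-88 ~]# systemctl start glusterd
[root@dhcp37-88 ~]# gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes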

Actual results:
===============
Conflicting geo-replication config files across the master nodes: the node that was down retains the old option value.


Additional info:

Comment 5 Aravinda VK 2018-02-06 08:08:36 UTC
Geo-replication support has been added to the Glusterd2 project, which will be available with the Gluster upstream 4.0 and 4.1 releases.

Most of the issues are already fixed under https://github.com/gluster/glusterd2/issues/271, and the remaining fixes are tracked in https://github.com/gluster/glusterd2/issues/557.

We can close these issues since we are not planning any fixes for the 3.x series.

