Description of problem:
=======================
In a scenario where one of the master nodes is down and a geo-replication config change is made from another master node, the change succeeds. When the downed node comes back online, it is never notified of the change, so its geo-replication config stays out of sync with the rest of the cluster.

Example:
========
Node 1 (was brought down) still shows the old config value after coming back up:

[root@dhcp37-88 ~]# gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes
false
[root@dhcp37-88 ~]#

Node 2 (where the config change was made while node 1 was down):

[root@dhcp37-52 scripts]# gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes
true
[root@dhcp37-52 scripts]#

Result: data inconsistency at the slave, depending on which node's value takes effect. In this example, deletes do not get propagated to the slave.

Version-Release number of selected component (if applicable):
=============================================================
glusterfs-3.7.9-10

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Bring down one of the master nodes.
2. Perform a geo-replication config change from another master node (a command sketch is given under Additional info below).
3. Bring the downed master node back up.

Actual results:
===============
Conflicting geo-replication config files across the master nodes.

Additional info:
================
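A rough reproduction sketch, assuming the master volume "red" and slave "10.70.37.213::hat" from the example above, and assuming the node outage is simulated by stopping glusterd; adapt the names to the actual environment:

    # On node 1: simulate the node going down (assumption: stopping glusterd is enough for this test)
    systemctl stop glusterd

    # On node 2: change a geo-replication config option while node 1 is down
    gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes true

    # On node 1: bring the node back
    systemctl start glusterd

    # On each master node: query the option and compare; with this bug, node 1 still reports the old value
    gluster volume geo-replication red 10.70.37.213::hat config ignore_deletes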
Geo-replication support has been added to the Glusterd2 project, which will be available with the Gluster upstream 4.0 and 4.1 releases. Most of the issues are already fixed under https://github.com/gluster/glusterd2/issues/271, and the remaining fixes are tracked in https://github.com/gluster/glusterd2/issues/557. We can close these issues since no fixes are planned for the 3.x series.