Description of problem:
=======================
When geo-rep sessions are created from one master volume to two slave
volumes (fanout), all bricks of one slave session become PASSIVE. This
happens only when use_meta_volume is set to true.

Slave volumes: slave1 and slave2

Creating geo-rep sessions between the master volume and the slave volumes
(slave1, slave2):

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave1 create push-pem force
Creating geo-replication session between master & 10.70.46.154::slave1 has been successful
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave2 create push-pem force
Creating geo-replication session between master & 10.70.46.154::slave2 has been successful
[root@georep1 scripts]#

Setting use_meta_volume for the slave1 and slave2 sessions:

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave1 config use_meta_volume true
geo-replication config updated successfully
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave2 config use_meta_volume true
geo-replication config updated successfully
[root@georep1 scripts]#

Starting the geo-rep sessions for slave volumes slave1 and slave2:

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave1 start
Starting geo-replication session between master & 10.70.46.154::slave1 has been successful
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave2 start
Starting geo-replication session between master & 10.70.46.154::slave2 has been successful
[root@georep1 scripts]#

Status:
=======
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave1 status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                   SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave1    10.70.46.101    Active     Changelog Crawl    2015-06-23 00:46:12
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave1    10.70.46.101    Active     Changelog Crawl    2015-06-23 00:46:12
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave1    10.70.46.154    Passive    N/A                N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave1    10.70.46.154    Passive    N/A                N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave1    10.70.46.103    Passive    N/A                N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave1    10.70.46.103    Passive    N/A                N/A

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave2 status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                   SLAVE NODE      STATUS     CRAWL STATUS    LAST_SYNCED
------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave2    10.70.46.101    Passive    N/A             N/A
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave2    10.70.46.101    Passive    N/A             N/A
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave2    10.70.46.154    Passive    N/A             N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave2    10.70.46.154    Passive    N/A             N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave2    10.70.46.103    Passive    N/A             N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave2    10.70.46.103    Passive    N/A             N/A

[root@georep1 scripts]#

The second slave volume, slave2, has only Passive bricks, and hence data
is never synced to slave2.
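For context: when use_meta_volume is enabled, the workers for each replica
set elect the Active brick by taking a lock on a shared file in the meta
volume; the worker that wins the lock goes Active and the rest stay
Passive. Below is a minimal sketch of that election, assuming an
fcntl-style non-blocking lock and an illustrative lock path -- this is not
the exact geo-rep worker code:

    import fcntl
    import os

    def try_become_active(lock_path):
        # Attempt the Active/Passive election for one replica set.
        # Returns the fd if this worker won the lock (Active), or None
        # if another worker already holds it (Passive). Sketch only.
        fd = os.open(lock_path, os.O_CREAT | os.O_RDWR, 0o600)
        try:
            # Non-blocking exclusive lock: exactly one worker per lock
            # file wins.
            fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
            return fd
        except BlockingIOError:
            os.close(fd)
            return None

    # Hypothetical path, modeled on the names seen on the meta volume
    # brick below.
    lock = "/tmp/6f023fd5-49a5-4af7-a68a-b7071a8b9ff0_subvol_1.lock"
    print("Active" if try_become_active(lock) else "Passive")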
Lock files on the meta volume brick:

[root@georep1 scripts]# ls /var/run/gluster/ss_brick/geo-rep/
6f023fd5-49a5-4af7-a68a-b7071a8b9ff0_subvol_1.lock  6f023fd5-49a5-4af7-a68a-b7071a8b9ff0_subvol_2.lock
[root@georep1 scripts]#

Note that only two lock files exist, one per master subvolume, even though
two slave sessions are running: both sessions appear to contend for the
same locks, so the slave2 workers always lose the election (see the naming
sketch below).

Version-Release number of selected component (if applicable):
==============================================================
glusterfs-3.7.1-4.el6rhs.x86_64

How reproducible:
=================
1/1
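The listing above suggests the root cause: the lock file name is keyed
only on the master volume id and the subvolume number, so the slave1 and
slave2 sessions elect their Active bricks on the very same files. A
sketch of the naming scheme follows; the fixed variant (adding the slave
volume id) is my assumption about the patch, not a quote of the upstream
code:

    def lock_file_name(master_uuid, subvol_num, slave_uuid=None):
        # Illustrative naming only, not the exact geo-rep code.
        if slave_uuid is None:
            # Buggy scheme: identical for the slave1 and slave2 sessions.
            return "%s_subvol_%d.lock" % (master_uuid, subvol_num)
        # Assumed fixed scheme: unique per (master, slave, subvolume),
        # so each fanout session holds its own election.
        return "%s_%s_subvol_%d.lock" % (master_uuid, slave_uuid, subvol_num)

    master = "6f023fd5-49a5-4af7-a68a-b7071a8b9ff0"
    print(lock_file_name(master, 1))                   # same file for both sessions
    print(lock_file_name(master, 1, "<slave1-uuid>"))  # distinct per session
    print(lock_file_name(master, 1, "<slave2-uuid>"))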
I think the description describes a fanout setup, but the bug title says cascading?
Upstream Patch (master): http://review.gluster.org/#/c/11367/
Upstream Patch (3.7): http://review.gluster.org/#/c/11366/
Downstream Patch: https://code.engineering.redhat.com/gerrit/#/c/51566/
Verified with build: glusterfs-3.7.1-7.el6rhs.x86_64

Bricks in both slave sessions become active, i.e., for every subvolume
there is one active brick per session. Moving the bug to verified state.
Since the problem described in this bug report should be resolved in a
recent advisory, it has been closed with a resolution of ERRATA. For
information on the advisory, and where to find the updated files, follow
the link below. If the solution does not work for you, open a new bug
report.

https://rhn.redhat.com/errata/RHSA-2015-1495.html