Description of problem:
=======================
Geo-replication status was CREATED/ACTIVE as opposed to ACTIVE/PASSIVE.

The geo-replication session was started, and the following was shown as the status of the session:

[root@dhcp41-226 scripts]# gluster volume geo-replication master 10.70.41.160::slave status

MASTER NODE     MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
10.70.41.226    master        /rhs/brick3/b7    root          10.70.41.160::slave    N/A             Created    N/A                N/A
10.70.41.226    master        /rhs/brick1/b1    root          10.70.41.160::slave    N/A             Created    N/A                N/A
10.70.41.230    master        /rhs/brick2/b5    root          10.70.41.160::slave    N/A             Created    N/A                N/A
10.70.41.229    master        /rhs/brick2/b4    root          10.70.41.160::slave    N/A             Created    N/A                N/A
10.70.41.219    master        /rhs/brick2/b6    root          10.70.41.160::slave    N/A             Created    N/A                N/A
10.70.41.227    master        /rhs/brick3/b8    root          10.70.41.160::slave    N/A             Created    N/A                N/A
10.70.41.227    master        /rhs/brick1/b2    root          10.70.41.160::slave    N/A             Created    N/A                N/A
10.70.41.228    master        /rhs/brick3/b9    root          10.70.41.160::slave    10.70.41.160    Active     Changelog Crawl    2018-04-23 06:13:53
10.70.41.228    master        /rhs/brick1/b3    root          10.70.41.160::slave    10.70.42.79     Active     Changelog Crawl    2018-04-23 06:13:53

Version-Release number of selected component (if applicable):
=============================================================
[root@dhcp41-226 ~]# rpm -qa | grep gluster
glusterfs-fuse-3.12.2-7.el7rhgs.x86_64
glusterfs-geo-replication-3.12.2-7.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
glusterfs-libs-3.12.2-7.el7rhgs.x86_64
glusterfs-cli-3.12.2-7.el7rhgs.x86_64
libvirt-daemon-driver-storage-gluster-3.9.0-14.el7_5.2.x86_64
glusterfs-rdma-3.12.2-7.el7rhgs.x86_64
glusterfs-events-3.12.2-7.el7rhgs.x86_64
glusterfs-3.12.2-7.el7rhgs.x86_64
glusterfs-client-xlators-3.12.2-7.el7rhgs.x86_64
glusterfs-server-3.12.2-7.el7rhgs.x86_64
vdsm-gluster-4.19.43-2.3.el7rhgs.noarch
python2-gluster-3.12.2-7.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
glusterfs-api-3.12.2-7.el7rhgs.x86_64

How reproducible:
=================
2/2

Steps to Reproduce:
===================
1. Create a master and a slave cluster of 6 nodes each.
2. Create and start the master volume (tiered: cold-tier 1x(4+2) and hot-tier 1x3).
3. Create and start the slave volume (tiered: cold-tier 1x(4+2) and hot-tier 1x3).
4. Enable quota on the master volume.
5. Enable shared storage on the master volume.
6. Set up a geo-rep session between the master and slave volumes.
7. Mount the master volume on a client.
8. Create data from the master client.
(See the command sketch at the end of this report.)

Actual results:
===============
gsyncd was down on 5 nodes out of 6.
Once started, the geo-rep status was ACTIVE/Created.

Expected results:
=================
gsyncd should be up on all nodes.
Once started, the geo-rep status should be ACTIVE/PASSIVE.

Sosreports at:
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/rallan/1570586/
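For reference, a rough sketch of the CLI commands behind the reproduction steps above. Only the volume names (master, slave) and the slave host (10.70.41.160) come from this report; the node names (node1..node6), brick paths, and mount point are assumptions for illustration.

# Steps 2-3: create and start the tiered master volume: 1x(4+2) disperse cold tier,
# then attach a 1x3 replica hot tier (repeat equivalently for the slave volume).
gluster volume create master disperse 6 redundancy 2 \
    node1:/rhs/cold/b1 node2:/rhs/cold/b2 node3:/rhs/cold/b3 \
    node4:/rhs/cold/b4 node5:/rhs/cold/b5 node6:/rhs/cold/b6
gluster volume start master
gluster volume tier master attach replica 3 \
    node1:/rhs/hot/h1 node2:/rhs/hot/h2 node3:/rhs/hot/h3

# Step 4: enable quota on the master volume
gluster volume quota master enable

# Step 5: enable the cluster-wide shared storage volume
gluster volume set all cluster.enable-shared-storage enable

# Step 6: set up and start the geo-rep session
gluster system:: execute gsec_create
gluster volume geo-replication master 10.70.41.160::slave create push-pem
gluster volume geo-replication master 10.70.41.160::slave start

# Step 7: mount the master volume on a client
mount -t glusterfs node1:/master /mnt/master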
Looks like a setup issue on some nodes.
(In reply to Aravinda VK from comment #3)
> Looks like some setup issue in some nodes.

AFAIK, this is only seen when a tier is involved. Otherwise, geo-replication on the same cluster works as expected.
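For anyone trying to confirm the state described under "Actual results", a quick check (assuming the same master/slave names as above) is to look for gsyncd processes on each master node and compare with the per-brick worker view:

# On each master node: monitor/worker processes should be present once the session is started
ps -ef | grep '[g]syncd'

# From any master node: per-brick worker status
gluster volume geo-replication master 10.70.41.160::slave status detail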
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2607