Created attachment 1034777 [details]
sosreport of master

Description of problem:
========================
Geo-replication failed to start on an EC volume; the session status is Faulty.

[root@dhcp37-100 ~]# gluster v geo-replication status

MASTER NODE                          MASTER VOL    MASTER BRICK       SLAVE USER    SLAVE                           SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------------------------
dhcp37-100.lab.eng.blr.redhat.com    geo-slave     /rhs/brick1/b1     root          ssh://dhcp37-164::geo-master    N/A           Faulty    N/A             N/A
dhcp37-100.lab.eng.blr.redhat.com    geo-slave     /rhs/brick2/b3     root          ssh://dhcp37-164::geo-master    N/A           Faulty    N/A             N/A
dhcp37-100.lab.eng.blr.redhat.com    geo-slave     /rhs/brick3/b5     root          ssh://dhcp37-164::geo-master    N/A           Faulty    N/A             N/A
dhcp37-100.lab.eng.blr.redhat.com    geo-slave     /rhs/brick4/b7     root          ssh://dhcp37-164::geo-master    N/A           Faulty    N/A             N/A
dhcp37-100.lab.eng.blr.redhat.com    geo-slave     /rhs/brick5/b9     root          ssh://dhcp37-164::geo-master    N/A           Faulty    N/A             N/A
dhcp37-100.lab.eng.blr.redhat.com    geo-slave     /rhs/brick6/b11    root          ssh://dhcp37-164::geo-master    N/A           Faulty    N/A             N/A
dhcp37-122.lab.eng.blr.redhat.com    geo-slave     /rhs/brick1/b2     root          ssh://dhcp37-164::geo-master    N/A           Faulty    N/A             N/A
dhcp37-122.lab.eng.blr.redhat.com    geo-slave     /rhs/brick2/b4     root          ssh://dhcp37-164::geo-master    N/A           Faulty    N/A             N/A
dhcp37-122.lab.eng.blr.redhat.com    geo-slave     /rhs/brick3/b6     root          ssh://dhcp37-164::geo-master    N/A           Faulty    N/A             N/A
dhcp37-122.lab.eng.blr.redhat.com    geo-slave     /rhs/brick4/b8     root          ssh://dhcp37-164::geo-master    N/A           Faulty    N/A             N/A
dhcp37-122.lab.eng.blr.redhat.com    geo-slave     /rhs/brick5/b10    root          ssh://dhcp37-164::geo-master    N/A           Faulty    N/A             N/A
dhcp37-122.lab.eng.blr.redhat.com    geo-slave     /rhs/brick6/b12    root          ssh://dhcp37-164::geo-master    N/A           Faulty    N/A             N/A
[root@dhcp37-100 ~]#

Version-Release number of selected component (if applicable):
=============================================================
[root@dhcp37-100 ~]# gluster --version
glusterfs 3.7.0 built on Jun  1 2015 07:14:51
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
[root@dhcp37-100 ~]#

How reproducible:
=================
100%

Steps to Reproduce:
1. Create 1x(8+4) disperse (EC) master and slave volumes.
2. Create a geo-replication session between the two (a sketch of the commands is under Additional info below).
3. Start the session and check its status.

Actual results:
===============
The session goes Faulty on all bricks.

Expected results:
=================
Geo-replication should start and sync normally.

Additional info:
================
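For reference, a minimal sketch of the commands behind the steps above. The volume names (geo-slave as master volume, geo-master as slave volume) and the master brick paths are taken from the status output; the slave brick layout and running everything from dhcp37-100 are assumptions, not taken from the sosreport.

# On the master cluster (bricks as listed in the status output above);
# force is needed because each (8+4) subvolume spans only two nodes
gluster volume create geo-slave disperse 12 redundancy 4 \
    dhcp37-100:/rhs/brick1/b1  dhcp37-122:/rhs/brick1/b2 \
    dhcp37-100:/rhs/brick2/b3  dhcp37-122:/rhs/brick2/b4 \
    dhcp37-100:/rhs/brick3/b5  dhcp37-122:/rhs/brick3/b6 \
    dhcp37-100:/rhs/brick4/b7  dhcp37-122:/rhs/brick4/b8 \
    dhcp37-100:/rhs/brick5/b9  dhcp37-122:/rhs/brick5/b10 \
    dhcp37-100:/rhs/brick6/b11 dhcp37-122:/rhs/brick6/b12 force
gluster volume start geo-slave

# On the slave cluster (brick paths are placeholders, not from the report)
gluster volume create geo-master disperse 12 redundancy 4 \
    dhcp37-164:/rhs/brick{1..12}/s force
gluster volume start geo-master

# On a master node, after setting up passwordless root SSH to dhcp37-164:
# distribute the geo-rep keys, then create, start and check the session
gluster system:: execute gsec_create
gluster volume geo-replication geo-slave dhcp37-164::geo-master create push-pem
gluster volume geo-replication geo-slave dhcp37-164::geo-master start
gluster volume geo-replication geo-slave dhcp37-164::geo-master status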
I tried to reproduce this bug with the latest 3.1 code but could NOT reproduce it. Tried with 2 nodes:

1. Created a master 4+2 disperse volume - 3 bricks on each node.
2. Created a slave 4+2 disperse volume - 3 bricks on each node.
3. Created the geo-replication session.
4. Started geo-replication successfully; no Faulty status.
5. Created 10 small files on the master and they were replicated to the slave with the correct file names and content (a sketch of the check is below).

[root@rhs3 geo-master]# gluster volume geo-replication master 10.70.42.64::slave status

MASTER NODE    MASTER VOL    MASTER BRICK        SLAVE USER    SLAVE                 SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
-----------------------------------------------------------------------------------------------------------------------------------------------------
rhs3           master        /brick/master/A1    root          10.70.42.64::slave    10.70.43.118    Active     Changelog Crawl    2015-06-11 17:44:01
rhs3           master        /brick/master/A2    root          10.70.42.64::slave    10.70.42.64     Active     Changelog Crawl    2015-06-11 17:44:01
rhs3           master        /brick/master/A3    root          10.70.42.64::slave    10.70.43.118    Active     Changelog Crawl    2015-06-11 17:44:01
rhs3           master        /brick/master/A4    root          10.70.42.64::slave    10.70.42.64     Passive    N/A                N/A
rhs3           master        /brick/master/A5    root          10.70.42.64::slave    10.70.43.118    Passive    N/A                N/A
rhs3           master        /brick/master/A6    root          10.70.42.64::slave    10.70.42.64     Passive    N/A                N/A
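A minimal sketch of the file check in step 5, assuming both volumes are fuse-mounted on one test client; the mount points and file names are placeholders, not from the original comment.

# Mount the master and slave volumes on a test client
mount -t glusterfs rhs3:/master /mnt/master
mount -t glusterfs 10.70.42.64:/slave /mnt/slave

# Create 10 small files on the master volume
for i in $(seq 1 10); do echo "test data $i" > /mnt/master/file$i; done

# After the changelog crawl syncs them, compare names and content
ls /mnt/slave
md5sum /mnt/master/file* /mnt/slave/file*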
Verified on the 3.7.1-2 build; geo-rep is working. Marking this as fixed.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHSA-2015-1495.html
closed