Description of problem:
=======================
The geo-replication session is in FAULTY state on RHEL 6, as shown:

[root@dhcp43-133 ~]# gluster volume geo-replication master 10.70.43.202::slave status

MASTER NODE     MASTER VOL   MASTER BRICK     SLAVE USER   SLAVE                 SLAVE NODE   STATUS   CRAWL STATUS   LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------
10.70.43.133    master       /rhs/brick1/b1   root         10.70.43.202::slave   N/A          Faulty   N/A            N/A
10.70.43.133    master       /rhs/brick2/b4   root         10.70.43.202::slave   N/A          Faulty   N/A            N/A
10.70.43.163    master       /rhs/brick1/b2   root         10.70.43.202::slave   N/A          Faulty   N/A            N/A
10.70.43.163    master       /rhs/brick2/b5   root         10.70.43.202::slave   N/A          Faulty   N/A            N/A
10.70.41.234    master       /rhs/brick1/b3   root         10.70.43.202::slave   N/A          Faulty   N/A            N/A
10.70.41.234    master       /rhs/brick2/b6   root         10.70.43.202::slave   N/A          Faulty   N/A            N/A

Version-Release number of selected component (if applicable):
=============================================================
Seen in glusterfs-3.8.4-54.4.el6rhs.x86_64

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Create master and slave volumes (3x3).
2. Create and start a geo-rep session (see the command sketch below).

Actual results:
===============
The session is in FAULTY state.

Expected results:
=================
The session should not be in FAULTY state.
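For context, a minimal sketch of step 2 as it is typically done with the gluster CLI, assuming a root (non-mountbroker) session and passwordless SSH from the master node to 10.70.43.202; the push-pem setup is an assumption, since the report does not show how the session was created:

# On a master node, after both volumes are created and started:
# distribute the common pem keys and create the session
gluster system:: execute gsec_create
gluster volume geo-replication master 10.70.43.202::slave create push-pem

# Start the session and check its status
gluster volume geo-replication master 10.70.43.202::slave start
gluster volume geo-replication master 10.70.43.202::slave status
# With the affected build (glusterfs-3.8.4-54.4.el6rhs), STATUS shows Faulty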
Upstream Patch: https://review.gluster.org/#/c/20221/1
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2608