Description of problem:
=======================
Geo-replication session is in FAULTY state on CentOS 6, as shown:

[root@dhcp43-133 ~]# gluster volume geo-replication master 10.70.43.202::slave status

MASTER NODE     MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
---------------------------------------------------------------------------------------------------------------------------------------
10.70.43.133    master        /rhs/brick1/b1    root          10.70.43.202::slave    N/A           Faulty    N/A             N/A
10.70.43.133    master        /rhs/brick2/b4    root          10.70.43.202::slave    N/A           Faulty    N/A             N/A
10.70.43.163    master        /rhs/brick1/b2    root          10.70.43.202::slave    N/A           Faulty    N/A             N/A
10.70.43.163    master        /rhs/brick2/b5    root          10.70.43.202::slave    N/A           Faulty    N/A             N/A
10.70.41.234    master        /rhs/brick1/b3    root          10.70.43.202::slave    N/A           Faulty    N/A             N/A
10.70.41.234    master        /rhs/brick2/b6    root          10.70.43.202::slave    N/A           Faulty    N/A             N/A

Version-Release number of selected component (if applicable):
=============================================================
mainline

How reproducible:
=================
Always

Steps to Reproduce:
===================
1. Create master and slave volumes (3x3).
2. Create and start a geo-rep session (a command sketch follows below).

Actual results:
===============
The session is in FAULTY state.

Expected results:
=================
The session should not be in a faulty state.
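For reference, a hedged sketch of the reproduction commands. The hostnames and brick paths are taken from the status output above; the brick layout shown (two replica-3 subvolumes over the six bricks listed) and the use of push-pem are illustrative assumptions, since the steps mention a 3x3 volume but only six master bricks appear in the report. The slave volume is assumed to already exist on 10.70.43.202.

# master volume across the three master nodes (layout assumed from the status output)
gluster volume create master replica 3 \
    10.70.43.133:/rhs/brick1/b1 10.70.43.163:/rhs/brick1/b2 10.70.41.234:/rhs/brick1/b3 \
    10.70.43.133:/rhs/brick2/b4 10.70.43.163:/rhs/brick2/b5 10.70.41.234:/rhs/brick2/b6
gluster volume start master

# create and start the geo-rep session against the slave volume
gluster system:: execute gsec_create
gluster volume geo-replication master 10.70.43.202::slave create push-pem
gluster volume geo-replication master 10.70.43.202::slave start
gluster volume geo-replication master 10.70.43.202::slave status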
REVIEW: https://review.gluster.org/20221 (geo-rep: Fix geo-rep for older versions of unshare) posted (#1) for review on master by Kotresh HR
COMMIT: https://review.gluster.org/20221 committed in master by "Kotresh HR" <khiremat> with a commit message:

    geo-rep: Fix geo-rep for older versions of unshare

    Geo-rep mounts are private to the worker. It achieves this by
    creating a mount namespace with the unshare command, which requires
    unshare to support the '--propagation' option. Geo-rep therefore
    breaks on systems with an older unshare. This patch makes geo-rep
    fall back to the lazy-umount behaviour when unshare does not
    support the propagation option.

    fixes: bz#1589782
    Change-Id: Ia614f068aede288d63ac62fea4461b1865066054
    Signed-off-by: Kotresh HR <khiremat>
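The approach described in the commit can be illustrated with a hedged shell sketch. This is not the actual gsyncd code (which is Python); the worker mount command and mount point are placeholders:

# Does this unshare support --propagation? Older util-linux versions do not.
if unshare --help 2>&1 | grep -q -- '--propagation'; then
    # Private mount namespace: the aux mount is visible only to the
    # worker and vanishes when the worker exits.
    unshare -m --propagation private -- <worker-mount-command>
else
    # Fall back for older unshare: mount in the shared namespace and
    # lazily unmount (umount -l) once the worker holds the mount.
    <worker-mount-command>
    umount -l <aux-mount-point>
fi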
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-5.0, please open a new bug report.

glusterfs-5.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] https://lists.gluster.org/pipermail/announce/2018-October/000115.html
[2] https://www.gluster.org/pipermail/gluster-users/