+++ This bug was initially created as a clone of Bug #1233575 +++

Description of problem:
======================
In a scenario where the shared volume (gluster_shared_storage) is stopped, deleted, or non-existent, and the config use_meta_volume is set to false, the worker still fails with "_GMaster: Meta-volume is not mounted. Worker Exiting..."

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED
----------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.101    Active     Changelog Crawl    2015-06-19 18:10:14
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.101    Active     Changelog Crawl    2015-06-19 18:10:14
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.154    Passive    N/A                N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.154    Passive    N/A                N/A
[root@georep1 scripts]#

[root@georep1 scripts]# gluster volume stop gluster_shared_storage
Stopping the shared storage volume(gluster_shared_storage), will affect features like snapshot scheduler, geo-replication and NFS-Ganesha. Do you still want to continue? (y/n) y
volume stop: gluster_shared_storage: success

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A
[root@georep1 scripts]#

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave config use_meta_volume false
geo-replication config updated successfully
[root@georep1 scripts]#

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status

MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED
--------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A
[root@georep1 scripts]#

Version-Release number of selected component (if applicable):
==============================================================

How reproducible:
=================
Always
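For context, a minimal sketch of the likely failure mode. This is illustrative only, not the actual gsyncd source; the helper is_meta_volume_mounted and the mount point path are assumptions. Geo-rep config values are stored as plain strings, so the non-empty string "false" is truthy in Python, and a naive truthiness check runs the meta-volume mount assertion even after the config is toggled off:

import os

def is_meta_volume_mounted(mount_point="/var/run/gluster/shared_storage"):
    # Hypothetical helper: treat the meta-volume as mounted if the
    # shared-storage mount point is an active mount.
    return os.path.ismount(mount_point)

def worker_startup_check(gconf):
    # Buggy pattern: gconf["use_meta_volume"] holds the *string* "false",
    # which evaluates as True, so the mount assertion still fires.
    if gconf["use_meta_volume"]:
        if not is_meta_volume_mounted():
            raise SystemExit("_GMaster: Meta-volume is not mounted. "
                             "Worker Exiting...")

worker_startup_check({"use_meta_volume": "false"})  # exits despite the config

This matches the observed behaviour: with gluster_shared_storage stopped and use_meta_volume set to false, every worker still goes Faulty.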
REVIEW: http://review.gluster.org/11358 (geo-rep: Fix toggling of use_meta_volume config) posted (#1) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/11358 (geo-rep: Fix toggling of use_meta_volume config) posted (#2) for review on master by Kotresh HR (khiremat)
REVIEW: http://review.gluster.org/11358 (geo-rep: Fix toggling of use_meta_volume config) posted (#3) for review on master by Kotresh HR (khiremat)
Did not automatically update. It's merged.

COMMIT: http://review.gluster.org/11358

geo-rep: Fix toggling of use_meta_volume config

If meta-volume is deleted and use_meta_volume is set to false,
geo-rep still fails complaining meta volume is not mounted.
The patch fixes that issue.

Change-Id: Iecf732197926bf9ce69112287fccbb1c34e58e6d
BUG: 1234694
Signed-off-by: Kotresh HR <khiremat>
Reviewed-on: http://review.gluster.org/11358
Tested-by: NetBSD Build System <jenkins.org>
Tested-by: Gluster Build System <jenkins.com>
Reviewed-by: Aravinda VK <avishwan>
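To illustrate the direction of the fix described in the commit message (a sketch under assumed names: gsyncd does ship a boolify helper in its utilities, but the exact signature and call site shown here are assumptions, not the actual patch), the config string is normalized to a real boolean before the mount assertion is gated on it:

import os

def is_meta_volume_mounted(mount_point="/var/run/gluster/shared_storage"):
    # Same hypothetical helper as in the sketch above.
    return os.path.ismount(mount_point)

def boolify(value):
    # Interpret common string spellings of true; anything else is False.
    # Mirrors the intent of gsyncd's boolify utility (signature assumed).
    return str(value).strip().lower() in ("1", "true", "yes", "on")

def worker_startup_check(gconf):
    # Fixed pattern: with use_meta_volume toggled to "false", the
    # meta-volume mount assertion is skipped and the worker proceeds.
    if boolify(gconf.get("use_meta_volume", "false")):
        if not is_meta_volume_mounted():
            raise SystemExit("_GMaster: Meta-volume is not mounted. "
                             "Worker Exiting...")

worker_startup_check({"use_meta_volume": "false"})  # returns without exiting

With this normalization, stopping or deleting gluster_shared_storage no longer sends workers Faulty once use_meta_volume is set to false.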
The fix for this BZ is already present in a GlusterFS release. A clone of this BZ was fixed in a GlusterFS release and closed; hence this mainline BZ is being closed as well.
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.8.0, please open a new bug report.

glusterfs-3.8.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://blog.gluster.org/2016/06/glusterfs-3-8-released/
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user