Bug 1234695 - [geo-rep]: Setting meta volume config to false when meta volume is stopped/deleted leads geo-rep to faulty
Summary: [geo-rep]: Setting meta volume config to false when meta volume is stopped/deleted leads geo-rep to faulty
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: 3.7.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
URL:
Whiteboard:
Depends On: 1233575 1234694
Blocks: 1223636
 
Reported: 2015-06-23 05:17 UTC by Kotresh HR
Modified: 2015-07-30 09:48 UTC
CC List: 8 users

Fixed In Version: glusterfs-3.7.3
Clone Of: 1234694
Environment:
Last Closed: 2015-07-30 09:48:56 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kotresh HR 2015-06-23 05:17:40 UTC
+++ This bug was initially created as a clone of Bug #1234694 +++

+++ This bug was initially created as a clone of Bug #1233575 +++

Description of problem:
======================

When the shared volume (gluster_shared_storage) is stopped, deleted, or does not exist, and the config option use_meta_volume is then set to false, the worker still fails with "_GMaster: Meta-volume is not mounted. Worker Exiting..."

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE      STATUS     CRAWL STATUS       LAST_SYNCED                  
----------------------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.101    Active     Changelog Crawl    2015-06-19 18:10:14          
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.101    Active     Changelog Crawl    2015-06-19 18:10:14          
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A                          
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.103    Passive    N/A                N/A                          
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    10.70.46.154    Passive    N/A                N/A                          
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    10.70.46.154    Passive    N/A                N/A                          
[root@georep1 scripts]# 

[root@georep1 scripts]# gluster volume stop gluster_shared_storage
Stopping the shared storage volume(gluster_shared_storage), will affect features like snapshot scheduler, geo-replication and NFS-Ganesha. Do you still want to continue? (y/n) y
volume stop: gluster_shared_storage: success
[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED          
--------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A                  
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A                  
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A                  
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A                  
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A                  
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A                  
[root@georep1 scripts]#

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave config use_meta_volume false
geo-replication config updated successfully
[root@georep1 scripts]#

[root@georep1 scripts]# gluster volume geo-replication master 10.70.46.154::slave status
 
MASTER NODE    MASTER VOL    MASTER BRICK      SLAVE USER    SLAVE                  SLAVE NODE    STATUS    CRAWL STATUS    LAST_SYNCED          
--------------------------------------------------------------------------------------------------------------------------------------
georep1        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A                  
georep1        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A                  
georep2        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A                  
georep2        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A                  
georep3        master        /rhs/brick1/b1    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A                  
georep3        master        /rhs/brick2/b2    root          10.70.46.154::slave    N/A           Faulty    N/A             N/A                  
[root@georep1 scripts]#
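
A plausible explanation for the behaviour shown above, sketched below as standalone Python. This is an illustration only, not the actual gsyncd worker code; the config store and helper names are assumptions. If the worker treats the stored value of use_meta_volume as a plain string, the string "false" is still truthy, so the meta-volume mount check runs even after the option has been turned off.

# Minimal illustrative sketch, NOT the actual geo-replication worker code.
# Assumption: the worker keeps config values as strings and gates the
# meta-volume mount check on the raw value.
import os

# Hypothetical stand-in for the session config after running
# "... config use_meta_volume false".
CONFIG = {"use_meta_volume": "false"}

def meta_volume_mounted(mount_point="/var/run/gluster/shared_storage"):
    # The shared meta volume is normally mounted here on GlusterFS nodes.
    return os.path.ismount(mount_point)

use_meta_volume = CONFIG["use_meta_volume"]
if use_meta_volume:                 # BUG: the non-empty string "false" is truthy
    if not meta_volume_mounted():
        raise SystemExit("_GMaster: Meta-volume is not mounted. Worker Exiting...")

Run on a node where the shared volume is stopped or unmounted, this sketch exits with the same message the worker logs, even though the option was set to false.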


Version-Release number of selected component (if applicable):
==============================================================


How reproducible:
=================

Always

--- Additional comment from Anand Avati on 2015-06-23 01:11:41 EDT ---

REVIEW: http://review.gluster.org/11358 (geo-rep: Fix toggling of use_meta_volume config) posted (#1) for review on master by Kotresh HR (khiremat)

Comment 1 Anand Avati 2015-06-23 05:20:18 UTC
REVIEW: http://review.gluster.org/11359 (geo-rep: Fix toggling of use_meta_volume config) posted (#1) for review on release-3.7 by Kotresh HR (khiremat)

Comment 2 Anand Avati 2015-06-23 09:09:23 UTC
REVIEW: http://review.gluster.org/11359 (geo-rep: Fix toggling of use_meta_volume config) posted (#2) for review on release-3.7 by Kotresh HR (khiremat)

Comment 3 Anand Avati 2015-06-25 17:16:12 UTC
COMMIT: http://review.gluster.org/11359 committed in release-3.7 by Venky Shankar (vshankar) 
------
commit 30e2976cd6f56c661e8e3d97eec2ba23937ff23e
Author: Kotresh HR <khiremat>
Date:   Tue Jun 23 10:38:27 2015 +0530

    geo-rep: Fix toggling of use_meta_volume config
    
    If meta-volume is deleted and use_meta_volume
    is set to false, geo-rep still fails complaining
    meta volume is not mounted. The patch fixes that
    issue.
    
    BUG: 1234695
    Change-Id: I5ec7ad85399b90e85bca1d36750dd3fb29222db9
    Reviewed-on: http://review.gluster.org/11358
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: http://review.gluster.org/11359
    Tested-by: NetBSD Build System <jenkins.org>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Saravanakumar Arumugam <sarumuga>
    Reviewed-by: Venky Shankar <vshankar>
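
The commit message above describes the fix only at a high level. A rough sketch of the corrected logic, again with hypothetical helper names rather than the real gsyncd implementation, is to normalise the option to a real boolean and skip the mount check entirely when it is false:

# Sketch of the fixed behaviour, with made-up helper names.
import os

def boolify(value):
    # Assumed helper: map common string spellings of a boolean option.
    return str(value).strip().lower() in ("1", "true", "yes", "on")

def check_meta_volume(use_meta_volume, mount_point="/var/run/gluster/shared_storage"):
    if not boolify(use_meta_volume):
        # use_meta_volume is off: do not require the shared volume at all.
        return
    if not os.path.ismount(mount_point):
        raise SystemExit("Meta-volume is not mounted. Worker Exiting...")

# After "config use_meta_volume false", the worker no longer exits:
check_meta_volume("false")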

Comment 4 Kaushal 2015-07-30 09:48:56 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.7.3, please open a new bug report.

glusterfs-3.7.3 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://thread.gmane.org/gmane.comp.file-systems.gluster.devel/12078
[2] http://thread.gmane.org/gmane.comp.file-systems.gluster.user

