Bug 1433506 - [Geo-rep] Master and slave mounts are not accessible to take client profile info
Summary: [Geo-rep] Master and slave mounts are not accessible to take client profile info
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: GlusterFS
Classification: Community
Component: geo-replication
Version: mainline
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: Kotresh HR
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1438330 1503173
 
Reported: 2017-03-17 20:00 UTC by Kotresh HR
Modified: 2017-10-17 13:29 UTC (History)

Fixed In Version: glusterfs-3.11.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
: 1438330 1503173 (view as bug list)
Environment:
Last Closed: 2017-05-30 18:47:35 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Embargoed:



Description Kotresh HR 2017-03-17 20:00:56 UTC
Description of problem:
Master and slave geo-rep auxiliary mounts are not accessible to the user for taking client profile info or doing any other client stack analysis. The mounts are lazily unmounted after the worker changes its current directory to the respective mount point, so only the geo-rep worker retains access to them.
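
The mechanism described above can be sketched as follows. This is an illustrative reconstruction, not the actual gsyncd code; the mount point path and volume names are hypothetical:

```sh
# Sketch of the chdir-then-lazy-unmount trick used for the aux mount.
mkdir -p /tmp/gsyncd-aux-mount
glusterfs --volfile-server=master-host --volfile-id=mastervol /tmp/gsyncd-aux-mount
cd /tmp/gsyncd-aux-mount    # the worker's CWD now pins the mount
umount -l /tmp/gsyncd-aux-mount
# The lazy unmount detaches the mount from the namespace immediately,
# but it stays alive for processes already inside it (the worker).
# Any other process that tries to inspect the mount point sees an
# empty directory, which is why client profiling is impossible.
```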

Version-Release number of selected component (if applicable):
mainline

How reproducible:
Always

Steps to Reproduce:
1. Setup geo-rep session between master and slave volume
2. Try to take the client side profile info of either master volume or slave volume
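
For reference, one common way to collect client-side profile info from a GlusterFS mount is to trigger an io-stats dump via an extended attribute on the mount point. The output path below is illustrative, and this assumes the io-stats translator is loaded on the client stack:

```sh
# Dump client-side io-stats for the mount at /mnt/mastervol.
# Fails for geo-rep aux mounts, since they are not reachable
# from outside the worker process.
setfattr -n trusted.io-stats-dump -v /tmp/client-profile.dump /mnt/mastervol
```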


Actual results:
Client profile info cannot be taken for the active geo-rep master and slave mount points on which I/O is happening.

Expected results:
When needed, it should be possible to take client profile info of active geo-rep master and slave mount points on which I/O is happening

Additional info:

Comment 1 Worker Ant 2017-03-17 20:02:14 UTC
REVIEW: https://review.gluster.org/16912 (geo-rep: Optionally allow access to mounts) posted (#1) for review on master by Kotresh HR (khiremat)

Comment 2 Worker Ant 2017-03-20 03:33:40 UTC
COMMIT: https://review.gluster.org/16912 committed in master by Vijay Bellur (vbellur) 
------
commit e2a652ca6ba56235e6d64bf7df110afdc5f6ca27
Author: Kotresh HR <khiremat>
Date:   Fri Mar 17 13:03:57 2017 -0400

    geo-rep: Optionally allow access to mounts
    
    In order to improve debuggability, it is important
    to have access to geo-rep master and slave mounts.
    With the default behaviour, geo-rep lazy unmounts
    the mounts after changing the current working
    directory into the mount point. It also cleans
    up the mount points. So only geo-rep worker has
    the access and it becomes impossible to take the
    client profile info and do any other client stack
    analysis. Hence the following new config is being
    introduced to allow access to mounts.
    
    gluster vol geo-rep <mastervol> <slavehost>::<slavevol> \
    config access_mount true
    
    The default value of 'access_mount' is false.
    
    Change-Id: I53dce4ea86a6ffc979c82f9330e8954327180ca3
    BUG: 1433506
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: https://review.gluster.org/16912
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Vijay Bellur <vbellur>
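
To illustrate the fix above, enabling the new option keeps the auxiliary mounts mounted so they can be inspected. The volume and host names below are placeholders, and the location of the aux mount directory varies by deployment:

```sh
# Enable access to the geo-rep auxiliary mounts (default: false).
gluster volume geo-replication mastervol slavehost::slavevol \
    config access_mount true

# After the workers restart, the master/slave aux mounts stay visible
# in the mount table and can be profiled or otherwise inspected:
mount | grep glusterfs
```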

Comment 3 Worker Ant 2017-04-10 06:10:04 UTC
REVIEW: https://review.gluster.org/17015 (geo-rep: Fix mount cleanup) posted (#1) for review on master by Kotresh HR (khiremat)

Comment 4 Worker Ant 2017-04-12 06:52:59 UTC
REVIEW: https://review.gluster.org/17015 (geo-rep: Fix mount cleanup) posted (#2) for review on master by Kotresh HR (khiremat)

Comment 5 Worker Ant 2017-04-27 05:58:54 UTC
COMMIT: https://review.gluster.org/17015 committed in master by Aravinda VK (avishwan) 
------
commit 9f5e59abfbf529b91d699143b0c69c8748ac6253
Author: Kotresh HR <khiremat>
Date:   Fri Apr 7 06:19:30 2017 -0400

    geo-rep: Fix mount cleanup
    
    On corner cases, mount cleanup might cause
    worker crash. Fixing the same.
    
    Change-Id: I38c0af51d10673765cdb37bc5b17bb37efd043b8
    BUG: 1433506
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: https://review.gluster.org/17015
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>

Comment 6 Shyamsundar 2017-05-30 18:47:35 UTC
This bug is getting closed because a release has been made available that should address the reported issue. In case the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailinglists [1], packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailinglist [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/

