Description of problem:
Master and slave geo-rep auxiliary mounts are not accessible to the user for taking client profile info or doing any other client stack analysis. The mounts are lazy-unmounted after the worker changes its current directory to the respective mount point, so that only the geo-rep worker has access to them.

Version-Release number of selected component (if applicable): mainline

How reproducible: Always

Steps to Reproduce:
1. Set up a geo-rep session between a master volume and a slave volume.
2. Try to take the client-side profile info of either the master volume or the slave volume.

Actual results:
Not able to take client profile info of the active geo-rep master and slave mount points on which I/O is happening.

Expected results:
When needed, it should be possible to take client profile info of the active geo-rep master and slave mount points on which I/O is happening.

Additional info:
REVIEW: https://review.gluster.org/16912 (geo-rep: Optionally allow access to mounts) posted (#1) for review on master by Kotresh HR (khiremat)
COMMIT: https://review.gluster.org/16912 committed in master by Vijay Bellur (vbellur)
------
commit e2a652ca6ba56235e6d64bf7df110afdc5f6ca27
Author: Kotresh HR <khiremat>
Date: Fri Mar 17 13:03:57 2017 -0400

    geo-rep: Optionally allow access to mounts

    In order to improve debuggability, it is important to have access
    to the geo-rep master and slave mounts. With the default behaviour,
    geo-rep lazy-unmounts the mounts after changing the current working
    directory into the mount point, and it also cleans up the mount
    points. So only the geo-rep worker has access, and it becomes
    impossible to take the client profile info or do any other client
    stack analysis. Hence the following new config is being introduced
    to allow access to the mounts:

        gluster vol geo-rep <mastervol> <slavehost>::<slavevol> \
            config access_mount true

    The default value of 'access_mount' is false.

    Change-Id: I53dce4ea86a6ffc979c82f9330e8954327180ca3
    BUG: 1433506
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: https://review.gluster.org/16912
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Vijay Bellur <vbellur>
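For illustration, the new option could be exercised roughly as follows. This is a sketch, not taken from the fix itself: the volume name `mastervol`, host `slavehost`, and the auxiliary mount path are hypothetical, and the `setfattr` io-stats dump is the standard GlusterFS client-side profiling mechanism assumed to apply here.

```shell
# Allow access to the geo-rep auxiliary mounts (default is false).
gluster volume geo-replication mastervol slavehost::slavevol \
    config access_mount true

# Restart the session so the workers remount with the new setting.
gluster volume geo-replication mastervol slavehost::slavevol stop
gluster volume geo-replication mastervol slavehost::slavevol start

# The worker's auxiliary mount now remains visible; locate it...
grep glusterfs /proc/mounts

# ...and dump client-side io-stats from it (mount path hypothetical).
setfattr -n trusted.io-stats-dump -v /tmp/geo-rep-client-profile \
    /var/lib/misc/gluster/aux-mount-mastervol
```

These commands require a running GlusterFS cluster with an established geo-rep session, so they are shown as a CLI fragment rather than a runnable script.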
REVIEW: https://review.gluster.org/17015 (geo-rep: Fix mount cleanup) posted (#1) for review on master by Kotresh HR (khiremat)
REVIEW: https://review.gluster.org/17015 (geo-rep: Fix mount cleanup) posted (#2) for review on master by Kotresh HR (khiremat)
COMMIT: https://review.gluster.org/17015 committed in master by Aravinda VK (avishwan)
------
commit 9f5e59abfbf529b91d699143b0c69c8748ac6253
Author: Kotresh HR <khiremat>
Date: Fri Apr 7 06:19:30 2017 -0400

    geo-rep: Fix mount cleanup

    In corner cases, mount cleanup might cause a worker crash.
    Fixing the same.

    Change-Id: I38c0af51d10673765cdb37bc5b17bb37efd043b8
    BUG: 1433506
    Signed-off-by: Kotresh HR <khiremat>
    Reviewed-on: https://review.gluster.org/17015
    Smoke: Gluster Build System <jenkins.org>
    NetBSD-regression: NetBSD Build System <jenkins.org>
    CentOS-regression: Gluster Build System <jenkins.org>
    Reviewed-by: Aravinda VK <avishwan>
This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report. glusterfs-3.11.0 has been announced on the Gluster mailing lists [1], and packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/