Bug 1449775
| Summary: | quota: limit-usage command failed with error "Failed to start aux mount" | | |
|---|---|---|---|
| Product: | [Community] GlusterFS | Reporter: | Sanoj Unnikrishnan <sunnikri> |
| Component: | quota | Assignee: | Sanoj Unnikrishnan <sunnikri> |
| Status: | CLOSED CURRENTRELEASE | QA Contact: | |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.11 | CC: | amukherj, ashah, asrivast, bugs, nbalacha, rhinduja, rhs-bugs, storage-qa-internal |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | glusterfs-3.11.0 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1433906 | Environment: | |
| Last Closed: | 2017-05-30 18:52:18 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description

Sanoj Unnikrishnan 2017-05-10 16:07:52 UTC
REVIEW: https://review.gluster.org/17240 (Fixes quota aux mount failure) posted (#2) for review on release-3.11 by sanoj-unnikrishnan (sunnikri)

COMMIT: https://review.gluster.org/17240 committed in release-3.11 by Shyamsundar Ranganathan (srangana)

------

commit 683cb46bd90d0cda42e0dfd71f5a5afad818fbbd
Author: Sanoj Unnikrishnan <sunnikri>
Date:   Wed Mar 22 15:02:12 2017 +0530

    Fixes quota aux mount failure

    The aux mount is created on the first limit/remove_limit/list command,
    and it remains until the volume is stopped or deleted (or quota is
    disabled), at which point it is lazily unmounted. If the process is
    uncleanly terminated, the mount entry remains, and subsequent attempts
    to run quota list/limit-usage/remove commands fail with a
    (Transport disconnected) error.

    Second issue: there is also a risk of an inadvertent rm -rf on
    /var/run/gluster causing data loss for the user. Ideally, /var/run is a
    temporary path for application use and should not cause any loss of
    persistent data.

    Solution:
    1) Unmount the aux mount after each use.
    2) Clean up any stale mount before mounting.

    One caveat with mounting/unmounting on each command is that the same
    mount point cannot be used for both list and limit commands. The reason
    is that the list command needs the mount to remain accessible in the CLI
    after the response from glusterd, so it could be unmounted by a limit
    command executing in parallel (had the same mount point been used).
    Hence, separate mount points are used for the list and limit commands.
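The mount lifecycle described in the commit can be sketched as follows. This is a minimal illustration, not glusterd's actual code: the mount-point paths, the `cleanup_stale` helper, and the use of `mktemp` are all assumptions made for the sketch.

```shell
#!/bin/sh
# Sketch of the fix's aux-mount lifecycle (illustrative names and paths only):
#   1) separate mount points for "list" and "limit" commands, and
#   2) stale-mount cleanup before mounting, unmount after each use.

LIMIT_MNT=$(mktemp -d)   # stand-in for the per-volume limit aux mount point
LIST_MNT=$(mktemp -d)    # stand-in for the per-volume list aux mount point

# Before mounting, unmount any stale entry left behind by an uncleanly
# terminated process (lazy unmount as a fallback, mirroring the fix).
cleanup_stale() {
  mountpoint -q "$1" && umount -l "$1"
  return 0
}

cleanup_stale "$LIMIT_MNT"
cleanup_stale "$LIST_MNT"

echo "limit aux mount: $LIMIT_MNT"
echo "list aux mount:  $LIST_MNT"
```

Because the two commands get distinct mount points, a `limit-usage` command unmounting its own aux mount cannot pull the mount out from under a concurrently running `list` command.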
> Reviewed-on: https://review.gluster.org/16938
> NetBSD-regression: NetBSD Build System <jenkins.org>
> Smoke: Gluster Build System <jenkins.org>
> Reviewed-by: Manikandan Selvaganesh <manikandancs333>
> CentOS-regression: Gluster Build System <jenkins.org>
> Reviewed-by: Raghavendra G <rgowdapp>
> Reviewed-by: Atin Mukherjee <amukherj>
> (cherry picked from commit 2ae4b4058691b324535d802f4e6d24cce89a10e5)

Change-Id: I4f9e39da2ac2b65941399bffb6440db8a6ba59d0
BUG: 1449775
Signed-off-by: Sanoj Unnikrishnan <sunnikri>
Reviewed-on: https://review.gluster.org/17240
Smoke: Gluster Build System <jenkins.org>
NetBSD-regression: NetBSD Build System <jenkins.org>
CentOS-regression: Gluster Build System <jenkins.org>
Reviewed-by: Atin Mukherjee <amukherj>

This bug is being closed because a release has been made available that should address the reported issue. If the problem is still not fixed with glusterfs-3.11.0, please open a new bug report.

glusterfs-3.11.0 has been announced on the Gluster mailing lists [1]; packages for several distributions should become available in the near future. Keep an eye on the Gluster Users mailing list [2] and the update infrastructure for your distribution.

[1] http://lists.gluster.org/pipermail/announce/2017-May/000073.html
[2] https://www.gluster.org/pipermail/gluster-users/