Bug 1424680 - Restarting FUSE causes previous FUSE mounts to be in a bad state.
Summary: Restarting FUSE causes previous FUSE mounts to be in a bad state.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: fuse
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: urgent
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Amar Tumballi
QA Contact: Bala Konda Reddy M
URL:
Whiteboard:
Duplicates: 1474408 (view as bug list)
Depends On:
Blocks: 1417147 1423640 1456420 1462254 1466217 1472370 1472372 1480125
 
Reported: 2017-02-18 01:45 UTC by Bradley Childs
Modified: 2020-12-14 08:11 UTC
CC List: 28 users

Fixed In Version: glusterfs-3.8.4-27
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1456420 (view as bug list)
Environment:
Last Closed: 2017-09-21 04:30:55 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1480125 0 unspecified CLOSED [RFE] Support for auto_unmount option when GlusterFS volumes are mounted. 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHBA-2017:2774 0 normal SHIPPED_LIVE glusterfs bug fix and enhancement update 2017-09-21 08:16:29 UTC

Internal Links: 1480125

Description Bradley Childs 2017-02-18 01:45:12 UTC
Description of problem: When FUSE is stopped/restarted, any previously mounted FUSE filesystems are not restored, which results in:

[fuse-bridge.c:5439:init] 0-fuse: Mountpoint /tmp/mnt seems to have a stale mount, run 'umount /tmp/mnt' and try again.


There are a couple of examples of this bug:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1402834
https://bugzilla.redhat.com/show_bug.cgi?id=1423640

I'm working on a simpler reproducer that doesn't involve a gluster setup.

Expected results:

FUSE either cleans up its stale mounts on start, or restores the connected state of the stale mounts on restart.
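For reference, manual cleanup of a mount point left in this state looks roughly like the commands below; /tmp/mnt is just the example path taken from the log message above, and exact symptoms may vary:

# the stale entry is still listed even though the fuse daemon is gone
grep /tmp/mnt /proc/mounts

# accessing the mount point typically fails with "Transport endpoint is not connected"
ls /tmp/mnt

# detach the stale mount, as the log message suggests; 'umount -l' (lazy) can help if it is still busy
umount /tmp/mnt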

Comment 4 Eric Paris 2017-03-09 19:21:08 UTC
I'm going to kick this to CNS (but I'm going to leave Carlos cc'd) to see how gluster thinks this can/should be handled...

Comment 7 Amar Tumballi 2017-04-05 12:39:53 UTC
The issue noticed in glusterfs specifically is that it is using an older rebase of the libfuse / fusermount code, and hence 'auto_unmount' is not handled properly.

Once we rebase the fuse-lib code, this can be supported.
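For illustration, once the rebased fuse-lib code is in place the option would be requested at mount time roughly as below; the server name, volume name, and mount path (server1, testvol, /mnt/gluster) are placeholders, and the option spelling assumes the auto_unmount mount option referenced in the linked RFE:

# mount a gluster volume, asking the fuse helper to auto-unmount if the client process dies
mount -t glusterfs -o auto_unmount server1:/testvol /mnt/gluster

# confirm the mount is present
grep /mnt/gluster /proc/mounts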

Comment 11 Eric Paris 2017-05-02 13:26:08 UTC
*** Bug 1423640 has been marked as a duplicate of this bug. ***

Comment 13 Atin Mukherjee 2017-05-10 05:28:57 UTC
upstream patch : https://review.gluster.org/#/c/17230/

Comment 14 Humble Chirammal 2017-05-10 06:20:40 UTC
(In reply to Atin Mukherjee from comment #13)
> upstream patch : https://review.gluster.org/#/c/17230/

Is this option 'on' by default, with no extra parsing required when mounting? I think such a configuration (no parsing via mount utils) would be required at least in kube/openshift setups to handle multiple client versions.

Comment 16 Atin Mukherjee 2017-05-23 15:49:47 UTC
there is one more upstream patch : https://review.gluster.org/#/c/17229/

Comment 17 Amar Tumballi 2017-05-23 17:33:23 UTC
(In reply to Atin Mukherjee from comment #16)
> there is one more upstream patch : https://review.gluster.org/#/c/17229/

This is a dependent patch to make sure the changes are very specific in each review. It shouldn't be an issue.

Comment 37 Eric Paris 2017-07-25 13:28:24 UTC
*** Bug 1474408 has been marked as a duplicate of this bug. ***

Comment 42 Bala Konda Reddy M 2017-08-09 12:30:31 UTC

The patch posted at https://bugzilla.redhat.com/show_bug.cgi?id=1424680#c23, for which the bug was moved to ON_QA, is with respect to fuse auto_unmount. We have tested the functionality in RHGS with the following scenarios:

Followed the steps mentioned in the bug.

When a volume is mounted with the added mount option (auto_unmount), which is the fix on the gluster side, there will be two glusterfs processes. When the heavy glusterfs process is killed, the volume is unmounted and its entry is no longer present in /etc/mtab.

If we kill the lite glusterfs process, the volume is not unmounted, we do not see "Transport endpoint is not connected" on the client, and the mtab entry is still there. We can still access data from the mount point.

In another scenario, after killing the lite process the mount is still present in /etc/mtab; if we then kill the heavy glusterfs process as well, we see "Transport endpoint is not connected" as before.

Based on these scenarios, and for downstream RHGS 3.3.0, moving this bug to the Verified state.
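A rough sketch of the scenario described above, with placeholder server/volume names (server1, testvol) and the choice of which glusterfs process to kill left as a manual step from the process list:

# mount with the auto_unmount option; two client-side glusterfs processes are expected
mount -t glusterfs -o auto_unmount server1:/testvol /mnt/gluster
pgrep -af glusterfs

# killing the heavy glusterfs process should unmount the volume and remove its /etc/mtab entry
kill $HEAVY_PID        # HEAVY_PID: pid of the heavy glusterfs process, picked from the pgrep output
grep /mnt/gluster /etc/mtab

# killing only the lite glusterfs process instead leaves the mount entry in place and usable
ls /mnt/gluster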

Comment 43 Dana Safford 2017-08-11 02:58:39 UTC
For case 01896116, Accenture requested a backport to RHGS 3.2.0.

I raised the 'Customer Escalation' flag.

Thanks,

Comment 46 errata-xmlrpc 2017-09-21 04:30:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774


