Description of problem: When a FUSE daemon is stopped or restarted, any previously mounted FUSE filesystems are not restored, which results in:

[fuse-bridge.c:5439:init] 0-fuse: Mountpoint /tmp/mnt seems to have a stale mount, run 'umount /tmp/mnt' and try again.

There are a couple of examples of this bug:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1402834
https://bugzilla.redhat.com/show_bug.cgi?id=1423640

I'm working on a simpler reproducer that doesn't involve a gluster setup.

Expected results: FUSE either cleans up its stale mounts on start, or restores the connected state of the stale mounts on restart.
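For context, a stale FUSE mount like the one in the log is usually detectable because system calls on the mountpoint fail with ENOTCONN once the userspace daemon is gone. A minimal sketch of such a check (the helper name is hypothetical, not gluster's actual code):

#include <errno.h>
#include <stdio.h>
#include <sys/stat.h>

/* Returns 1 if path looks like a stale FUSE mount: the kernel still
 * has the mount, but the userspace daemon behind it has died. */
static int is_stale_fuse_mount(const char *path)
{
    struct stat st;

    if (stat(path, &st) == -1 && errno == ENOTCONN) {
        /* "Transport endpoint is not connected" */
        return 1;
    }
    return 0;
}

int main(void)
{
    const char *mnt = "/tmp/mnt"; /* mountpoint from the log above */

    if (is_stale_fuse_mount(mnt))
        printf("%s seems to have a stale mount, run 'umount %s'\n",
               mnt, mnt);
    return 0;
}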
I'm going to kick this to CNS (but I'm going to leave Carlos cc'd) to see how gluster thinks this can/should be handled...
Noticed that the issue in glusterfs specifically is that it is using an older rebase of the libfuse/fusermount code, and hence 'auto_unmount' is not handled properly. Once we rebase the fuse-lib code, this can be supported.
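For reference, in newer libfuse 'auto_unmount' is an ordinary mount option (available since fuse 2.9): when it is set, the fusermount helper unmounts the filesystem as soon as the daemon exits or is killed, so no stale mount is left behind. A minimal high-level libfuse filesystem to observe this, assuming a libfuse with auto_unmount support (an illustration against the upstream libfuse API, not gluster code):

#define FUSE_USE_VERSION 26

#include <fuse.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>

/* Empty read-only filesystem: just enough to mount successfully. */
static int empty_getattr(const char *path, struct stat *st)
{
    memset(st, 0, sizeof(*st));
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
        return 0;
    }
    return -ENOENT;
}

static struct fuse_operations empty_ops = {
    .getattr = empty_getattr,
};

int main(int argc, char *argv[])
{
    /* Run as: ./emptyfs -o auto_unmount /tmp/mnt
     * With auto_unmount, killing this process (even with SIGKILL)
     * leaves no stale mount behind; without it, the mountpoint goes
     * stale and 'umount /tmp/mnt' is needed, as in the log above. */
    return fuse_main(argc, argv, &empty_ops, NULL);
}

(Compile with something like: gcc emptyfs.c -o emptyfs $(pkg-config --cflags --libs fuse).)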
*** Bug 1423640 has been marked as a duplicate of this bug. ***
Upstream patch: https://review.gluster.org/#/c/17230/
(In reply to Atin Mukherjee from comment #13)
> Upstream patch: https://review.gluster.org/#/c/17230/

Is this option 'on' by default, with no extra parsing required when mounting? I think such a configuration (no parsing via the mount utils) would be required, at least in kube/openshift setups, to handle multiple client versions.
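On the parsing question: libfuse's option parser treats flag options like this generically, so a client can accept 'auto_unmount' in -o option strings without the mount utilities needing to know about it. A sketch of how that recognition works with fuse_opt_parse (illustrative only; the actual gluster change is in the patch above):

#include <fuse_opt.h>
#include <stddef.h>
#include <stdio.h>

struct mount_opts {
    int auto_unmount; /* set to 1 when '-o auto_unmount' is given */
};

static const struct fuse_opt opt_specs[] = {
    { "auto_unmount", offsetof(struct mount_opts, auto_unmount), 1 },
    FUSE_OPT_END
};

int main(int argc, char *argv[])
{
    struct fuse_args args = FUSE_ARGS_INIT(argc, argv);
    struct mount_opts opts = { 0 };

    if (fuse_opt_parse(&args, &opts, opt_specs, NULL) == -1)
        return 1;

    printf("auto_unmount is %s\n", opts.auto_unmount ? "on" : "off");
    fuse_opt_free_args(&args);
    return 0;
}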
There is one more upstream patch: https://review.gluster.org/#/c/17229/
(In reply to Atin Mukherjee from comment #16)
> There is one more upstream patch: https://review.gluster.org/#/c/17229/

This is a dependent patch, split out so that the changes in each review stay specific. Shouldn't be an issue.
Downstream patches:
https://code.engineering.redhat.com/gerrit/#/c/107570/
https://code.engineering.redhat.com/gerrit/#/c/107571/
*** Bug 1474408 has been marked as a duplicate of this bug. ***
The patch posted at https://bugzilla.redhat.com/show_bug.cgi?id=1424680#c23, for which the bug was moved to ON_QA, is with respect to fuse auto_unmount. We have tested the functionality in RHGS with the following scenarios, following the steps mentioned in the bug:

When a volume is mounted with the added mount option (auto_unmount), which is the fix on the gluster side, there will be two glusterfs processes. When the heavy glusterfs process is killed, the volume is unmounted and its entry is no longer available in /etc/mtab. If instead we kill the lite glusterfs process, the volume is not unmounted, we do not see "Transport endpoint is not connected" on the client, the mtab entry is still there, and data is still accessible from the mount point. In another scenario, after killing the lite process the mount is still in /etc/mtab; if we then kill the heavy glusterfs process as well, we see "Transport endpoint is not connected" as before.

Based on these scenarios, and for downstream RHGS 3.3.0, moving this bug to the verified state.
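For anyone rechecking these scenarios, the mtab part of the verification can be scripted. A small sketch (names are illustrative; assumes /etc/mtab is readable) that reports whether a mountpoint still has an entry:

#include <mntent.h>
#include <stdio.h>
#include <string.h>

/* Returns 1 if 'mountpoint' has an entry in /etc/mtab, else 0. */
static int in_mtab(const char *mountpoint)
{
    FILE *fp = setmntent("/etc/mtab", "r");
    struct mntent *ent;
    int found = 0;

    if (!fp)
        return 0;
    while ((ent = getmntent(fp)) != NULL) {
        if (strcmp(ent->mnt_dir, mountpoint) == 0) {
            found = 1;
            break;
        }
    }
    endmntent(fp);
    return found;
}

int main(int argc, char *argv[])
{
    const char *mnt = argc > 1 ? argv[1] : "/tmp/mnt";

    printf("%s: %s\n", mnt, in_mtab(mnt) ? "in mtab" : "not in mtab");
    return 0;
}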
For case 01896116, Accenture requested a backport to RHGS 3.2.0. I raised the 'Customer Escalation' flag. Thanks,
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:2774