Bug 1424680
Summary: | Restarting FUSE causes previous FUSE mounts to be in a bad state. | |||
---|---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Bradley Childs <bchilds> | |
Component: | fuse | Assignee: | Amar Tumballi <atumball> | |
Status: | CLOSED ERRATA | QA Contact: | Bala Konda Reddy M <bmekala> | |
Severity: | urgent | Docs Contact: | ||
Priority: | high | |||
Version: | rhgs-3.0 | CC: | akhakhar, amukherj, annair, atumball, bchilds, bmchugh, csaba, dsafford, eguan, eparis, hchen, hchiramm, jarrpa, madam, mliyazud, mrobson, mzywusko, pdwyer, pprakash, rcyriac, rhs-bugs, rreddy, rtalur, ssaha, storage-qa-internal, tlarsson, vwalek, zlang | |
Target Milestone: | --- | |||
Target Release: | RHGS 3.3.0 | |||
Hardware: | Unspecified | |||
OS: | Unspecified | |||
Whiteboard: | ||||
Fixed In Version: | glusterfs-3.8.4-27 | Doc Type: | If docs needed, set a value | |
Doc Text: | Story Points: | --- | ||
Clone Of: | ||||
: | 1456420 | Environment: | ||
Last Closed: | 2017-09-21 04:30:55 UTC | Type: | Bug | |
Regression: | --- | Mount Type: | --- | |
Documentation: | --- | CRM: | ||
Verified Versions: | Category: | --- | ||
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
Cloudforms Team: | --- | Target Upstream Version: | ||
Embargoed: | ||||
Bug Depends On: | ||||
Bug Blocks: | 1417147, 1423640, 1456420, 1462254, 1466217, 1472370, 1472372, 1480125 |
Description
Bradley Childs
2017-02-18 01:45:12 UTC
I'm going to kick this to CNS (but I'm going to leave Carlos cc'd) to see how gluster thinks this can/should be handled...

The issue in glusterfs specifically is that it uses an older rebase of the libfuse / fusermount code, and hence 'auto_unmount' is not handled properly. Once we rebase the fuse-lib code, this can be supported.

*** Bug 1423640 has been marked as a duplicate of this bug. ***

upstream patch : https://review.gluster.org/#/c/17230/

(In reply to Atin Mukherjee from comment #13)
> upstream patch : https://review.gluster.org/#/c/17230/

Is this option 'on' by default, with no extra parsing required when mounting? I think such a configuration (no parsing via mount utils) would be required at least in kube/openshift setups to handle multiple client versions.

there is one more upstream patch : https://review.gluster.org/#/c/17229/

(In reply to Atin Mukherjee from comment #16)
> there is one more upstream patch : https://review.gluster.org/#/c/17229/

This is a dependent patch, to make sure the changes are very specific in each review. Shouldn't be an issue.

downstream patches:
https://code.engineering.redhat.com/gerrit/#/c/107570/
https://code.engineering.redhat.com/gerrit/#/c/107571/

*** Bug 1474408 has been marked as a duplicate of this bug. ***

The patch posted at https://bugzilla.redhat.com/show_bug.cgi?id=1424680#c23, for which the bug was moved to ON_QA, is with respect to fuse auto_umount. We have tested the functionality in RHGS, following the steps mentioned in the bug, with the following scenarios. When a volume is mounted with the added mount option (auto_umount), which is the fix on the gluster side, there will be two glusterfs processes. When the heavy glusterfs process is killed, the volume is unmounted and its entry is removed from /etc/mtab. If we kill the lite glusterfs process instead, the volume is not unmounted, we do not see "Transport endpoint is not connected" on the client, and the mtab entry is still there.
We can still access data from the mount point. In another scenario, after killing the lite process the mount is still in /etc/mtab; if we then kill the heavy glusterfs process as well, we see "Transport endpoint is not connected" as before. Based on these scenarios, and for downstream RHGS 3.3.0, moving this bug to verified state.

For case 01896116, Accenture requested a backport to RHGS 3.2.0. I raised the 'Customer Escalation' flag. Thanks.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2774
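The heavy/lite pair described in the QA scenarios above can be told apart by resident memory: the large process is the actual FUSE client, while the small one is the monitor that performs the auto-unmount when the client dies. A minimal sketch of picking out the heavy PID (the PIDs and RSS values below are invented for illustration; on a real client you would feed it the output of `ps -C glusterfs -o pid=,rss=`):

```shell
# Sample "pid rss" output for the two glusterfs client processes seen
# after mounting with -o auto_unmount (values are made up).
ps_output="1234 152340
1235 2104"

# The "heavy" process (largest RSS) is the real FUSE client; killing it
# is what triggers the auto-unmount. The "lite" process only monitors it.
heavy_pid=$(printf '%s\n' "$ps_output" | sort -k2 -n | tail -n 1 | awk '{print $1}')
echo "$heavy_pid"
```

Killing that PID on a volume mounted with the new option (for example `mount -t glusterfs -o auto_unmount server:/vol /mnt`, with hypothetical server/volume names) should remove the corresponding /etc/mtab entry, matching the first QA scenario above.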