Bug 1017362
Summary: | [FEAT] Include per sub-directory export access control for FUSE | | |
---|---|---|---|
Product: | [Red Hat Storage] Red Hat Gluster Storage | Reporter: | Wesley Duffee-Braun <wduffee> | |
Component: | fuse | Assignee: | Amar Tumballi <atumball> | |
Status: | CLOSED ERRATA | QA Contact: | Manisha Saini <msaini> | |
Severity: | high | Docs Contact: | ||
Priority: | high | |||
Version: | unspecified | CC: | amukherj, asriram, asrivast, atumball, bkunal, ccalhoun, csaba, hamiller, jcall, lkoranda, mchangir, nbhatt, nchilaka, ndevos, omasek, pkarampu, pmulay, pousley, rcyriac, rhinduja, rhs-bugs, sheggodu, smohan, spaul, ssaha, storage-qa-internal, vanhoof, vbellur, wenshi | |
Target Milestone: | --- | Keywords: | FutureFeature, Reopened, ZStream | |
Target Release: | RHGS 3.3.1 | | |
Hardware: | All | | |
OS: | Linux | | |
Whiteboard: | rebase | | |
Fixed In Version: | glusterfs-3.8.4-47 | Doc Type: | Enhancement | |
Doc Text: |
When multiple users share a Gluster volume to run their applications, there is a risk of security issues because one user can obtain another user's information. With the subdirectory mount feature, a user can access only their own part of the storage and nothing more, which provides proper abstraction for multiple users consuming the same storage. Mounting a part of the Gluster volume (i.e., a subdirectory) provides namespace isolation by separating out each user's directories, so multiple users can use the storage without namespace collisions.
This enhancement has been shipped as a technical preview feature with Red Hat Gluster Storage 3.3.1.
|
Story Points: | --- | |
Clone Of: | | | |
: | 1501446 (view as bug list) | Environment: | |
Last Closed: | 2017-11-29 03:29:14 UTC | Type: | Bug | |
Regression: | --- | Mount Type: | --- | |
Documentation: | --- | CRM: | ||
Verified Versions: | | Category: | --- |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | ||
Cloudforms Team: | --- | Target Upstream Version: | ||
Embargoed: | ||||
Bug Depends On: | 892808, 1286783 | |||
Bug Blocks: | 1475686, 1501446 |
Description
Wesley Duffee-Braun
2013-10-09 17:37:15 UTC
Hello, I was notified that this request above may not be clear. This is for the native client only, not for NFS. Thanks, Wesley

After discussion with PM, this can be marked as CLOSED WONTFIX, as the use case that generated this request has lessened in urgency.

Whoops - I meant to close 1029198. Re-opening this one for evaluation. Thanks, Wesley

Any action on this RFE? The last update was 18 months ago. The customer is still interested in it.

This BZ is still set to NEW, is HIGH severity, and has been open for over 2 years. Can we at least estimate if and when it will be added to our roadmap?

Can I get an update on this BZ for my customer?

@pranith: Can I get an update on the status of this BZ? I see it was tentatively scheduled for upstream 3.8. Do we know if it made it and will be included in RHGS 3.2? Cal

Hello, I'd like to get an update on the current status of this BZ and on our future plans for it. Given that there has been no significant progress on this BZ for the last 3.5 years, the customer is concerned and has raised a management escalation. Below is a snippet from the customer's update describing the business justification and impact:

> I think we have a strong business use case. Providing persistent storage for the OpenShift platform at production scale - a high number of projects/apps. This is currently possible only by scaling the number of nodes, but this is not acceptable due to price.

The BZ is still in "NEW" state. Let me know if any data/information from the customer is needed to proceed further. Regards, Swagato Paul, Escalation Manager, CEE

@pranith: Dave Carmichael is trying to set up a call to discuss expectations with IT. Do you have someone that you want to be on the call?

*** Bug 1286783 has been marked as a duplicate of this bug. ***

@Vijay, please review the previous comment by Neeraj and let us know if any alternative approach (other than the sub-directory export) may help here.

Hi, do we have any update on this bug?

Hello, any updates regarding this bug? Regards, Neeraj Bhatt

Upstream patch: https://review.gluster.org/17141
Downstream patches: https://code.engineering.redhat.com/gerrit/#/c/119138/ and https://code.engineering.redhat.com/gerrit/#/c/119139/

Verified this BZ with gluster builds 3.8.4-48, 3.8.4-49, and 3.8.4-50. Basic sanity validation across the feature is covered. Execution of the remaining cases will continue during the 3.3.1 test cycle, and new bugs will be raised for any issues found.

Hi Amar, I've edited the Doc Text for its associated Errata. Please review it and let me know in case of any concerns. If no changes are required, please provide your approval.

The Doc Text is technically accurate. I would like to understand from PM whether this is the wording users want to see for the feature, or whether they would prefer different wording.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:3276

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days.
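For context on how the shipped feature is used, below is a minimal, hypothetical sketch of per-subdirectory export access control and a FUSE subdirectory mount, matching the behaviour described in the Doc Text above. The server name (server1), volume name (demovol), directory names, mount points, and IP ranges are illustrative assumptions, and the exact `auth.allow` syntax should be confirmed against the RHGS 3.3.1 / glusterfs-3.8.4-47 documentation.

```sh
# Hypothetical sketch: restrict which client networks may mount the full
# volume and each subdirectory. "/" covers full-volume mounts; "/sales" and
# "/engineering" cover subdirectory mounts. Names and IP ranges are examples.
gluster volume set demovol auth.allow "/(192.168.10.*),/sales(192.168.1.*),/engineering(192.168.2.*)"

# A client in the allowed range mounts only its own subdirectory over the
# FUSE native client; the rest of the volume's namespace is not visible.
mount -t glusterfs server1:/demovol/sales /mnt/sales

# The same form can be used in /etc/fstab, for example:
# server1:/demovol/sales  /mnt/sales  glusterfs  defaults,_netdev  0 0
```

As noted in the Doc Text, subdirectory mounts shipped as a technical preview with Red Hat Gluster Storage 3.3.1, so the exact behaviour and option syntax may vary between builds.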