Bug 1017362 - [FEAT] Include per sub-directory export access control for FUSE
Summary: [FEAT] Include per sub-directory export access control for FUSE
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: fuse
Version: unspecified
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.3.1
Assignee: Amar Tumballi
QA Contact: Manisha Saini
URL:
Whiteboard: rebase
Duplicates: 1286783 (view as bug list)
Depends On: 892808 1286783
Blocks: 1475686 1501446
 
Reported: 2013-10-09 17:37 UTC by Wesley Duffee-Braun
Modified: 2023-09-14 23:57 UTC
CC List: 29 users

Fixed In Version: glusterfs-3.8.4-47
Doc Type: Enhancement
Doc Text:
When multiple users share a Gluster volume to run their applications, there is a risk of security issues because one user can obtain another user's information. With the subdirectory mount feature, a user can access only their part of the storage and nothing more, giving each user a proper abstraction over the shared storage. Mounting a part of the Gluster volume (that is, a subdirectory) provides namespace isolation by separating out each user's directories, so multiple users can consume the storage without namespace collisions. This enhancement is shipped as a Technology Preview feature with Red Hat Gluster Storage 3.3.1.
Clone Of:
Clones: 1501446 (view as bug list)
Environment:
Last Closed: 2017-11-29 03:29:14 UTC
Embargoed:


Attachments: none


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 476293 0 None None None 2017-02-22 11:14:03 UTC
Red Hat Knowledge Base (Solution) 508523 0 None None None Never
Red Hat Product Errata RHBA-2017:3276 0 normal SHIPPED_LIVE glusterfs bug fix update 2017-11-29 08:28:52 UTC

Description Wesley Duffee-Braun 2013-10-09 17:37:15 UTC
Description of problem:
Volume access is currently all-or-nothing with the existing volume options. It would be useful to have per-subdirectory access control, for example:

/quotatestvol/q1     someIP
/quotatestvol/q2     anotherIP
/quotatestvol/q3     bothIPs
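
A hedged sketch of what such a configuration might look like, assuming an auth.allow-style syntax for subdirectories (the IP addresses below are placeholders):

  # restrict each subdirectory to its own client address(es);
  # multiple addresses for one subdirectory separated by '|'
  gluster volume set quotatestvol auth.allow "/q1(192.0.2.10),/q2(192.0.2.20),/q3(192.0.2.10|192.0.2.20)"

  # a client then mounts only the subdirectory it is allowed to access
  mount -t glusterfs server1:/quotatestvol/q1 /mnt/q1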

Version-Release number of selected component (if applicable):
2.1

How reproducible:
Always

Steps to Reproduce:
1. Try to set up IP-range access control for volume subdirectories.

Actual results:
No way to do so

Expected results:
Supported behavior

Comment 2 Wesley Duffee-Braun 2013-10-14 15:57:07 UTC
Hello,

I was notified that the request above may not be clear. This is for the native (FUSE) client only, not for NFS.

Thanks,
Wesley

Comment 5 Wesley Duffee-Braun 2013-12-10 15:17:49 UTC
After discussion with PM, this can be marked as CLOSED WONTFIX, as the use case that generated this request has become less urgent.

Comment 6 Wesley Duffee-Braun 2013-12-10 17:08:56 UTC
Whoops - I meant to close 1029198. Re-opening this one for evaluation.

Thanks,
Wesley

Comment 7 Harold Miller 2015-09-15 16:39:17 UTC
Any action on this RFE? The last update was 18 months ago, and the customer is still interested in it.

Comment 9 Harold Miller 2015-11-23 14:58:37 UTC
This BZ is still set to NEW, is high severity, and has been open for over 2 years.
Can we at least estimate if and when it will be added to our road-map?

Comment 11 Cal Calhoun 2016-03-21 12:54:29 UTC
Can I get an update on this BZ for my customer?

Comment 12 Cal Calhoun 2017-01-06 15:40:26 UTC
@pranith: Can I get an update on the status of this BZ? I see it was tentatively scheduled for upstream 3.8. Do we know if it made it in and whether it will be included in RHGS 3.2?

Cal

Comment 13 Swagato Paul 2017-01-17 12:26:43 UTC
Hello,

I'd like to get an update on the current status of this BZ and on our future plans for it. Given that there has been no significant progress on this BZ for the last 3.5 years, the customer is concerned and has raised a management escalation.

Below is a snippet from the customer's update describing the business justification and impact:
---------
I think we have a strong business use case. Providing persistent storage for OpenShift platform in production scale - high number of projects/apps. This is currently possible only with scaling with number of nodes, but this is not acceptable due to price.
---------

The BZ is still in the "NEW" state. Let me know if any data or information is needed from the customer to proceed further.

Regards,
Swagato Paul
Escalation Manager, CEE

Comment 15 Cal Calhoun 2017-01-26 21:02:32 UTC
@pranith: Dave Carmichael is trying to set up a call to discuss expectations with IT. Do you have someone you would like to join the call?

Comment 16 Bipin Kunal 2017-02-22 11:17:24 UTC
*** Bug 1286783 has been marked as a duplicate of this bug. ***

Comment 19 Alok 2017-06-13 09:09:06 UTC
@Vijay: Please review the previous comment by Neeraj and let us know whether any alternative approach (other than the sub-directory export) might help here.

Comment 20 WenhanShi 2017-07-24 04:48:20 UTC
Hi

Do we have any update on this bug?

Comment 21 Neeraj 2017-08-22 07:06:35 UTC
Hello,

Are there any updates regarding this bug?

Regards,
Neeraj Bhatt

Comment 22 Atin Mukherjee 2017-09-21 02:53:14 UTC
upstream patch : https://review.gluster.org/17141
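
For reference, a minimal mount sketch for the native (FUSE) client using the subdirectory export, assuming the hostname:/volume/subdir mount syntax from the upstream feature (hostname, volume, and paths are placeholders):

  # mount only a subdirectory of the volume instead of the whole volume
  mount -t glusterfs server1:/testvol/subdir1 /mnt/subdir1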

Comment 26 Manisha Saini 2017-10-24 09:02:08 UTC
Verified this BZ with gluster builds 3.8.4-48, 3.8.4-49, and 3.8.4-50.
Basic sanity validation across the feature is covered.
Execution of the remaining cases will continue in the 3.3.1 test cycle, and new bugs will be raised for any issues found.
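
As an illustration of the kind of sanity check involved (the exact cases executed are not listed above; names and addresses are hypothetical, and the auth.allow subdirectory syntax is assumed from the upstream feature):

  # allow each subdirectory only from one client address
  gluster volume set testvol auth.allow "/subdir1(192.0.2.10),/subdir2(192.0.2.20)"

  # on client 192.0.2.10 this mount is expected to succeed ...
  mount -t glusterfs server1:/testvol/subdir1 /mnt/subdir1
  # ... while mounting the other subdirectory from the same client is expected to be denied
  mount -t glusterfs server1:/testvol/subdir2 /mnt/subdir2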

Comment 28 Pratik Mulay 2017-11-14 13:18:56 UTC
Hi Amar,

I've edited the Doc Text for its associated errata.

Please review it and let me know if you have any concerns.

If no changes are required, please provide your approval.

Comment 29 Amar Tumballi 2017-11-14 13:36:58 UTC
The Doc Text is technically accurate. I would like to hear from PM whether this is what users want to see, or whether they would prefer different wording for the feature.

Comment 32 errata-xmlrpc 2017-11-29 03:29:14 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3276

Comment 33 Red Hat Bugzilla 2023-09-14 23:57:18 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

