Bug 1048749 - nfs: while using the option nfs.rpc-auth-reject, a volume mount fails but a subdirectory mount still is successful
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: glusterd
Version: 2.1
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: ---
Target Release: RHGS 3.0.0
Assignee: Vivek Agarwal
QA Contact: Saurabh
URL:
Whiteboard:
Depends On:
Blocks: 1049225
 
Reported: 2014-01-06 09:57 UTC by Saurabh
Modified: 2023-09-14 01:56 UTC
CC List: 8 users

Fixed In Version: glusterfs-3.6.0.0-1.el6rhs
Doc Type: Bug Fix
Doc Text:
Previously, a subdirectory mount request succeeded even when the requesting host was listed in the nfs.rpc-auth-reject option. With this fix, clients requesting a mount are validated against nfs.rpc-auth-reject irrespective of the type of mount (volume mount or subdirectory mount). As a result, if a host is listed in nfs.rpc-auth-reject, any mount request from that host fails.
Clone Of:
Clones: 1049225
Environment:
Last Closed: 2014-09-22 19:31:13 UTC
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2014:1278 0 normal SHIPPED_LIVE Red Hat Storage Server 3.0 bug fix and enhancement update 2014-09-22 23:26:55 UTC

Description Saurabh 2014-01-06 09:57:10 UTC
Description of problem:
The nfs.rpc-auth-reject option is used to reject mount requests from the specified client(s).
This option works as expected for a volume mount but not for a subdirectory mount: on a client in the reject list, a volume mount fails as expected, whereas a subdirectory mount of the same volume still succeeds.
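
For context, nfs.rpc-auth-reject takes a comma-separated list of addresses and/or hostnames, set per volume via the gluster CLI; rejecting by hostname is why step 2 of the reproduction steps below turns nfs.addr-namelookup on. A minimal sketch (the volume name, hostname, and IP are placeholders, not the reproduction environment):

gluster volume set dist-rep nfs.rpc-auth-reject "badclient.example.com,10.0.0.5"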

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.53rhs-1.el6rhs.x86_64

How reproducible:
always

Steps to Reproduce:
1. create a volume and start it
2. set the option nfs.addr-namelookup to "on"
3. set the option nfs.rpc-auth-reject to the client(s) that should not be allowed to mount
4. from a client listed in step 3, try a volume mount
5. from the same client, try a subdirectory mount (a command sketch follows)
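
As a command sketch (server hostname, brick paths, and the subdirectory name are placeholders loosely modeled on the volume info below, not the exact reproduction environment):

# on a server node
gluster volume create dist-rep replica 2 server1:/rhs/brick1/d1r1 server2:/rhs/brick1/d1r2
gluster volume start dist-rep
gluster volume set dist-rep nfs.addr-namelookup on
gluster volume set dist-rep nfs.rpc-auth-reject rejected-client.example.com

# on the rejected client: the first mount fails as expected,
# the second (subdirectory) mount should fail but succeeds before the fix
mount -t nfs -o vers=3 server1:/dist-rep /mnt/nfs-test
mount -t nfs -o vers=3 server1:/dist-rep/dir1 /mnt/nfs-test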

Actual results:
step 4 --- unsuccessful
[root@rhsauto005 ~]# mount -t nfs 10.70.35.219:dist-rep /mnt/nfs-test
mount.nfs: access denied by server while mounting 10.70.35.219:dist-rep

step 5 --- successful, whereas it should fail
[root@rhsauto005 ~]# mount -t nfs 10.70.35.219:dist-rep/dir1 /mnt/nfs-test
[root@rhsauto005 ~]# 

volume info,
[root@quota5 ~]# gluster volume info dist-rep
 
Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 4ee792db-48f0-463e-8ad1-d1507d161227
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.35.219:/rhs/brick1/d1r1
Brick2: 10.70.35.108:/rhs/brick1/d1r2
Brick3: 10.70.35.191:/rhs/brick1/d2r1
Brick4: 10.70.35.144:/rhs/brick1/d2r2
Brick5: 10.70.35.219:/rhs/brick1/d3r1
Brick6: 10.70.35.108:/rhs/brick1/d3r2
Brick7: 10.70.35.191:/rhs/brick1/d4r1
Brick8: 10.70.35.144:/rhs/brick1/d4r2
Brick9: 10.70.35.219:/rhs/brick1/d5r1
Brick10: 10.70.35.108:/rhs/brick1/d5r2
Brick11: 10.70.35.191:/rhs/brick1/d6r1
Brick12: 10.70.35.144:/rhs/brick1/d6r2
Options Reconfigured:
nfs.rpc-auth-reject: rhsauto005.lab.eng.blr.redhat.com
nfs.rpc-auth-allow: rhsauto002.lab.eng.blr.redhat.com
nfs.addr-namelookup: on
features.quota: off


Expected results:
The subdirectory mount of the same volume should also be rejected.

Additional info:

Comment 2 santosh pradhan 2014-01-06 12:25:06 UTC
Looking into this.

Comment 4 Gowrishankar Rajaiyan 2014-01-07 08:52:24 UTC
Do we have a workaround for this?

Comment 5 santosh pradhan 2014-01-07 09:14:26 UTC
Upstream review:
http://review.gluster.org/#/c/6655/

Comment 6 santosh pradhan 2014-01-07 09:17:22 UTC
(In reply to Gowrishankar Rajaiyan from comment #4)
> Do we have a workaround for this?

AFAIK, there is no workaround; it needs a code fix.

Saurabh, Do you know any?

Comment 7 santosh pradhan 2014-01-07 16:30:35 UTC
https://code.engineering.redhat.com/gerrit/#/c/18087/

Comment 8 santosh pradhan 2014-05-22 11:27:12 UTC
(In reply to santosh pradhan from comment #7)
> https://code.engineering.redhat.com/gerrit/#/c/18087/

This patch was abandoned because the RHS 3.0 branch was cut from upstream master, which already had the fix: http://review.gluster.org/#/c/6655/

Comment 9 Vivek Agarwal 2014-05-22 11:42:55 UTC
Merged as part of the rebase.

Comment 10 Saurabh 2014-06-19 11:03:35 UTC
from client,
[root@rhsauto038 ~]# mount -t nfs -o vers=3 10.70.37.62:/dist-rep /mnt/nfs-test
mount.nfs: access denied by server while mounting 10.70.37.62:/dist-rep
[root@rhsauto038 ~]# mount -t nfs -o vers=3 10.70.37.62:/dist-rep/dir /mnt/nfs-test
mount.nfs: access denied by server while mounting 10.70.37.62:/dist-rep/dir

from host-server,
[root@nfs1 ~]# gluster volume info dist-rep
 
Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 98fb382d-a5ca-4cb6-bde1-579608485527
Status: Started
Snap Volume: no
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.62:/bricks/d1r1
Brick2: 10.70.37.215:/bricks/d1r2
Brick3: 10.70.37.44:/bricks/d2r1
Brick4: 10.70.37.201:/bricks/d2r2
Brick5: 10.70.37.62:/bricks/d3r1
Brick6: 10.70.37.215:/bricks/d3r2
Brick7: 10.70.37.44:/bricks/d4r1
Brick8: 10.70.37.201:/bricks/d4r2
Brick9: 10.70.37.62:/bricks/d5r1
Brick10: 10.70.37.215:/bricks/d5r2
Brick11: 10.70.37.44:/bricks/d6r1
Brick12: 10.70.37.201:/bricks/d6r2
Options Reconfigured:
nfs.addr-namelookup: on
nfs.rpc-auth-reject: rhsauto038.lab.eng.blr.redhat.com
features.quota-deem-statfs: on
features.quota: on
performance.readdir-ahead: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable


Hence, moving the BZ to VERIFIED.

Comment 11 Pavithra 2014-08-07 06:01:59 UTC
Hi Santosh,

Please review the edited doc text for technical accuracy and sign off.

Comment 14 errata-xmlrpc 2014-09-22 19:31:13 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1278.html

Comment 15 Red Hat Bugzilla 2023-09-14 01:56:28 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

