Description of problem:
The option nfs.rpc-auth-reject is used to prevent certain client(s) from mounting a volume. This option works for a volume mount but not for a subdirectory mount: on a client in the reject list, the volume mount fails as expected, whereas a subdir mount of the same volume still succeeds.

Version-Release number of selected component (if applicable):
glusterfs-3.4.0.53rhs-1.el6rhs.x86_64

How reproducible:
always

Steps to Reproduce:
1. Create a volume and start it.
2. Set the option nfs.addr-namelookup to "on".
3. Set the option nfs.rpc-auth-reject to the client(s) for which mounting should not be allowed.
4. From the client mentioned in step 3, try a volume mount.
5. From the same client, try a subdir mount.

Actual results:

Step 4 --- unsuccessful:

[root@rhsauto005 ~]# mount -t nfs 10.70.35.219:dist-rep /mnt/nfs-test
mount.nfs: access denied by server while mounting 10.70.35.219:dist-rep

Step 5 --- successful, whereas it should fail:

[root@rhsauto005 ~]# mount -t nfs 10.70.35.219:dist-rep/dir1 /mnt/nfs-test

Volume info:

[root@quota5 ~]# gluster volume info dist-rep

Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 4ee792db-48f0-463e-8ad1-d1507d161227
Status: Started
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.35.219:/rhs/brick1/d1r1
Brick2: 10.70.35.108:/rhs/brick1/d1r2
Brick3: 10.70.35.191:/rhs/brick1/d2r1
Brick4: 10.70.35.144:/rhs/brick1/d2r2
Brick5: 10.70.35.219:/rhs/brick1/d3r1
Brick6: 10.70.35.108:/rhs/brick1/d3r2
Brick7: 10.70.35.191:/rhs/brick1/d4r1
Brick8: 10.70.35.144:/rhs/brick1/d4r2
Brick9: 10.70.35.219:/rhs/brick1/d5r1
Brick10: 10.70.35.108:/rhs/brick1/d5r2
Brick11: 10.70.35.191:/rhs/brick1/d6r1
Brick12: 10.70.35.144:/rhs/brick1/d6r2
Options Reconfigured:
nfs.rpc-auth-reject: rhsauto005.lab.eng.blr.redhat.com
nfs.rpc-auth-allow: rhsauto002.lab.eng.blr.redhat.com
nfs.addr-namelookup: on
features.quota: off

Expected results:
The subdir mount of the same volume should also not be allowed.

Additional info:
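The reproduction steps above can be sketched as the following CLI session (a sketch only: the volume name, one brick pair, and the rejected hostname are taken from this report; adjust for your environment and add the remaining bricks):

```shell
# On a gluster server node: create and start the volume (first brick pair shown;
# the reported volume has 6 x 2 = 12 bricks)
gluster volume create dist-rep replica 2 \
    10.70.35.219:/rhs/brick1/d1r1 10.70.35.108:/rhs/brick1/d1r2
gluster volume start dist-rep

# Require reverse DNS lookups so hostname-based auth rules can match
gluster volume set dist-rep nfs.addr-namelookup on
# Reject NFS mounts from this client
gluster volume set dist-rep nfs.rpc-auth-reject rhsauto005.lab.eng.blr.redhat.com

# On the rejected client: both mounts should be denied.
mount -t nfs 10.70.35.219:/dist-rep /mnt/nfs-test        # denied, as expected
mount -t nfs 10.70.35.219:/dist-rep/dir1 /mnt/nfs-test   # succeeds on the buggy build
```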
Looking into this.
Do we have a workaround for this?
Upstream review: http://review.gluster.org/#/c/6655/
(In reply to Gowrishankar Rajaiyan from comment #4) > Do we have a workaround for this ? AFAIK, there is no workaround; it needs a code fix. Saurabh, do you know of any?
https://code.engineering.redhat.com/gerrit/#/c/18087/
(In reply to santosh pradhan from comment #7) > https://code.engineering.redhat.com/gerrit/#/c/18087/ This patch is abandoned because RHS 3.0 branch is cut from upstream-master which already had the fix: http://review.gluster.org/#/c/6655/
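For context, the class of bug here can be illustrated with a small sketch. This is not glusterfs source code; the function and variable names are hypothetical. The idea: if the mount-time auth check matches reject rules against the raw export string, "dist-rep" is rejected but "dist-rep/dir1" slips through; resolving a subdir export back to its parent volume before checking closes the gap.

```python
def parent_volume(export_path):
    """Map an export like 'dist-rep/dir1' back to its volume, 'dist-rep'."""
    return export_path.strip("/").split("/", 1)[0]

def mount_allowed(export_path, client, reject_rules):
    """reject_rules maps a volume name to a set of rejected client hostnames.

    Checking the parent volume (rather than the raw export string) makes the
    reject rule cover subdir mounts as well as volume mounts.
    """
    volume = parent_volume(export_path)
    return client not in reject_rules.get(volume, set())

rules = {"dist-rep": {"rhsauto005.lab.eng.blr.redhat.com"}}
# Both the volume mount and the subdir mount are now rejected:
print(mount_allowed("dist-rep", "rhsauto005.lab.eng.blr.redhat.com", rules))       # False
print(mount_allowed("dist-rep/dir1", "rhsauto005.lab.eng.blr.redhat.com", rules))  # False
```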
Merged as part of the rebase.
From the client:

[root@rhsauto038 ~]# mount -t nfs -o vers=3 10.70.37.62:/dist-rep /mnt/nfs-test
mount.nfs: access denied by server while mounting 10.70.37.62:/dist-rep
[root@rhsauto038 ~]# mount -t nfs -o vers=3 10.70.37.62:/dist-rep/dir /mnt/nfs-test
mount.nfs: access denied by server while mounting 10.70.37.62:/dist-rep/dir

From the host server:

[root@nfs1 ~]# gluster volume info dist-rep

Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 98fb382d-a5ca-4cb6-bde1-579608485527
Status: Started
Snap Volume: no
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.62:/bricks/d1r1
Brick2: 10.70.37.215:/bricks/d1r2
Brick3: 10.70.37.44:/bricks/d2r1
Brick4: 10.70.37.201:/bricks/d2r2
Brick5: 10.70.37.62:/bricks/d3r1
Brick6: 10.70.37.215:/bricks/d3r2
Brick7: 10.70.37.44:/bricks/d4r1
Brick8: 10.70.37.201:/bricks/d4r2
Brick9: 10.70.37.62:/bricks/d5r1
Brick10: 10.70.37.215:/bricks/d5r2
Brick11: 10.70.37.44:/bricks/d6r1
Brick12: 10.70.37.201:/bricks/d6r2
Options Reconfigured:
nfs.addr-namelookup: on
nfs.rpc-auth-reject: rhsauto038.lab.eng.blr.redhat.com
features.quota-deem-statfs: on
features.quota: on
performance.readdir-ahead: on
snap-max-hard-limit: 256
snap-max-soft-limit: 90
auto-delete: disable

Both the volume mount and the subdir mount are now denied on the rejected client; hence, moving the BZ to VERIFIED.
Hi Santosh, Please review the edited doc text for technical accuracy and sign off.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2014-1278.html
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days