Description of problem:
Set values for the options nfs.rpc-auth-allow and nfs.rpc-auth-reject using the
"gluster volume set" command; things worked as expected. The issue appeared once
"gluster volume reset" was used to bring the options back to their defaults:
after that, things did not work as expected.

Version-Release number of selected component (if applicable):
glusterfs-3.6.0.9-1.el6rhs.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create a 6x2 volume and start it.
2. gluster volume set <vol-name> nfs.rpc-auth-allow "host1"
3. gluster volume set <vol-name> nfs.rpc-auth-reject "host2"
4. Try to mount from host1 and host2.
   Result: host1 (PASS) and host2 (FAIL) --- as expected.
5. gluster volume reset <vol-name>
6. Try to mount from host1 and host2 again.

Actual results:
host1: the mount PASSes --- as expected.
host2: the mount FAILs --- unexpected for step 6, since the options have been reset.

Result from host2:
[root@konsoul ~]# mount -t nfs -o vers=3 10.70.37.62:/dist-rep /mnt/nfs-test
mount.nfs: access denied by server while mounting 10.70.37.62:/dist-rep

gluster volume info:
[root@nfs2 ~]# gluster volume info

Volume Name: dist-rep
Type: Distributed-Replicate
Volume ID: 479f93d9-ed9b-4097-8d95-7a0657ee912f
Status: Started
Snap Volume: no
Number of Bricks: 6 x 2 = 12
Transport-type: tcp
Bricks:
Brick1: 10.70.37.62:/bricks/d1r1
Brick2: 10.70.37.215:/bricks/d1r2
Brick3: 10.70.37.44:/bricks/d2r1
Brick4: 10.70.37.201:/bricks/d2r2
Brick5: 10.70.37.62:/bricks/d3r1
Brick6: 10.70.37.215:/bricks/d3r2
Brick7: 10.70.37.44:/bricks/d4r1
Brick8: 10.70.37.201:/bricks/d4r2
Brick9: 10.70.37.62:/bricks/d5r1
Brick10: 10.70.37.215:/bricks/d5r2
Brick11: 10.70.37.44:/bricks/d6r1
Brick12: 10.70.37.201:/bricks/d6r2
Options Reconfigured:
features.quota: off

Expected results:
Step 6 mounts should PASS from both hosts.

Additional info:
Good catch. Could you try with a host3 that was not in the allow/reject list?
Yes, the mount works on host3:

[root@rhsauto009 ~]# showmount -e 10.70.37.62
Export list for 10.70.37.62:
/dist-rep *
[root@rhsauto009 ~]# mkdir /mnt/nfs-test
[root@rhsauto009 ~]# mount -t nfs -o vers=3 10.70.37.62:/dist-rep /mnt/nfs-test
[root@rhsauto009 ~]# ls /mnt/nfs-test
file  file1
[root@rhsauto009 ~]#
Posted the patch for review: http://review.gluster.org/#/c/7931/
*** Bug 1048761 has been marked as a duplicate of this bug. ***
The fix is already accepted upstream. I am not sure if we are planning to include the fix in Denali.

Commit details:
===============
commit 211785f29904995324bfd3c7fa4b35a498bf632a
Author: Santosh Kumar Pradhan <spradhan>
Date:   Fri May 30 12:37:23 2014 +0530

    rpc: Reconfigure() does not work for auth-reject

    Problem:
    If a volume is set with rpc-auth.addr.<volname>.reject with the value
    "host1", ideally the NFS mount from "host1" should FAIL. That works as
    expected. But when the volume is RESET, the value previously set for
    auth-reject should go away, and a further NFS mount from "host1" should
    PASS. Instead it FAILs, because of a stale value in the dict for the key
    "rpc-auth.addr.<volname>.reject". The rpc-auth.addr.<volname>.allow key
    is not affected because, each time the NFS volfile gets generated, the
    allow key gets "*" as its default value; the reject key has no default
    value.

    FIX:
    Delete the OLD value for the key unconditionally. Add a NEW value for
    the key if and only if it is SET in the reconfigured new volfile.

    Signed-off-by: Santosh Kumar Pradhan <spradhan>

    Change-Id: Ie80bd16cd1f9e32c51f324f2236122f6d118d860
    BUG: 1103050
    Reviewed-on: http://review.gluster.org/7931
    Reviewed-by: Niels de Vos <ndevos>
    Tested-by: Gluster Build System <jenkins.com>
    Reviewed-by: Rajesh Joseph <rjoseph>
    Reviewed-by: Anand Avati <avati>

Thanks,
Santosh
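For readers who want a feel for the "delete old, then conditionally set new" pattern the commit message describes without opening the Gerrit change, here is a minimal illustrative sketch. It is NOT the code from http://review.gluster.org/7931; the function name reconf_auth_option and how it would be wired into the reconfigure path are hypothetical. Only the libglusterfs dict helpers (dict_del, dict_get_str, dict_set_str) are real APIs.

/* Illustrative sketch only -- not the actual patch.
 * Pattern from the commit message: always delete the stale value first,
 * then re-add the key only if the reconfigured volfile sets it.
 */
#include "dict.h"   /* libglusterfs dict API */

static int
reconf_auth_option (dict_t *active_opts, dict_t *new_volfile_opts, char *key)
{
        char *value = NULL;

        /* Drop the stale value unconditionally, so that a "volume reset"
         * really clears e.g. rpc-auth.addr.<volname>.reject. */
        dict_del (active_opts, key);

        /* Re-add the key only when the reconfigured volfile sets it. */
        if (dict_get_str (new_volfile_opts, key, &value) == 0)
                return dict_set_str (active_opts, key, value);

        return 0;
}

The point of the pattern: unlike the allow key, the reject key has no "*" default in a freshly generated volfile, so without the unconditional delete the old reject list survives a volume reset.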
Executed the same test on the build glusterfs-3.6.0.33-1.el6rhs.x86_64 and found that it now works fine.

1. Create a 6x2 volume and start it.
2. gluster volume set <vol-name> nfs.rpc-auth-allow "host1"
3. gluster volume set <vol-name> nfs.rpc-auth-reject "host2"
4. Try to mount from host1 and host2.
   Result: host1 (PASS) and host2 (FAIL) --- as expected.
5. gluster volume reset <vol-name>
6. Try to mount from host1 and host2.
Niels, could you review the edited doc text and sign off?
Made a minor adjustment, looks good to me.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2015-0038.html