Bug 1414692 - [NFS-Ganesha] Entire volume mount is accessible if subdirectory of that volume is exported.
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: doc-Administration_Guide
Version: rhgs-3.2
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: RHGS 3.3.0
Assignee: Bhavana
QA Contact: storage-qa-internal@redhat.com
URL:
Whiteboard:
Depends On:
Blocks: 1417154
 
Reported: 2017-01-19 09:16 UTC by Arthy Loganathan
Modified: 2017-09-21 04:28 UTC

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-09-21 04:28:40 UTC
Target Upstream Version:



Description Arthy Loganathan 2017-01-19 09:16:59 UTC
Description of problem:
The entire volume mount is accessible if a subdirectory of that volume is exported.

Version-Release number of selected component (if applicable):

nfs-ganesha-gluster-2.4.1-6.el7rhgs.x86_64
glusterfs-ganesha-3.8.4-12.el7rhgs.x86_64
nfs-ganesha-2.4.1-6.el7rhgs.x86_64


How reproducible:
Always

Steps to Reproduce:
1. Create a Ganesha cluster and a volume, and export the volume.
2. Mount the volume and create deep directories in it.
3. On the server side, update the export config file to export a sub-directory.
4. Run refresh-config.
5. Ensure that showmount -e lists the sub-directory export.
6. Write files to the existing mount (see the command sketch below).
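
A minimal command sketch of these steps, assuming the Ganesha HA cluster is already set up and using the volume and paths from this report (<server> and the NFS version are placeholders):

# Step 1: export the volume through NFS-Ganesha
gluster volume set disperseVol ganesha.enable on

# Step 2: mount the export and create deep directories
mount -t nfs -o vers=4 <server>:/disperseVol /mnt/test
mkdir -p /mnt/test/d1/d2/d3/d4

# Steps 3-5: edit the export file to point at the sub-directory, refresh, verify
vi /run/gluster/shared_storage/nfs-ganesha/exports/export.disperseVol.conf
/usr/libexec/ganesha/ganesha-ha.sh --refresh-config /var/run/gluster/shared_storage/nfs-ganesha/ disperseVol
showmount -e <server>

# Step 6: write to the old whole-volume mount
touch /mnt/test/newfile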

Actual results:
The existing volume mount is still accessible and I/O is written to it.

Expected results:
The existing volume mount should become read-only.

Additional info:

Server side:

[root@dhcp47-2 ~]# cat /run/gluster/shared_storage/nfs-ganesha/exports/export.disperseVol.conf 
# WARNING : Using Gluster CLI will overwrite manual
# changes made to this file. To avoid it, edit the
# file and run ganesha-ha.sh --refresh-config.
EXPORT{
      Export_Id = 2;
      Path = "/d1/d2/d3/d4";
      FSAL {
           name = GLUSTER;
           hostname="localhost";
          volume="disperseVol";
          volpath="/d1/d2/d3/d4";
           }
      Access_type = RW;
      Disable_ACL = true;
      Squash="No_root_squash";
      Pseudo="/d1/d2/d3/d4";
      Protocols = "3", "4" ;
      Transports = "UDP","TCP";
      SecType = "sys";
      Attr_Expiration_Time = 600;
     }
[root@dhcp47-2 ~]#

/usr/libexec/ganesha/ganesha-ha.sh --refresh-config /var/run/gluster/shared_storage/nfs-ganesha/ disperseVol

[root@dhcp47-2 ~]# showmount -e
Export list for dhcp47-2.lab.eng.blr.redhat.com:
/replicaVol  (everyone)
/d1/d2/d3/d4 (everyone)


Client Side:

[root@dhcp47-176 ~]# cd /mnt/test
[root@dhcp47-176 test]# ls
d1
[root@dhcp47-176 test]# touch a
[root@dhcp47-176 test]# ls
a  d1
[root@dhcp47-176 test]# cd d1/
[root@dhcp47-176 d1]# touch b
[root@dhcp47-176 d1]#

Comment 2 Arthy Loganathan 2017-01-19 09:20:00 UTC
sosreports and ganesha logs are at,
http://rhsqe-repo.lab.eng.blr.redhat.com/sosreports/1414692/

Comment 3 Soumya Koduri 2017-01-19 10:10:08 UTC
I see this behaviour with gluster-NFS as well.


[root@dhcp35-197 ~]# showmount -e localhost
Export list for localhost:
/vol1 *
[root@dhcp35-197 ~]# mount -t nfs -o vers=3 localhost:/vol1 /mnt
[root@dhcp35-197 ~]# ls /mnt
dir1  file  tmp  tree
[root@dhcp35-197 ~]# 


Now I used gNFS options to export a sub-directory and unexport the root volume:

[root@dhcp35-197 ~]# gluster v set vol1 nfs.export-volumes off
volume set: success
[root@dhcp35-197 ~]# 
[root@dhcp35-197 ~]# gluster v set vol1 nfs.export-dir /dir1
volume set: success
[root@dhcp35-197 ~]# showmount -e localhost
Export list for localhost:
/vol1/dir1 *
[root@dhcp35-197 ~]# 

>>> I am still able to access the existing mount point
[root@dhcp35-197 ~]# touch /mnt/a3
[root@dhcp35-197 ~]# ls /mnt
a2  a3  dir1  file  tmp  tree
[root@dhcp35-197 ~]# 


>>>> But I am unable to mount the same volume on a new mount point.
[root@dhcp35-197 glusterfs]# mount -t nfs -o vers=3 localhost:/vol1 /mnt2
mount.nfs: mounting localhost:/vol1 failed, reason given by server: No such file or directory
[root@dhcp35-197 glusterfs]# mount -t nfs -o vers=3 localhost:/vol1/dir1 /mnt2
[root@dhcp35-197 glusterfs]# ls /mnt2
a1


I think this is by design. Whenever the volume was mounted, clients would have obtained the filehandle of the volume root.

To export a sub-directory, we configured the subdir path in the export config file but did not change the Export_Id.

Hence, when the old mount is accessed, the Export_Id is still valid (as there is a valid subdir export entry with the same ID), and so is the filehandle.

I think we can document that the Export_Id should be changed whenever any of the sub-directories are exported.
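
For illustration, the export block from this report with only the Export_Id changed to a value assumed to be unused (91 here is just an example):

EXPORT{
      Export_Id = 91;        # changed from 2 to an unused ID
      Path = "/d1/d2/d3/d4";
      FSAL {
           name = GLUSTER;
           hostname="localhost";
           volume="disperseVol";
           volpath="/d1/d2/d3/d4";
           }
      Access_type = RW;
      Disable_ACL = true;
      Squash="No_root_squash";
      Pseudo="/d1/d2/d3/d4";
      Protocols = "3", "4";
      Transports = "UDP","TCP";
      SecType = "sys";
      Attr_Expiration_Time = 600;
     }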

Arthy, 

Could you please confirm whether changing the Export_Id (to any value unused until then) while exporting the sub-directory resolves this issue? Thanks!

Comment 6 Soumya Koduri 2017-02-10 10:25:33 UTC
This can't be fixed. Hence we plan to document, in the Administration Guide section "Exporting Subdirectories", that Export_Id must be changed to an unused value.

Comment 7 Frank Filz 2017-02-11 00:49:52 UTC
(In reply to Soumya Koduri from comment #6)
> This can't be fixed. Hence we plan to document, in the Administration Guide
> section "Exporting Subdirectories", that Export_Id must be changed to an unused value.

Hmm, and here we actually can deliver something other NFS servers can't!

Note that even with this mechanism, a handle guessing client can still access any file in a filesystem/volume even when only some subset is exported, but at least we can shut out existing client mounts by forcing handle changes when the exported path changes.

Comment 8 Frank Filz 2017-02-11 00:51:10 UTC
On the other hand, servers that have "subtree check" can block this out, but subtree check can have problems dealing with renamed files.
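
For comparison, a kernel-NFS /etc/exports entry with subtree checking enabled would look roughly like this (illustrative only; the path and client spec are assumptions, and this does not apply to the Ganesha export above):

/vol1/dir1 *(rw,subtree_check,no_root_squash)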

Comment 11 Soumya Koduri 2017-06-27 07:25:29 UTC
The changes for this bug will be aligned with bug 1442732.

doc section:
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.2/html/administration_guide/sect-nfs#sect-NFS_Ganesha

Sub-section:
Exporting Subdirectories 

The changes needed are:

* Before exporting sub-directories, stop the volume

* Make necessary changes as already mentioned in the guide

* Change Export_Id to an unused value (preferably a larger value so that it is not re-used for any volume)

* Start the volume

Also add a note: in case there are multiple sub-directories to be exported, create an EXPORT block for each such sub-directory and then restart the nfs-ganesha service (see the command sketch below).
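
A minimal sketch of the above sequence, using the volume and export file from this report (the new Export_Id value 91 is just an example of an unused ID):

gluster volume stop disperseVol
# Edit the export file: point Path, Pseudo and volpath at the sub-directory
# and change Export_Id to an unused value, e.g. Export_Id = 91;
vi /run/gluster/shared_storage/nfs-ganesha/exports/export.disperseVol.conf
gluster volume start disperseVol
# If multiple sub-directories are exported, add one EXPORT block per
# sub-directory in the same file and then restart the nfs-ganesha service:
systemctl restart nfs-ganesha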

Comment 12 Soumya Koduri 2017-06-27 07:26:31 UTC
Instead of a service restart, we can also suggest sending a SIGHUP signal to the nfs-ganesha process.
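
For example, instead of restarting the service, the running daemon can be sent SIGHUP to make it reload its exports (a sketch; ganesha.nfsd is the usual daemon process name):

kill -HUP $(pgrep ganesha.nfsd)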

Comment 13 Bhavana 2017-07-20 09:26:14 UTC
Hi Jiffin / Soumya,

Based on our discussion today, can you please share the workaround steps for the same (or the finalized order of the steps). I guess the same is applicable to bug 1442732 too.

Comment 14 Soumya Koduri 2017-07-21 03:12:37 UTC
(In reply to Bhavana from comment #13)
> Hi Jiffin / Soumya,
> 
> Based on our discussion today, can you please share the workaround steps for
> the same (or the finalized order of the steps). I guess the same is
> applicable to bug 1442732 too.

I think comment #11 pretty much covers all the new changes needed for this bug and bug #1442732.

@Jiffin,
Please review and provide your comments.

Comment 17 Soumya Koduri 2017-07-25 10:06:16 UTC
Thanks Bhavana. A small typo in the line below:

"3. Change Export_ID to an unused value. I should preferably be a larger value so that it cannot be re-used for other volumes. "

>> "...It should preferably..."

Comment 18 Arthy Loganathan 2017-07-27 09:21:16 UTC
Verified the changes done in the documentation.

