Bug 765288 (GLUSTER-3556)

Summary: [FEAT] Support retrieval of nfs-server.vol from glusterfs clients
Product: [Community] GlusterFS
Reporter: Louis Zuckerman <glusterbugs>
Component: nfs
Assignee: bugs <bugs>
Status: CLOSED EOL
QA Contact:
Severity: low
Docs Contact:
Priority: medium
Version: mainline
CC: bugs, gluster-bugs, joe, vijay
Target Milestone: ---
Keywords: FutureFeature, Triaged
Target Release: ---
Hardware: All
OS: All
Whiteboard:
Fixed In Version:
Doc Type: Enhancement
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2015-10-22 15:46:38 UTC
Type: ---
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Louis Zuckerman 2011-09-14 20:15:59 UTC
Enhancement:

Enable retrieval & loading of the nfs-server.vol file by the glusterfs client program.  

Example (on a "client" machine, not a peer):

glusterfs --volfile-id=nfs-server --volfile-server=glusterfs.example.net

Result:

A glusterfs client process launches, retrieves the glusterd/nfs/nfs-server.vol file from the volfile-server specified, and uses that to run a gluster NFS server on the local "client" machine.  Then glusterfs volumes can be mounted using NFS from localhost.
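As a minimal sketch of that last step (volume name and mount point are hypothetical, not from this report): once the local gluster NFS server is running, a volume would be mounted over NFSv3 from localhost, e.g.:

```shell
# Mount a gluster volume (hypothetically named "myvol") via the locally
# running gluster NFS server; gluster NFS serves NFSv3 over TCP.
mount -t nfs -o vers=3,tcp,nolock localhost:/myvol /mnt/myvol
```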

Motivation:

This combines the fault-tolerance of the glusterfs FUSE client with the small-file performance of an NFS client, while supporting dynamic volume changes for NFS clients without requiring the "client" machine to be probed into the trusted storage pool.

This can be accomplished currently by probing the "client" machine into the trusted storage pool, so that a gluster NFS server gets set up by glusterd.  Then volumes can be mounted using NFS from localhost.  The downside of this current solution is that it is very difficult to automate the process of setting up such a "client" machine.  This enhancement would enable the same functionality to be completely automated on the client side, without requiring any probe action to be taken on an existing glusterfs peer server.
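For illustration, the current workaround described above might look like this on the command line (hostnames and volume name are hypothetical):

```shell
# On an existing peer: probe the "client" machine into the trusted
# storage pool, so glusterd on it sets up a gluster NFS server.
gluster peer probe webclient.example.net

# Afterwards, on the "client" machine: mount the volume from localhost.
mount -t nfs -o vers=3,tcp,nolock localhost:/myvol /mnt/myvol
```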

Use case:

A specific use case for this would be web serving in EC2, where a glusterfs storage cluster provides file storage to an autoscaled group of frontend web servers.  In this case the proposed enhancement provides a fully automated way to achieve the fault-tolerance of the glusterfs FUSE client and the small-file performance for read-heavy web serving workloads of an NFS client.

Thank you.

Comment 1 Amar Tumballi 2011-09-23 02:20:33 UTC
From 3.3.0qa10 onwards (ie, current git master branch head), one can run the command below to fetch the nfs-server.vol file from the server:

'bash# glusterfs -s <server> --volfile-id gluster/nfs'

Please confirm that is enough for running nfs server logs.

Comment 2 Amar Tumballi 2011-09-23 02:21:19 UTC
 
> Please confirm that is enough for running nfs server <process>.

Comment 3 Louis Zuckerman 2011-09-23 17:39:58 UTC
Well it does work to retrieve the nfs server volfile, but it does not support dynamic updates like a client should.  Here are some things I've tried that don't work right using this new feature...

1. Adding bricks to a volume...

After a volume has been created and mounted on a remote machine using this new feature, if bricks are added, the new bricks are not used by the nfs client.  This means files written through the NFS mount will only be distributed to the initial bricks, not the added bricks.  It also means that only files on the initial bricks will be visible when reading through the NFS mount; any files added via other clients that were distributed to the new bricks will not be visible to this NFS client.

Checking the logs, it does get an update with the new volfile, and it does say it connects to the new bricks, however they are not used for reading or writing.  Even doing a rebalance on the server does not solve the problem.
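The add-brick sequence being described could be reproduced roughly as follows (volume name, hostname, and brick path are hypothetical):

```shell
# On a storage peer: expand the volume with a new brick.
gluster volume add-brick myvol server3.example.net:/export/brick1

# Optionally rebalance so existing files migrate onto the new brick;
# per the report above, even this does not fix what the remote NFS
# client reads or writes.
gluster volume rebalance myvol start
```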

2. Adding volumes to the cluster

After this new client feature has been started and pulls the initial nfs-server.vol from a peer, if new volumes are added to the cluster then this client is not updated with the new nfs-server.vol file.

Checking the logs, it says that a volume change was detected but that there was no change to the graph, so it does not update.

If another action is taken on the original volume, such as the add-brick described above, then the nfs-server.vol file is updated with the new nfs server graph, including both the original and the new volumes; however, the new volume cannot be mounted.

Thanks again.

Comment 4 Amar Tumballi 2011-09-27 05:50:10 UTC
Planning to keep the 3.4.x branch as an "internal enhancements" release without any features. So moving these bugs to the 3.4.0 target milestone.

Comment 5 Amar Tumballi 2011-10-02 02:46:15 UTC
Had a look at the issue again today. Actually, the nfs volume file does get changed when one does 'add-brick' etc. on the server. But because nfs works with FHs (file handles), which need the file to stay open all the time, the new graph does not take effect unless we restart the server process.

We need fuse-like 'resolve_and_resume' in nfs if we have to support such functionality. The current workaround is to restart the nfs server process (as is done on the server side, where 'glusterd' is running).
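As a sketch of that workaround (the process-matching pattern is an assumption, adjust to however the process was launched): kill the fetched NFS server process and relaunch it so it rebuilds its graph from the current volfile:

```shell
# Kill the locally running gluster NFS server process...
pkill -f 'glusterfs.*volfile-id[= ]gluster/nfs'

# ...and start it again so it fetches the current nfs-server.vol and
# builds a fresh graph that includes the new bricks/volumes.
glusterfs -s glusterfs.example.net --volfile-id gluster/nfs
```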

Comment 6 Kaleb KEITHLEY 2015-10-22 15:46:38 UTC
Because of the large number of bugs filed against it, the "mainline" version is ambiguous and about to be removed as a choice.

If you believe this is still a bug, please change the status back to NEW and choose the appropriate, applicable version for it.