+++ This bug was initially created as a clone of Bug #1231175 +++
+++ This bug was initially created as a clone of Bug #1231171 +++
Description of problem:
We should be able to find the total number of glusterfs NFS client mounts from the server nodes. It should list all the NFS clients.
On Wednesday 03 June 2015 09:13 AM, Pranith Kumar Karampuri wrote:
> On 06/01/2015 11:07 AM, Bipin Kunal wrote:
> > Hi All,
> > Is there a way to find the total number of gluster mounts?
> > If not, what would be the complexity for this RFE?
> > As far as I understand, finding the number of fuse mounts should be
> > possible, but it seems unfeasible for nfs and samba mounts.
> True. Bricks have connections from each of the clients. Each of the
> fuse/nfs/glustershd/quotad/glfsapi-based clients (samba/glfsheal) would
> have a separate client-context set on the bricks, so we can get this
> information. But, like you said, I am not sure how it can be done in the nfs
> server/samba. Adding more people.
Depends on why you would want to know about the clients:
1. For most use cases, the admin might just need to know how many
Samba/NFS servers are currently using the given volume (say, just to perform an umount everywhere).
In this case, each Samba/NFS server is just like a FUSE mount, and we can use the same
technique that Pranith has mentioned for the case above.
2. If the requirement is to identify all the machines which are accessing a volume
(probable use case: you may want an end-user to close a file, etc.), the
above method won't be sufficient. To get details of SMB clients, you would have to
run the 'smbstatus' command on all SMB server nodes; it outputs details of
connected SMB clients in this format:
PID     Username     Group     Machine     Protocol Version
Service     pid     machine     Connected at
> > Please let me know your precious thoughts on this.
> > Thanks,
> > Bipin Kunal
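To sketch how the 'smbstatus' approach above could be aggregated, here is a minimal, hypothetical shell helper. The node names and passwordless ssh loop are assumptions for illustration, as is the exact 'smbstatus -b' column layout; the parsing function reads stdin so the logic can be exercised on sample output without a live Samba cluster.

```shell
#!/bin/sh
# Hypothetical sketch: collect unique SMB client machines across SMB server
# nodes. Node names and ssh access are assumptions, not part of this bug.

# Parse 'smbstatus -b'-style output on stdin: skip the two header lines
# (column names and separator), print the Machine column, de-duplicate.
unique_smb_clients() {
    tail -n +3 | awk 'NF >= 4 { print $4 }' | sort -u
}

# Real usage would look something like (not run here):
#   for node in smb1 smb2; do ssh "$node" smbstatus -b; done | unique_smb_clients

# Demo on illustrative sample output:
unique_smb_clients <<'EOF'
PID   Username   Group   Machine
----------------------------------
1234  alice      users   client1 (ipv4:10.0.0.5:445)
1235  bob        users   client2 (ipv4:10.0.0.6:445)
1236  alice      users   client1 (ipv4:10.0.0.5:445)
EOF
```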
Reply from Niels, from the NFS point of view:
Gluster/NFS supports the 'showmount' command (over the MOUNT RPC
protocol). It can be used to list all the NFS-clients that have mounted an
export. This list should not be 100% trusted, though. NFSv3 uses the MOUNT RPC
protocol to get the file-handle for the mountpoint. After that, the
NFSv3 protocol can use the export for as long as it wants. When the NFS-client
unmounts the export, it sends the UMNT procedure to the NFS-server, which
causes the NFS-client/export combination to be removed from the
client-list (showmount output).
A client that does not send a UMNT will not be removed from the list of
active clients. This can happen when a client does an umount during
network issues, or when a client spontaneously reboots (kernel panic or
similar). Very similar are clients that mount the exact same export/subdir at
multiple mountpoints. The NFS-server cannot differentiate between a
client that did not send a UMNT and a client that mounts the same
export/subdir more than once. Such clients will only be listed once.
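As a small illustration of the 'showmount' usage described above: 'server1' is a placeholder hostname, and since a live NFS server cannot be assumed, the demo strips the header from sample output rather than querying a real server.

```shell
#!/bin/sh
# Sketch: list NFS-clients known to a Gluster/NFS server via the MOUNT
# protocol. 'server1' is a placeholder hostname.
#
# Real invocations (not run here):
#   showmount server1        # client hosts only
#   showmount -a server1     # client:export pairs

# Strip the "Hosts on <server>:" header so only client addresses remain.
client_list() {
    tail -n +2
}

# Demo on illustrative 'showmount' output:
client_list <<'EOF'
Hosts on server1:
10.0.0.5
10.0.0.6
EOF
```

Note that, per the caveats above, entries for clients that never sent a UMNT will linger in this list.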
Does comment #1 not provide a solution for you?
Yes, that does provide me a solution. But 'showmount' will list all the NFS mounts, irrespective of whether they are gluster volume mounts. We can filter the 'showmount' output and display only the gluster mount information.
I have opened this bug as a child bug of https://bugzilla.redhat.com/show_bug.cgi?id=1231171.
BZ 1231171 addresses a tool which will list all the gluster mounts.
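A sketch of the filtering idea above, assuming 'showmount -a --no-headers' output of the form client:/export and volume names as produced by 'gluster volume list'. Both inputs are fed in as plain text here so the filtering logic runs without a live cluster; the function name is hypothetical.

```shell
#!/bin/sh
# Sketch: keep only 'showmount -a' entries whose export matches a Gluster
# volume name. In real use the inputs would come from:
#   gluster volume list                     > /tmp/vols
#   showmount -a --no-headers some-server   | filter_gluster_mounts /tmp/vols

# $1: file with one Gluster volume name per line; stdin: client:/export lines.
filter_gluster_mounts() {
    awk -F: -v vf="$1" '
        BEGIN { while ((getline v < vf) > 0) vols["/" v] = 1 }
        $2 in vols { print }
    '
}

# Demo on illustrative data:
vols=$(mktemp)
printf 'gv0\ngv1\n' > "$vols"
filter_gluster_mounts "$vols" <<'EOF'
10.0.0.5:/gv0
10.0.0.6:/data
10.0.0.7:/gv1
EOF
rm -f "$vols"
```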
(In reply to Bipin Kunal from comment #3)
> Yes, that does provide me a solution. But 'showmount' will list all the NFS
> mounts, irrespective of whether they are gluster volume mounts. We can filter
> the 'showmount' output and display only the gluster mount information.
It is not really clear to me what the expected result of this bug is. You can use 'showmount' to display which NFS-clients mounted a volume/subdir. Is there anything else needed?
Note that you can set nfs.rmtab to a file on a Gluster/FUSE mountpoint. In that case, you can do a 'showmount' against one Gluster server, and all the clients from all servers will be listed. This comes with a performance cost while mounting, though. Mount-storms can get delayed quite a bit due to that option (Try to boot a whole HPC-cluster with hundreds of clients at once, mounting will get serialized.)
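The nfs.rmtab setting described above would be applied roughly like this; this is an untested config fragment, and 'gv0', 'server1', and the shared mountpoint path are placeholders.

```shell
# Assumption: gv0 is the volume, and /mnt/shared is a Gluster/FUSE mount of a
# volume that every Gluster server can reach.
gluster volume set gv0 nfs.rmtab /mnt/shared/rmtab

# Afterwards, querying any one server lists the clients of all servers:
showmount server1
```

Keep the mount-storm caveat above in mind before enabling this on large client fleets.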
Migrated to github:
Please follow the github issue for further updates on this bug.