Bug 1231202 - [RFE]- How to find total number of glusterfs nfs client mounts?
Summary: [RFE]- How to find total number of glusterfs nfs client mounts?
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: GlusterFS
Classification: Community
Component: nfs
Version: mainline
Hardware: All
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Assignee: bugs@gluster.org
QA Contact:
URL:
Whiteboard:
Depends On:
Blocks: 1231171
 
Reported: 2015-06-12 11:40 UTC by Bipin Kunal
Modified: 2018-12-04 08:45 UTC
CC: 5 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of: 1231175
Environment:
Last Closed: 2018-11-19 05:20:35 UTC
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:




Links
System ID Private Priority Status Summary Last Updated
Github 566 0 None None None 2020-05-04 07:22:02 UTC
Red Hat Bugzilla 1231171 0 unspecified CLOSED [RFE]- How to find total number of glusterfs client mounts? 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1231175 0 unspecified CLOSED [RFE]- How to find total number of glusterfs samba client mounts? 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1231207 0 unspecified CLOSED [RFE]- How to find total number of glusterfs fuse client mounts? 2021-02-22 00:41:40 UTC

Internal Links: 1231171 1231175 1231207

Description Bipin Kunal 2015-06-12 11:40:07 UTC
+++ This bug was initially created as a clone of Bug #1231175 +++

+++ This bug was initially created as a clone of Bug #1231171 +++

Description of problem:

We should be able to find the total number of glusterfs NFS client mounts from the server nodes. The tool should list all the NFS clients.


http://www.gluster.org/pipermail/gluster-devel/2015-June/045462.html

On Wednesday 03 June 2015 09:13 AM, Pranith Kumar Karampuri wrote:
>
>
> On 06/01/2015 11:07 AM, Bipin Kunal wrote:
> > Hi All,
> >
> >   Is there a way to find total number of gluster mounts?
> >
> >   If not, what would be the complexity for this RFE?
> >
> >   As far as I understand finding the number of fuse mount should be
> > possible but seems unfeasible for nfs and samba mounts.
> True. Bricks have connections from each of the clients. Each of the
> fuse/nfs/glustershd/quotad/glfsapi-based clients (samba/glfsheal) would
> have a separate client-context set on the bricks, so we can get this
> information. But like you said, I am not sure how it can be done in the
> nfs server/samba. Adding more people.

Depends on why you would want to know about the clients:

1. For most use cases, the admin might just need to know how many
Samba/NFS servers are currently using the given volume (say, just to perform umount everywhere).
In this case, each Samba/NFS server is just like a FUSE mount, and we can use the same
technique that Pranith has mentioned above.

2. If the requirement is to identify all the machines which are accessing a volume
(probable use case: you may want an end-user to close a file, etc.),
the above method won't be sufficient. To get details of SMB clients, you would have to
run the 'smbstatus' command on all SMB server nodes; it outputs details of the
connected SMB clients in this format:

PID     Username      Group         Machine       Protocol Version     Service      Connected at

Thanks,
Raghavendra Talur
>
> Pranith
> >
> >   Please let me know your precious thoughts on this.
> >
> > Thanks,
> > Bipin Kunal
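As a hedged sketch of the smbstatus approach from the thread above: counting the distinct client machines means collecting the "Machine" column from every SMB server node and de-duplicating. The function, column positions, and sample data below are assumptions for illustration; real input comes from running `smbstatus` on each node.

```shell
# Sketch (not a tool from this bug): count distinct SMB client machines
# from smbstatus-style output. Column layout is assumed, not guaranteed.
count_smb_clients() {
    # skip the two header lines, take the "Machine" column, de-duplicate
    awk 'NR > 2 { print $4 }' | sort -u | wc -l
}

# Made-up sample resembling the header format quoted above:
sample='PID     Username   Group   Machine      Protocol Version
------------------------------------------------------------
1234    alice      users   10.0.0.11    SMB3_11
1235    bob        users   10.0.0.12    SMB3_11
1236    alice      users   10.0.0.11    SMB3_11'

n=$(printf '%s\n' "$sample" | count_smb_clients)
echo "$n distinct SMB client machines"
```

In practice the sample would be replaced by concatenated `smbstatus` output gathered from all Samba server nodes.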

Comment 1 Bipin Kunal 2015-06-15 08:02:19 UTC
Reply from Niels from nfs point of view:
http://www.gluster.org/pipermail/gluster-devel/2015-June/045640.html

Gluster/NFS supports the 'showmount' command (over the MOUNT RPC
protocol). It can be used to list all the NFS-clients that have a
volume/subdir mounted.

This list should not be 100% trusted, though. NFSv3 uses the MOUNT RPC
protocol to get the file-handle for the mountpoint. After that, the
NFS-client can use the export for as long as it wants. When the NFS-client
unmounts the export, it sends the UMNT procedure to the NFS-server, which
causes the NFS-client/export combination to be removed from the
client-list (showmount output).

A client that does not send a UMNT will not be removed from the list of
active clients. This can happen when a client umounts during
network issues, or when a client spontaneously reboots (or kernel panics, or
...). Very similar are clients that mount the exact same export/subdir on
multiple mountpoints. The NFS-server cannot differentiate between a
client that did not send a UMNT and a client that mounts the same
export/subdir more than once. These clients will only be listed once.

HTH,
Niels
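A small sketch of the `showmount` approach described in this comment: `showmount -a SERVER` prints one "client:directory" entry per registered mount, so the unique client hosts can be extracted from it. The hostnames and exact header handling below are assumptions; on a real setup you would pipe `showmount -a SERVER` into the function.

```shell
# Sketch: extract the unique NFS client hosts from showmount -a output.
list_nfs_clients() {
    # drop the "All mount points on ..." header line, keep the host
    # part of each "client:directory" entry, de-duplicate
    sed 1d | cut -d: -f1 | sort -u
}

# Invented sample resembling showmount -a output:
sample='All mount points on server1:
client1.example.com:/gv0
client2.example.com:/gv0
client1.example.com:/gv0/subdir'

clients=$(printf '%s\n' "$sample" | list_nfs_clients)
printf '%s\n' "$clients"
```

Per the caveats above, a client that never sent UMNT stays in this list, and a client with multiple mounts of the same export appears only once.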

Comment 2 Niels de Vos 2015-06-16 12:14:42 UTC
Does comment #1 not provide a solution for you?

Comment 3 Bipin Kunal 2015-06-16 14:15:04 UTC
Niels, 

Yes, that does provide a solution. But showmount will list all NFS mounts, irrespective of whether they are Gluster volume mounts. We can take the "showmount" output and display only the Gluster mount information.

I have opened this bug as a child bug of https://bugzilla.redhat.com/show_bug.cgi?id=1231171.

BZ: 1231171 addresses a tool which will list all the gluster mounts
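The filtering proposed in this comment could be sketched as follows: keep only the showmount entries whose first export path component matches a Gluster volume name. The function, volume names, and hosts are invented for illustration; in practice the volume list would come from `gluster volume list` and the entries from `showmount -a`.

```shell
# Sketch: filter showmount "client:directory" entries down to those
# that refer to a Gluster volume.
filter_gluster_mounts() {
    volumes=$1    # newline-separated Gluster volume names
    while IFS=: read -r client mnt; do
        vol=${mnt#/}       # strip leading slash
        vol=${vol%%/*}     # keep first path component (volume name)
        if printf '%s\n' "$volumes" | grep -qx "$vol"; then
            printf '%s %s\n' "$client" "$mnt"
        fi
    done
}

# Invented entries; /data/other is a non-Gluster export:
entries='client1.example.com:/gv0
client2.example.com:/data/other
client3.example.com:/gv1/subdir'

result=$(printf '%s\n' "$entries" | filter_gluster_mounts 'gv0
gv1')
printf '%s\n' "$result"
```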

Comment 4 Niels de Vos 2015-07-12 22:22:36 UTC
(In reply to Bipin Kunal from comment #3)
> Yes, that does provide a solution. But showmount will list all NFS mounts,
> irrespective of whether they are Gluster volume mounts. We can take the
> "showmount" output and display only the Gluster mount information.

It is not really clear to me what the expected result of this bug is. You can use 'showmount' to display which NFS-clients mounted a volume/subdir. Is there anything else needed?

Note that you can set nfs.rmtab to a file on a Gluster/FUSE mountpoint. In that case, you can run 'showmount' against one Gluster server and all the clients from all servers will be listed. This comes with a performance cost while mounting, though. Mount storms can get delayed quite a bit due to that option (try to boot a whole HPC cluster with hundreds of clients at once; mounting will get serialized).
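The nfs.rmtab approach described in this comment might look like the following. VOLNAME and the mountpoint path are placeholders, and the commands must run on a Gluster server node; this is a sketch of the workflow, not a tested recipe.

```shell
# Keep the rmtab on a shared Gluster/FUSE mount so that every
# Gluster/NFS server records its clients in the same file:
gluster volume set VOLNAME nfs.rmtab /mnt/shared-fuse-mount/rmtab

# Afterwards, querying any single server lists the clients of all
# servers (with the mount-storm performance caveat noted above):
showmount -a server1
```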

Comment 5 Vijay Bellur 2018-11-19 05:41:45 UTC
Migrated to github:

https://github.com/gluster/glusterfs/issues/566

Please follow the github issue for further updates on this bug.

