+++ This bug was initially created as a clone of Bug #1231175 +++
+++ This bug was initially created as a clone of Bug #1231171 +++

Description of problem:
We should be able to find the total number of glusterfs FUSE client mounts from the server nodes. It should list all the FUSE clients.

http://www.gluster.org/pipermail/gluster-devel/2015-June/045462.html

On Wednesday 03 June 2015 09:13 AM, Pranith Kumar Karampuri wrote:
>
> On 06/01/2015 11:07 AM, Bipin Kunal wrote:
> > Hi All,
> >
> > Is there a way to find the total number of gluster mounts?
> >
> > If not, what would be the complexity for this RFE?
> >
> > As far as I understand, finding the number of FUSE mounts should be
> > possible, but it seems unfeasible for NFS and Samba mounts.
> True. Bricks have connections from each of the clients. Each of
> fuse/nfs/glustershd/quotad/glfsapi-based-clients (samba/glfsheal) would
> have a separate client-context set on the bricks. So we can get this
> information. But like you said, I am not sure how it can be done in the
> NFS server/Samba. Adding more people.

It depends on why you would want to know about the clients:

1. For most use cases, the admin might just need to know how many Samba/NFS
   servers are currently using the given volume (say, just to perform a umount
   everywhere). In this case, each Samba/NFS server is just like a FUSE mount,
   and we can use the same technique that we would use for the case Pranith
   has mentioned above.

2. If the requirement is to identify all the machines which are accessing a
   volume (probable use case: you may want an end user to close a file, etc.),
   the above method won't be sufficient. To get details of SMB clients, you
   would have to run the 'smbstatus' command on all SMB server nodes, and it
   would output the details of the connected SMB clients in this format:

   PID     Username     Group       Machine     Protocol Version
   Service      pid     machine     Connected at

Thanks,
Raghavendra Talur

>
> Pranith
>
> > Please let me know your precious thoughts on this.
> >
> > Thanks,
> > Bipin Kunal
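As a rough, non-authoritative sketch of point 2 above (the host names are placeholders, passwordless ssh to the Samba nodes is assumed, and the smbstatus column layout varies between Samba versions):

# run smbstatus on every SMB server node to see its connected clients
for node in smb-node1 smb-node2; do          # placeholder host names
    echo "=== $node ==="
    ssh "$node" smbstatus -b                 # brief listing: PID, Username, Group, Machine
done

Each node only knows about its own SMB connections, so the per-node lists would still have to be merged to get a cluster-wide view.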
Would the command "gluster volume status $VOLNAME clients" not list what you need?
Niels,

Thanks, I was not aware of this command. That is similar to what I am looking for, but I don't think it is giving me the correct output with all the expected details. Have a look at the output below:

[root@rhs3-master1 ~]# gluster volume status dist clients
Client connections for volume dist
----------------------------------------------
Brick : dell-per510-3.gsslab.pnq.redhat.com:/thinvol2/brick2/data
Clients connected : 5
Hostname                 BytesRead    BytesWritten
--------                 ---------    ------------
10.65.208.242:1017            1688            1220
10.65.208.243:1017            1688            1220
10.65.208.191:988       1730424624         1355096
10.65.208.191:1021            1940            1512
10.65.223.40:1020             3856            3244
----------------------------------------------
Brick : dell-per510-4.gsslab.pnq.redhat.com:/thinvol2/brick2/data
Clients connected : 5
Hostname                 BytesRead    BytesWritten
--------                 ---------    ------------
10.65.208.242:1001            1328             892
10.65.208.243:1001            1328             892
10.65.208.191:1013      2595607036         2008004
10.65.208.191:1014            1584            1184
10.65.223.40:1019             3508            2916
----------------------------------------------

Looking for IP 10.65.208.191:

[root@rhs3-master1 ~]# ifconfig
em1       Link encap:Ethernet  HWaddr 78:2B:CB:6B:A7:CC
          inet addr:10.65.208.191  Bcast:10.65.211.255  Mask:255.255.252.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:35663714 errors:0 dropped:0 overruns:0 frame:0
          TX packets:25964117 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:12463022605 (11.6 GiB)  TX bytes:28700121594 (26.7 GiB)

[root@rhs3-master1 ~]# mount | grep dist
dell-per510-3.gsslab.pnq.redhat.com:/dist on /glusterperf-test type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
dell-per510-3.gsslab.pnq.redhat.com:/dist on /nfsperf-test type nfs (rw,vers=3,mountproto=tcp,addr=10.65.208.191,mountaddr=10.65.208.191)

So the 2 connections for IP 10.65.208.191 are fine.

Now moving to IP 10.65.223.40:

[root@dhcp223-147 Downloads]# ifconfig
enp0s25: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.65.223.40  netmask 255.255.254.0  broadcast 10.65.223.255
        inet6 fe80::2ad2:44ff:fe80:38fb  prefixlen 64  scopeid 0x20<link>
        ether 28:d2:44:80:38:fb  txqueuelen 1000  (Ethernet)
        RX packets 6942007  bytes 8366944579 (7.7 GiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8169932  bytes 8948338278 (8.3 GiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 20  memory 0xf0600000-f0620000

[root@dhcp223-147 Downloads]# mount | grep dist
dell-per510-3.gsslab.pnq.redhat.com:/dist on /mnt/nfsperf-test type nfs (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.65.208.191,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=10.65.208.191)
dell-per510-3.gsslab.pnq.redhat.com:/dist on /mnt/glusterperf-test type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@dhcp223-147 Downloads]#

Here we have 2 client mounts, but "gluster volume status dist clients" lists only one connection.

Now moving to IP 10.65.208.242:

[root@rhs3-master3 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 5C:F3:FC:BA:EB:D0
          inet addr:10.65.208.242  Bcast:10.65.211.255  Mask:255.255.252.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:54158773 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13179497 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:13322920519 (12.4 GiB)  TX bytes:2124362207 (1.9 GiB)

[root@rhs3-master3 ~]# mount | grep dist
[root@rhs3-master3 ~]#

No client mount is seen on this node.
Now moving to IP 10.65.208.243:

[root@rhs3-master4 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 5C:F3:FC:BA:ED:AC
          inet addr:10.65.208.243  Bcast:10.65.211.255  Mask:255.255.252.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:86267336 errors:0 dropped:0 overruns:0 frame:0
          TX packets:21727911 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:24354239948 (22.6 GiB)  TX bytes:2943301706 (2.7 GiB)

[root@rhs3-master4 ~]# mount | grep dist
[root@rhs3-master4 ~]#

No client mount is seen on this node.
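As a rough sketch only (it assumes the IP:port pairs start the client lines exactly as in the listing above), the per-brick output can at least be collapsed into a de-duplicated set of client IPs, although that still cannot tell FUSE mounts apart from NFS-server, self-heal or other gfapi connections:

[root@rhs3-master1 ~]# gluster volume status dist clients | awk -F: '/^[0-9]+\./ {print $1}' | sort -u

This prints each connected IP once, which is still not the per-mount FUSE count being asked for here.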
The efforts to address this problem will be tracked at https://github.com/gluster/glusterfs/issues/316.
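Whatever lands there may end up looking roughly like the sketch below; the 'client-list' subcommand name and its behaviour are an assumption for illustration only, not something confirmed by this report:

# hypothetical usage once the enhancement from issue #316 is available
gluster volume status dist client-list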