Bug 1385040

Summary: gam_server crashing repeatedly
Product: Red Hat Enterprise Linux 6
Component: gamin
Version: 6.8
Hardware: x86_64
OS: Linux
Status: CLOSED WONTFIX
Severity: high
Priority: unspecified
Target Milestone: rc
Reporter: Joe Wright <jwright>
Assignee: Ondrej Holy <oholy>
QA Contact: Desktop QE <desktop-qa-list>
CC: aiyengar, alanm, cww, derfian, dkaylor, greg.matthews, jwright, oholy, rick.beldin, sydelko, walters, wbaudler
Type: Bug
Last Closed: 2017-08-15 20:08:19 UTC
Attachments:
- /etc/gamin/gaminrc file
- /etc/gamin/mandatory_gaminrc file
- /etc/mtab
- gamin debug log, PID 5248
- gamin debug log, PID 5294
- kernel segfault messages, PID 5248, 5294, 5365, 5384, 6427
- valgrind

Description Joe Wright 2016-10-14 15:39:49 UTC
Description of problem:
- Numerous desktop sessions running under ThinLinc; gam_server is segfaulting many times per day.
- User home directories are on NFS; adding "fsset nfs none" to /etc/gamin/gaminrc hasn't helped.

Version-Release number of selected component (if applicable):
- gamin-0.1.10-9.el6

How reproducible:
- unsure, have not been able to reproduce in house

Steps to Reproduce:
1. Behavior is random but frequent; no deterministic trigger is known.

Actual results:
- gam_server segfaults repeatedly (signal 11).

Expected results:
- gam_server does not crash.

Additional info:

Core was generated by `/usr/libexec/gam_server'.
Program terminated with signal 11, Segmentation fault.
#0  gam_queue_event (conn=0x62696c2f7273752f, reqno=2, event=-1, path=0x43eadc0 "/home/pier/e/", len=13)
    at gam_connection.c:632
632             g_assert (conn->eq);
(gdb) bt
#0  gam_queue_event (conn=0x62696c2f7273752f, reqno=2, event=-1, path=0x43eadc0 "/home/pier/e/", len=13)
    at gam_connection.c:632
#1  0x000000000040adf5 in ih_event_callback (event=0x4470ee0, sub=0x3674620) at inotify-helper.c:193
#2  0x000000000040c542 in ip_event_dispatch (event=0x4470ee0) at inotify-path.c:411
#3  ip_event_callback (event=0x4470ee0) at inotify-path.c:474
#4  0x000000000040b7d5 in ik_process_eq_callback (user_data=<value optimized out>) at inotify-kernel.c:658
#5  0x0000003fd9c4108b in g_timeout_dispatch (source=<value optimized out>, 
    callback=<value optimized out>, user_data=<value optimized out>) at gmain.c:3893
#6  0x0000003fd9c40642 in g_main_dispatch (context=0x21b1b10) at gmain.c:2441
#7  g_main_context_dispatch (context=0x21b1b10) at gmain.c:3014
#8  0x0000003fd9c44c98 in g_main_context_iterate (context=0x21b1b10, block=1, dispatch=1, 
    self=<value optimized out>) at gmain.c:3092
#9  0x0000003fd9c451a5 in g_main_loop_run (loop=0x21b2850) at gmain.c:3300
#10 0x0000000000404866 in main (argc=<value optimized out>, argv=<value optimized out>) at gam_server.c:647

Comment 3 Ondrej Holy 2016-10-17 11:29:55 UTC
This looks like the following Fedora bug; however, all the relevant upstream patches should already be part of this version:
https://bugzilla.redhat.com/show_bug.cgi?id=205731

They can also try adding "fsset nfs4 none".

Comment 4 Joe Wright 2016-10-17 14:17:58 UTC
We already configured /etc/gamin/gaminrc with those parameters prior to filing this bug. No success.

Comment 5 Ondrej Holy 2016-10-19 07:24:51 UTC
Polling might happen under certain conditions, but the inotify code should not be called if "/home/pier/e/" is an NFS mount and the gaminrc file contains "fsset nfs none" and "fsset nfs4 none"; see:
https://git.gnome.org/browse/gamin/tree/server/gam_server.c#n185

Maybe "/etc/gamin/gaminrc" is overridden by another gaminrc file, since it has the lowest priority. You can try "/etc/gamin/mandatory_gaminrc" instead, which has the highest priority; see:
https://people.gnome.org/~veillard/gamin/config.html

If that doesn't help, can you please provide the contents of all your gaminrc files (i.e. "/etc/gamin/gaminrc", "/etc/gamin/mandatory_gaminrc", "~/.gaminrc") and your "/etc/mtab" file?

Comment 6 Andrew Sydelko 2016-10-19 14:19:18 UTC
Created attachment 1212167 [details]
/etc/gamin/gaminrc file

Comment 7 Andrew Sydelko 2016-10-19 14:20:03 UTC
Created attachment 1212168 [details]
/etc/gamin/mandatory_gaminrc file

Comment 8 Andrew Sydelko 2016-10-19 14:20:24 UTC
Created attachment 1212169 [details]
/etc/mtab

Comment 9 Ondrej Holy 2016-10-20 11:10:35 UTC
Thanks for the data; it looks OK, so let's try something else. Can you please provide debugging output for the crashed gam_server? Unfortunately, it is a bit tricky:

1) mv /usr/libexec/gam_server /usr/libexec/gam_server.bak
2) create /usr/libexec/gam_server with the following content:
#!/bin/sh
export GAM_DEBUG=1
exec /usr/libexec/gam_server.bak --notimeout > /tmp/gamin-debug-$$.log 2>&1
3) chmod +x /usr/libexec/gam_server
4) pkill gam_server
5) try to reproduce the crash
6) provide /tmp/gamin-debug-<PID OF THE CRASHED GAM_SERVER>.log
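
The six steps above can be rehearsed as one script. This is a sketch, not part of the original report: GAM_DIR is a hypothetical variable that defaults to a scratch directory so the script can be dry-run without root; on the affected host it would be /usr/libexec.

```shell
#!/bin/sh
# Sketch of steps 1-4 above. GAM_DIR is a stand-in: it defaults to a scratch
# directory for a dry run; point it at /usr/libexec (as root) for real use.
GAM_DIR=${GAM_DIR:-$(mktemp -d)}

# Dry run only: fabricate a stand-in binary if none exists there.
[ -e "$GAM_DIR/gam_server" ] || printf '#!/bin/sh\n' > "$GAM_DIR/gam_server"

# Step 1: keep the original binary under a .bak name.
mv "$GAM_DIR/gam_server" "$GAM_DIR/gam_server.bak"

# Step 2: install a wrapper that enables gamin debug output.
cat > "$GAM_DIR/gam_server" <<'EOF'
#!/bin/sh
export GAM_DEBUG=1
exec /usr/libexec/gam_server.bak --notimeout > /tmp/gamin-debug-$$.log 2>&1
EOF

# Step 3: make the wrapper executable.
chmod +x "$GAM_DIR/gam_server"

# Step 4 (real host only): restart the running monitors.
# pkill gam_server
```

The wrapper logs to /tmp/gamin-debug-<PID>.log, matching the file requested in step 6.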

Comment 10 Andrew Sydelko 2016-10-20 13:42:04 UTC
Created attachment 1212518 [details]
gamin debug log, PID 5248

Comment 11 Andrew Sydelko 2016-10-20 13:42:33 UTC
Created attachment 1212520 [details]
gamin debug log, PID 5294

Comment 12 Andrew Sydelko 2016-10-20 13:44:12 UTC
Created attachment 1212521 [details]
kernel segfault messages, PID 5248, 5294, 5365, 5384, 6427

Comment 13 Andrew Sydelko 2016-10-20 13:46:02 UTC
I've attached two log files; I have three more that are pretty much identical. Let me know if you want those too.

--andy.

Comment 14 Ondrej Holy 2016-10-21 14:48:10 UTC
Thanks for the logs, the additional ones are not needed.

It seems that most of the requests are ignored, but occasionally some of them are not:

# This was not ignored:
MONDIR request: from /usr/libexec/gvfsd-trash, seq 1, type 2 options 10
/usr/libexec/gvfsd-trash listening for /package/sage
g_a_s: /package/sage using kernel monitoring 
Adding sub /package/sage to listener /usr/libexec/gvfsd-trash
# This was ignored:
MONDIR request: from /usr/libexec/gvfsd-trash, seq 2, type 2 options 10
/usr/libexec/gvfsd-trash listening for /package/sage
# Mount list update happens usually around those messages
Updating list of mounted filesystems

So it seems there is a race somewhere... does the list of NFS mounts change over time?

Just a note that this is obviously a side effect of Bug 725178: FAM is now used for monitoring NFS filesystems, which it was not before RHEL 6.8...

Comment 15 Andrew Sydelko 2016-10-21 15:40:26 UTC
Yes, quite often. /home and /package (and more) are automount spaces, with many NFS mounts possible when someone traverses into them.

I had actually asked about that bug change possibility on the initial support case but was shot down.

Comment 16 Ondrej Holy 2016-10-25 16:08:01 UTC
Hmm, the mtab changes might be the root cause of those crashes. However, I am still not able to reproduce it.

Does automount work correctly for you? It seems to me that once gam_server gets a monitoring request, it is no longer possible to unmount the NFS share because the "device is busy".

I had not tested this on RHEL 6 before; I will run new tests on RHEL 6...

Comment 17 Ondrej Holy 2016-10-27 10:36:11 UTC
I'm finally able to reproduce the crashes on RHEL 6.8. The following seems to be enough:

1/ configure gamin:

/etc/gamin/mandatory_gaminrc:
fsset nfs none
fsset nfs4 none

pkill gam_server

2/ configure autofs:

/etc/auto.master:
/misc   /etc/auto.misc  --timeout=5

/etc/auto.misc:
m0   -fstype=nfs   ADDRESS
m1   -fstype=nfs   ADDRESS
m2   -fstype=nfs   ADDRESS
m3   -fstype=nfs   ADDRESS
m4   -fstype=nfs   ADDRESS

service autofs reload

3/ run the following:

for i in $(seq 0 4); do ls /misc/m$i; sleep 1; done
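
The configuration in steps 1/ and 2/ can be sketched as a script that generates the files. This is illustrative only: CONF_ROOT is a hypothetical prefix defaulting to a scratch directory so the output can be inspected without touching a live host, and ADDRESS stays as the placeholder from this comment.

```shell
#!/bin/sh
# Sketch of steps 1/ and 2/ above. CONF_ROOT is a stand-in prefix that defaults
# to a scratch directory; run with CONF_ROOT=/ as root to write the real files
# (then: pkill gam_server; service autofs reload).
CONF_ROOT=${CONF_ROOT:-$(mktemp -d)}
mkdir -p "$CONF_ROOT/etc/gamin"

cat > "$CONF_ROOT/etc/gamin/mandatory_gaminrc" <<'EOF'
fsset nfs none
fsset nfs4 none
EOF

cat > "$CONF_ROOT/etc/auto.master" <<'EOF'
/misc   /etc/auto.misc  --timeout=5
EOF

# ADDRESS is the placeholder from the comment; substitute a real NFS export.
for i in 0 1 2 3 4; do
    printf 'm%s   -fstype=nfs   ADDRESS\n' "$i"
done > "$CONF_ROOT/etc/auto.misc"
```

The short --timeout=5 matters: it makes autofs expire the mounts between the ls calls, forcing the mount/unmount churn that triggers the race.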

Comment 18 Ondrej Holy 2016-10-27 10:38:13 UTC
Created attachment 1214570 [details]
valgrind

The related valgrind output is attached...

Comment 19 Ondrej Holy 2016-10-27 11:12:33 UTC
Just a note that the steps from Comment 17 work reliably only if the gam_server binary is replaced by the script (Comment 9) and the binary is spawned under valgrind:

exec valgrind --log-file=/tmp/gamin-valgrind-$$.log --leak-check=full --track-origins=yes /usr/libexec/gam_server.bak --notimeout > /tmp/gamin-debug-$$.log 2>&1

More effort (e.g. more mountpoints) is needed to reproduce this with an unmodified gam_server.

Comment 20 Ondrej Holy 2016-10-27 15:41:19 UTC
The crucial problem is that the NFS mountpoints are sometimes handled as a local filesystem (when unmounted) and sometimes as NFS (when mounted). Consequently, polling or no monitoring is used for a directory at one time and inotify at another. E.g. a client subscribes to inotify monitoring, but the subscriptions are never removed. I am looking for a way to deal with this...

I see the following workarounds for autofs mounts (polling is not possible, because it blocks unmounting):

1) Always use inotify - This should offer more or less the same behavior as before RHEL 6.8 (at least for GLib-based applications). You should get notifications for your own changes on NFS, but not for changes made over the network. I think this is the best we can do on NFS if autofs is used:

fsset nfs kernel

2) Always use none - We can disable monitoring through gamin entirely, but I don't think that is a good idea if home dirs are on NFS. This should not affect the local filesystem, because GLib uses its own inotify monitor instead (at least for GLib-based applications):

fsset ext4 none
fsset nfs none

(This example presumes the local filesystem is ext4.)

Let me know if this helps.
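
The race described above can be illustrated with a toy script. This is not gamin code: pick_backend is a hypothetical stand-in for gamin's per-filesystem method lookup, under the assumption that the method is re-resolved from the filesystem type seen at each request.

```shell
#!/bin/sh
# Toy model of the race: the monitoring method for a path is chosen from the
# filesystem type seen at request time. An autofs-managed path reads as the
# local fs type while the share is unmounted and as nfs once mounted, so an
# inotify watch added in the first state is never removed in the second.

pick_backend() {            # pick_backend FSTYPE -> monitoring method
    case "$1" in
        nfs|nfs4) echo none ;;    # per "fsset nfs none" / "fsset nfs4 none"
        *)        echo kernel ;;  # gamin default: kernel (inotify)
    esac
}

watches=0

# Request 1: the automount is expired, so /misc/m0 looks like local ext4.
[ "$(pick_backend ext4)" = kernel ] && watches=$((watches + 1))

# autofs then mounts the share; request 2 is correctly ignored...
[ "$(pick_backend nfs)" = none ] || watches=$((watches + 1))

# ...but the inotify watch from request 1 is still active.
echo "stale inotify watches: $watches"
```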

Comment 21 Andrew Sydelko 2016-10-27 19:20:02 UTC
I tried

fsset nfs kernel

and the load shot from 4 to 60+; every user's gam_server was suddenly taking lots of CPU going through the top-level directories on every NFS mount.

Using:

fsset ext4 none
fsset nfs none

seemed to be better initially. It seems to go through cycles where it will cause the load to swing to 30+ and then come back down again, maybe every 5-10 minutes. I can't tell if it's time based or based on something like automount mounts coming or going.

Neither of these options is a whole lot better than what we've seen so far.

Comment 22 Ondrej Holy 2016-11-02 17:20:30 UTC
(In reply to Andrew Sydelko from comment #21)
> I tried
> 
> fsset nfs kernel
> 
> and the load shot from 4 to 60+, gam_server for every user was suddenly
> taking lots of CPU trying to go through top level directories on every NFS
> mount

I suppose that GLib file monitoring (which was used before RHEL 6.8) is less demanding; however, the load is distributed across several processes...

> Using:
> 
> fsset ext4 none
> fsset nfs none
> 
> seemed to be better initially. It seems to go through cycles where it will
> cause the load to swing to 30+ and then come back down again, maybe every
> 5-10 minutes. I can't tell if it's time based or based on something like
> automount mounts coming or going.

I suppose that this is caused by automounts, but you can provide gamin-debug-?.log to see what is happening...
 
> Neither of these options are a whole lot better than what we've seen so far.

I am looking for a fix for the crashes; however, the default gamin behavior doesn't help you, because polling prevents autofs unmounts and doesn't have a lower load...

Maybe GIO_USE_FILE_MONITOR could be backported to avoid gamin usage and reduce the load...

Comment 23 Andrew Sydelko 2016-11-04 19:50:37 UTC
I don't suppose I can downgrade glib to the RHEL 6.7 version where these problems didn't exist?

Comment 24 Ondrej Holy 2016-11-10 15:37:26 UTC
You can probably do that as a temporary workaround, but be careful! The following document shows the official way:
https://access.redhat.com/solutions/29617

I've manually downgraded to glib2-2.28.8-4.el6 and it seems to work properly.

Comment 25 Ondrej Holy 2016-11-11 13:13:16 UTC
Another workaround is to just remove/rename the following library:
/usr/lib/gio/modules/libgiofam.so

This should remove FAM support from GLib, so it should work the same as before, but with the latest GLib... this is similar to what could be achieved with GIO_USE_FILE_MONITOR if it were backported.
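
The rename can be rehearsed as below. MODDIR is a hypothetical variable defaulting to a scratch copy; on the affected host it is /usr/lib/gio/modules (run as root).

```shell
#!/bin/sh
# Sketch of the workaround. MODDIR is a stand-in that defaults to a scratch
# directory; set MODDIR=/usr/lib/gio/modules (as root) on the affected host.
MODDIR=${MODDIR:-$(mktemp -d)}

# Dry run only: fabricate a stand-in module if none exists there.
[ -e "$MODDIR/libgiofam.so" ] || : > "$MODDIR/libgiofam.so"

# Rename rather than delete, so the change is easy to revert.
mv "$MODDIR/libgiofam.so" "$MODDIR/libgiofam.so.disabled"
```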

Comment 26 Ondrej Holy 2016-11-11 13:14:01 UTC
I am looking for a way to fix the crashes; however, if you insist on the previous behavior, you should file another bug report against GLib to backport the GIO_USE_FILE_MONITOR env variable, or provide another solution...

Comment 27 Ondrej Holy 2016-11-14 09:28:50 UTC
Colin, can you please take a look at this?

Comment 28 Ondrej Holy 2016-11-30 09:12:17 UTC
Bug 1399726 has been filed to revert to the previous GLib behavior.

Comment 29 Ondrej Holy 2016-12-19 07:39:56 UTC
(In reply to Ondrej Holy from comment #20)
> (snip)
> 
> 2) Always use none - We can disable monitoring over gamin at all, but I
> don't think it is good idea if home dirs are on nfs. This should not affect
> local filesystem, because glib uses its own inotify monitor instead (at
> least for glib based applications):
> 
> fsset ext4 none
> fsset nfs none

It should be enough to use:
fsset autofs none
fsset nfs none

See: https://bugzilla.redhat.com/show_bug.cgi?id=1399726#c7

Comment 30 Ondrej Holy 2017-01-05 13:53:42 UTC
*** Bug 1388909 has been marked as a duplicate of this bug. ***

Comment 37 Ondrej Holy 2017-03-22 08:04:29 UTC
Let me know if this is still an issue with RHEL 6.9 (or patch from Bug 1399726).

Comment 38 Greg Matthews 2017-03-31 09:22:58 UTC
Yes, still an issue with 6.9.

Comment 39 Ondrej Holy 2017-04-04 07:32:30 UTC
Do you use gamin for something explicitly, or do you use some special software or environment?

Doesn't the gamin configuration from Comment 29 help?

I wonder whether some other project started using gamin in RHEL 6.8, because this wasn't reported before RHEL 6.8...

Comment 40 Greg Matthews 2017-04-04 09:40:00 UTC
Thanks for the reply. I think I made a mistake in comment 38. The host I saw it on didn't yet have the glib2-2.28.8-9 version. I've just rectified that and am monitoring the logs.

However, the configuration in comment 29 does not prevent the gam_server crashes.

G

Comment 41 Ondrej Holy 2017-04-10 10:41:53 UTC
(In reply to Greg Matthews from comment #40)
> thanks for the reply. I think I made a mistake in comment 38. The host I saw
> it on didn't yet have the glib2-2.28.8-9 version. I've just rectified that
> and am monitoring the logs.

Ok, thanks for the comment.

> however, the configuration in comment 29 does not prevent the gam_server
> crashes.

Hmm, can you please provide output from "mount" command?

Comment 42 Greg Matthews 2017-04-10 13:05:02 UTC
I'm starting to have trouble finding hosts that still display this behaviour, as we have rolled out the recent glib2 packages. However, the workstations that have not been rebooted since the rollout still show these crashes; this is the output from mount on one of them:

[qqs43472@ws148 ~]$ mount
/dev/mapper/vg.1-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/vg.1-lv_scratch on /scratch type ext4 (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
cs03r-sc-nas-svm02.diamond.ac.uk:/exports/dls_sw/epics on /dls_sw/epics type nfs (rw,rsize=8192,wsize=8192,intr,soft,sloppy,addr=172.23.100.71)
cs04r-nas01-02.diamond.ac.uk:/vol/staff_home/staff-home/pck07289 on /home/pck07289 type nfs (rw,nosuid,rsize=32768,wsize=32768,acl,intr,soft,nfsvers=3,sloppy,addr=172.23.130.7)
gvfs-fuse-daemon on /home/pck07289/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=pck07289)
\\\\diamsanserv01.diamond.ac.uk\\pck07289$ on /scratch/pck07289/U type cifs (rw)
dls-sw.diamond.ac.uk:/srv/software/apps/apps on /dls_sw/apps type nfs (rw,rsize=8192,wsize=8192,intr,soft,nfsvers=3,sloppy,addr=172.23.136.33)
cs03r-sc-nas-svm02.diamond.ac.uk:/exports/dls_sw/prod on /dls_sw/prod type nfs (rw,rsize=8192,wsize=8192,intr,soft,sloppy,addr=172.23.100.71)
cs04r-nas01-02.diamond.ac.uk:/vol/technical/technical/sysadmin/linux on /home/sys-admin type nfs (rw,nosuid,nfsvers=3,acl,rsize=32768,wsize=32768,intr,soft,sloppy,addr=172.23.130.7)
\\\\diamsanserv01.diamond.ac.uk\\pck07289$ on /scratch/pck07289/U type cifs (rw)
cs04r-nas01-02.diamond.ac.uk:/vol/staff_home/staff-home/qqs43472 on /home/qqs43472 type nfs (rw,nosuid,rsize=32768,wsize=32768,acl,intr,soft,nfsvers=3,sloppy,addr=172.23.130.7)
cs03r-sc-nas-svm02.diamond.ac.uk:/exports/dls_sw/etc on /dls_sw/etc type nfs (rw,rsize=8192,wsize=8192,intr,soft,sloppy,addr=172.23.100.71)
cs03r-sc-nas-svm02.diamond.ac.uk:/exports/dls/ops_data on /dls/ops-data type nfs (rw,rsize=8192,wsize=8192,intr,soft,nfsvers=3,sloppy,addr=172.23.100.71)
mx-scratch.diamond.ac.uk:/mnt/lustre03/mx-scratch on /dls/mx-scratch type nfs (rw,rsize=32768,wsize=32768,acl,intr,soft,nfsvers=3,sloppy,addr=172.23.142.217)
dls-attic:/srv/attic on /dls/attic type nfs (rw,rsize=32768,wsize=32768,acl,intr,soft,nfsvers=3,sloppy,addr=172.23.150.3)
cs04r-nas01-02.diamond.ac.uk:/vol/staff_home/staff-home/aak24408 on /home/aak24408 type nfs (rw,nosuid,rsize=32768,wsize=32768,acl,intr,soft,nfsvers=3,sloppy,addr=172.23.130.7)
i24-storage.diamond.ac.uk:/mnt/gpfs02/i24 on /dls/i24 type nfs (rw,rsize=32768,wsize=32768,intr,soft,acl,nfsvers=3,sloppy,addr=172.23.154.11)
i04-storage.diamond.ac.uk:/mnt/gpfs02/i04 on /dls/i04 type nfs (rw,rsize=32768,wsize=32768,intr,soft,acl,nfsvers=3,sloppy,addr=172.23.154.11)
i04-1-storage.diamond.ac.uk:/mnt/gpfs02/i04-1 on /dls/i04-1 type nfs (rw,rsize=32768,wsize=32768,intr,soft,acl,nfsvers=3,sloppy,addr=172.23.154.11)
p45-storage.diamond.ac.uk:/mnt/gpfs02/p45 on /dls/p45 type nfs (rw,rsize=32768,wsize=32768,intr,soft,acl,nfsvers=3,sloppy,addr=172.23.154.11)
cs04r-nas01-02.diamond.ac.uk:/vol/science on /dls/science type nfs (rw,nosuid,nfsvers=3,acl,rsize=32768,wsize=32768,intr,soft,sloppy,addr=172.23.130.7)
dls-sw.diamond.ac.uk:/srv/software/apps/dasc on /dls_sw/dasc type nfs (rw,rsize=8192,wsize=8192,intr,soft,nfsvers=3,sloppy,addr=172.23.136.33)
cs03r-sc-nas-svm02.diamond.ac.uk:/exports/dls_sw/work on /dls_sw/work type nfs (rw,rsize=32768,wsize=32768,intr,soft,nfsvers=3,sloppy,addr=172.23.100.71)
cs04r-nas01-02.diamond.ac.uk:/vol/staff_home/staff-home/zva49823 on /home/zva49823 type nfs (rw,nosuid,rsize=32768,wsize=32768,acl,intr,soft,nfsvers=3,sloppy,addr=172.23.130.7)
i02-storage.diamond.ac.uk:/mnt/gpfs02/i02 on /dls/i02 type nfs (rw,rsize=32768,wsize=32768,intr,soft,acl,nfsvers=3,sloppy,addr=172.23.154.11)
i03-storage.diamond.ac.uk:/mnt/gpfs02/i03 on /dls/i03 type nfs (rw,rsize=32768,wsize=32768,intr,soft,acl,nfsvers=3,sloppy,addr=172.23.154.11)
cs04r-sc-vserv-115:/mnt/lustre03/staging on /dls/staging type nfs (rw,rsize=32768,wsize=32768,intr,soft,acl,nfsvers=3,sloppy,addr=172.23.142.30)
cs04r-nas01-02.diamond.ac.uk:/vol/staff_home/staff-home/xfz42935 on /home/xfz42935 type nfs (rw,nosuid,rsize=32768,wsize=32768,acl,intr,soft,nfsvers=3,sloppy,addr=172.23.130.7)
cs04r-nas01-02.diamond.ac.uk:/vol/staff_home/staff-home/ktc05079 on /home/ktc05079 type nfs (rw,nosuid,rsize=32768,wsize=32768,acl,intr,soft,nfsvers=3,sloppy,addr=172.23.130.7)
m02-storage.diamond.ac.uk:/mnt/gpfs02/m02 on /dls/m02 type nfs (rw,rsize=32768,wsize=32768,intr,soft,acl,nfsvers=3,sloppy,addr=172.23.180.70)
i15-storage.diamond.ac.uk:/mnt/gpfs02/i15 on /dls/i15 type nfs (rw,rsize=32768,wsize=32768,intr,soft,acl,nfsvers=3,sloppy,addr=172.23.154.11)
m05-storage.diamond.ac.uk:/mnt/gpfs02/m05 on /dls/m05 type nfs (rw,rsize=32768,wsize=32768,intr,soft,acl,nfsvers=3,sloppy,addr=172.23.185.70)
cs04r-nas01-02.diamond.ac.uk:/vol/staff_home/staff-home/fer45166 on /home/fer45166 type nfs (rw,nosuid,rsize=32768,wsize=32768,acl,intr,soft,nfsvers=3,sloppy,addr=172.23.130.7)
cs04r-nas01-02.diamond.ac.uk:/vol/staff_home/staff-home/kdf51254 on /home/kdf51254 type nfs (rw,nosuid,rsize=32768,wsize=32768,acl,intr,soft,nfsvers=3,sloppy,addr=172.23.130.7)
cs04r-nas01-02.diamond.ac.uk:/vol/dls_tmp/dls_tmp on /dls/tmp type nfs (rw,nosuid,nfsvers=3,acl,rsize=32768,wsize=32768,intr,soft,sloppy,addr=172.23.130.7)
dls-bl-storage.diamond.ac.uk:/srv/bl-data/i06 on /dls/i06 type nfs (rw,rsize=32768,wsize=32768,acl,intr,soft,nfsvers=3,sloppy,addr=172.23.142.13)


[qqs43472@ws148 ~]$ cat /etc/gamin/gaminrc 
fsset nfs none
fsset autofs none

Comment 43 Ondrej Holy 2017-04-11 07:10:34 UTC
Thanks! Ah, there isn't any mount of type autofs, so the suggested workaround can't work for you. But you use autofs, don't you? It seems the autofs mount is present in some configurations and absent in others. It should work in your case if you add "fsset ext4 none" to your gaminrc file...

Comment 44 Greg Matthews 2017-04-11 08:39:05 UTC
Yes, all of those NFS mounts come from autofs. We have no NFS mounts in /etc/fstab for workstations.

If I set "fsset ext4 none", then presumably gamin is basically disabled completely, right?

Comment 45 Ondrej Holy 2017-04-11 12:29:00 UTC
If you set "fsset ext4 none" and "fsset nfs none", then monitoring in gamin is completely disabled in your case (given the output from the mount command). Monitoring through GLib itself, for example, would still work on ext4, but not on NFS...