Bug 998061 - gvfsd-trash is preventing autofs mounts from being expired when fs perms are o-rx
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: gvfs
6.5
x86_64 Linux
medium Severity medium
: rc
: ---
Assigned To: Ondrej Holy
Desktop QE
:
: 758466 1145651 (view as bug list)
Depends On:
Blocks: 1075802
Reported: 2013-08-16 18:33 EDT by agilmore2
Modified: 2015-02-19 07:27 EST (History)
14 users (show)

See Also:
Fixed In Version: gvfs-1.4.3-19.el6
Doc Type: Bug Fix
Doc Text:
Cause: The GVfs trash implementation creates file monitors for all mount points, regardless of access permissions. When permissions are insufficient, the monitor implementation falls back to polling the files roughly every 4 seconds. Consequence: AutoFS mounts are normally unmounted after a period of inactivity, but the file monitor prevents AutoFS mounts from expiring. It also causes high system load when there are many AutoFS mounts. Fix: Mount points without read permission are no longer monitored. Result: File monitors no longer poll mount points without read access, so AutoFS mounts can expire.
Story Points: ---
Clone Of:
: 1147973 (view as bug list)
Environment:
Last Closed: 2015-02-19 07:27:07 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
GNOME Desktop 737371 None None None Never
Red Hat Knowledge Base (Solution) 473033 None None None Never

Description agilmore2 2013-08-16 18:33:26 EDT
Description of problem:
Autofs never expires mount points when the root of the mounted filesystem has o-rx permissions and the directory is owned by root:disk.
There do not appear to be any SELinux alerts, and automount is running as the default user, which is root.

Version-Release number of selected component (if applicable):
autofs-5.0.5-74.el6_4

How reproducible:
every time

Steps to Reproduce:
1. automount partition by ls /mnt/rdx_drives/<UUID>
2. wait for expiry, expires
3. automount again
4. chmod o-rx /mnt/rdx_drives/<UUID>/
5. manually umount /mnt/rdx_drives/<UUID>
6. automount again
7. wait for expiry, does NOT expire

Actual results:
does NOT expire mounts

Expected results:
expiry and umount

Additional info: 

/etc/auto.master:
# only uncommented line:
/mnt/rdx_drives		/etc/auto.rdx	--timeout=30

/etc/auto.rdx:
# /etc/auto.rdx
*          -fstype=auto,rw,_netdev       :/dev/disk/by-uuid/&
# 

Shell script to reproduce on my machine:

uuid=ed52cab7-f9f2-4ca6-89be-a76ee6b56df2
# trigger the automount, then remove world read/execute on the mount root
sudo ls -la /mnt/rdx_drives/$uuid
sudo chmod o-rx /mnt/rdx_drives/$uuid
# unmount manually and trigger the automount again with restricted perms
sudo umount /mnt/rdx_drives/$uuid
sudo ls -la /mnt/rdx_drives/$uuid
# with --timeout=30, the mount should be gone after 45 seconds -- it is not
date
mount | grep $uuid
sleep 45
date
mount | grep $uuid
# restore permissions and repeat; this time the mount expires as expected
sudo umount /mnt/rdx_drives/$uuid
sudo ls -la /mnt/rdx_drives/$uuid
sudo chmod o+rx /mnt/rdx_drives/$uuid
sudo umount /mnt/rdx_drives/$uuid
sudo ls -la /mnt/rdx_drives/$uuid
date
mount | grep $uuid
sleep 45
date
mount | grep $uuid
Comment 3 Scott Mayhew 2013-09-12 13:00:33 EDT
[root@localhost ~]# stap -d /lib64/libc-2.12.so -d /lib64/libgio-2.0.so.0.2200.5 -d /lib64/libglib-2.0.so.0.2200.5 -d /usr/libexec/gvfsd-trash -d /lib64/libgobject-2.0.so.0.2200.5 -e 'probe kernel.function("sys_inotify_add_watch").return { if (execname() == "gvfsd-trash") { printf("%s %s %s %d %s\n", tz_ctime(gettimeofday_s()), execname(), probefunc(), $return, errno_str($return)); print_ubacktrace(); printf("\n"); } }'

// The first two entries here are from the first time gvfsd-trash tries to add the inotify watch:

Thu Sep 12 10:41:16 2013 EDT gvfsd-trash sys_inotify_add_watch -13 EACCES
 0x354b8e8fe7 : inotify_add_watch+0x7/0x30 [/lib64/libc-2.12.so]
 0x3551070d8c : _ik_watch+0x2c/0xb0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551071b72 : _ip_start_watching+0x132/0x1e0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551072431 : _ih_sub_add+0x51/0x100 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551073051 : g_inotify_directory_monitor_constructor+0x71/0x120 [/lib64/libgio-2.0.so.0.2200.5]
 0x354fc11d81 : g_object_newv+0x291/0xae0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc12809 : g_object_new_valist+0x239/0x3b0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc12a4c : g_object_new+0xcc/0xe0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x3551066461 : _g_local_directory_monitor_new+0x71/0xe0 [/lib64/libgio-2.0.so.0.2200.5]
 0x40bd41 : dir_watch_recursive_create+0x51/0xc0 [/usr/libexec/gvfsd-trash]
 0x40bb47 : dir_watch_new+0xe7/0x110 [/usr/libexec/gvfsd-trash]
 0x40bb0c : dir_watch_new+0xac/0x110 [/usr/libexec/gvfsd-trash]
 0x40bb0c : dir_watch_new+0xac/0x110 [/usr/libexec/gvfsd-trash]
 0x40bb0c : dir_watch_new+0xac/0x110 [/usr/libexec/gvfsd-trash]
 0x40b4af : trash_dir_new+0x16f/0x180 [/usr/libexec/gvfsd-trash]
 0x40abe4 : trash_watcher_remount+0x154/0x1e0 [/usr/libexec/gvfsd-trash]
 0x354fc0bb3e : g_closure_invoke+0x15e/0x1e0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc20e23 : signal_emit_unlocked_R+0xda3/0x1840 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc220af : g_signal_emit_valist+0x7ef/0x910 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc225f3 : g_signal_emit+0x83/0x90 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc0bb3e : g_closure_invoke+0x15e/0x1e0 [/lib64/libgobject-2.0.so.0.2200.5]

Thu Sep 12 10:41:16 2013 EDT gvfsd-trash sys_inotify_add_watch -13 EACCES
 0x354b8e8fe7 : inotify_add_watch+0x7/0x30 [/lib64/libc-2.12.so]
 0x3551070d8c : _ik_watch+0x2c/0xb0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551071b72 : _ip_start_watching+0x132/0x1e0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551072431 : _ih_sub_add+0x51/0x100 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551073051 : g_inotify_directory_monitor_constructor+0x71/0x120 [/lib64/libgio-2.0.so.0.2200.5]
 0x354fc11d81 : g_object_newv+0x291/0xae0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc12809 : g_object_new_valist+0x239/0x3b0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc12a4c : g_object_new+0xcc/0xe0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x3551066461 : _g_local_directory_monitor_new+0x71/0xe0 [/lib64/libgio-2.0.so.0.2200.5]
 0x40bd41 : dir_watch_recursive_create+0x51/0xc0 [/usr/libexec/gvfsd-trash]
 0x40bb47 : dir_watch_new+0xe7/0x110 [/usr/libexec/gvfsd-trash]
 0x40bb0c : dir_watch_new+0xac/0x110 [/usr/libexec/gvfsd-trash]
 0x40bb0c : dir_watch_new+0xac/0x110 [/usr/libexec/gvfsd-trash]
 0x40b4af : trash_dir_new+0x16f/0x180 [/usr/libexec/gvfsd-trash]
 0x40ac0a : trash_watcher_remount+0x17a/0x1e0 [/usr/libexec/gvfsd-trash]
 0x354fc0bb3e : g_closure_invoke+0x15e/0x1e0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc20e23 : signal_emit_unlocked_R+0xda3/0x1840 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc220af : g_signal_emit_valist+0x7ef/0x910 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc225f3 : g_signal_emit+0x83/0x90 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc0bb3e : g_closure_invoke+0x15e/0x1e0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc20e23 : signal_emit_unlocked_R+0xda3/0x1840 [/lib64/libgobject-2.0.so.0.2200.5]

// After that, we'll see the following every 4 seconds

Thu Sep 12 10:41:19 2013 EDT gvfsd-trash sys_inotify_add_watch -13 EACCES
 0x354b8e8fe7 : inotify_add_watch+0x7/0x30 [/lib64/libc-2.12.so]
 0x3551070d8c : _ik_watch+0x2c/0xb0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551071b72 : _ip_start_watching+0x132/0x1e0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551072118 : im_scan_missing+0x88/0x250 [/lib64/libgio-2.0.so.0.2200.5]
 0x354d03961b : g_timeout_dispatch+0x1b/0x80 [/lib64/libglib-2.0.so.0.2200.5]
 0x354d038f0e : g_main_context_dispatch+0x22e/0x4b0 [/lib64/libglib-2.0.so.0.2200.5]
 0x354d03c938 : g_main_context_iterate+0x518/0x5a0 [/lib64/libglib-2.0.so.0.2200.5]
 0x354d03cd55 : g_main_loop_run+0x195/0x370 [/lib64/libglib-2.0.so.0.2200.5]
 0x409bee : daemon_main+0x18e/0x280 [/usr/libexec/gvfsd-trash]
 0x409e7c : main+0x4c/0x60 [/usr/libexec/gvfsd-trash]
 0x354b81ecdd : __libc_start_main+0xfd/0x1d0 [/lib64/libc-2.12.so]
 0x408099 : _start+0x29/0x2c [/usr/libexec/gvfsd-trash]

Thu Sep 12 10:41:19 2013 EDT gvfsd-trash sys_inotify_add_watch -13 EACCES
 0x354b8e8fe7 : inotify_add_watch+0x7/0x30 [/lib64/libc-2.12.so]
 0x3551070d8c : _ik_watch+0x2c/0xb0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551071b72 : _ip_start_watching+0x132/0x1e0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551072118 : im_scan_missing+0x88/0x250 [/lib64/libgio-2.0.so.0.2200.5]
 0x354d03961b : g_timeout_dispatch+0x1b/0x80 [/lib64/libglib-2.0.so.0.2200.5]
 0x354d038f0e : g_main_context_dispatch+0x22e/0x4b0 [/lib64/libglib-2.0.so.0.2200.5]
 0x354d03c938 : g_main_context_iterate+0x518/0x5a0 [/lib64/libglib-2.0.so.0.2200.5]
 0x354d03cd55 : g_main_loop_run+0x195/0x370 [/lib64/libglib-2.0.so.0.2200.5]
 0x409bee : daemon_main+0x18e/0x280 [/usr/libexec/gvfsd-trash]
 0x409e7c : main+0x4c/0x60 [/usr/libexec/gvfsd-trash]
 0x354b81ecdd : __libc_start_main+0xfd/0x1d0 [/lib64/libc-2.12.so]
 0x408099 : _start+0x29/0x2c [/usr/libexec/gvfsd-trash]
...

This is where we call inotify_add_watch:

gint32
_ik_watch (const char *path,
           guint32     mask,
           int        *err)
{
...
  wd = inotify_add_watch (inotify_instance_fd, path, mask);

  if (wd < 0)
    {
      int e = errno;
      /* FIXME: debug msg failed to add watch */
      if (err)
        *err = e; <----- Here's where we store the errno (in our case EACCES)
      return wd;
    }
...

But unfortunately the caller doesn't do anything with the error number:

_ip_start_watching (inotify_sub *sub)
{
  gint32 wd;
  int err; <----- Here's where we declared the variable that the errno is stored in
...
  wd = _ik_watch (sub->dirname, IP_INOTIFY_MASK|IN_ONLYDIR, &err);
  if (wd < 0)
    {
      IP_W ("Failed\n");
      return FALSE; <----- Here's where we return when we failed to add the inotify watch.  The errno is effectively tossed out at this point.
    }
...

Working our way back up the stack:

gboolean
_ih_sub_add (inotify_sub *sub)
{
...
  if (!_ip_start_watching (sub))
    _im_add (sub); <---- When we fail to add the inotify watch, we call this function which adds the directory to the missing_sub_list
...

void
_im_add (inotify_sub *sub)
{
...
  missing_sub_list = g_list_prepend (missing_sub_list, sub);
...
  if (!scan_missing_running)
    {
      scan_missing_running = TRUE;
      g_timeout_add_seconds (SCAN_MISSING_TIME, im_scan_missing, NULL); <----- This sets up a timer that runs im_scan_missing every 4 seconds
    }
}

SCAN_MISSING_TIME is 4 seconds:

#define SCAN_MISSING_TIME 4 /* 1/4 Hz */

When im_scan_missing runs, it walks the missing_sub_list and calls _ip_start_watching on each entry in the list:

static gboolean
im_scan_missing (gpointer user_data)
{
...
  for (l = missing_sub_list; l; l = l->next)
    {
...
      not_m = _ip_start_watching (sub); <----- So we're effectively right back where we started.
...
Comment 4 Scott Mayhew 2013-09-12 13:02:25 EDT
It seems to me there are two possible solutions here

1. Propagate the error back up to _ih_sub_add and don't call _im_add on certain errors (like EACCES).  This would require modifications to libgio, which is part of the glib2 package.

2. Have gvfsd-trash check the permissions on a mountpoint before trying to add an inotify watch on it, and bail out if it doesn't have the appropriate permissions.  According to inotify_add_watch(2), "the caller must have read permission for this file"... so a check such as the following could be added to trash_dir_new:

---8<---
  if (watching && !access(mount_point, R_OK))
    dir->watch = dir_watch_new (dir->directory,
                                dir->topdir,
                                trash_dir_created,
                                trash_dir_check,
                                trash_dir_destroyed,
                                dir);
  else
    dir->watch = NULL;
---8<---

Either way, it looks like autofs is the wrong component for this bug.  Changing to gvfs.
Comment 5 RHEL Product and Program Management 2013-10-13 19:15:38 EDT
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unable to address this
request at this time.

Red Hat invites you to ask your support representative to
propose this request, if appropriate, in the next release of
Red Hat Enterprise Linux.
Comment 7 Ondrej Holy 2014-09-25 11:50:24 EDT
*** Bug 1145651 has been marked as a duplicate of this bug. ***
Comment 8 Han Boetes 2014-09-26 04:31:45 EDT
The patch suggested here: https://bugzilla.gnome.org/show_bug.cgi?id=737371 appears to help. We're test-driving it now, and stracing the running gvfsd-trash process no longer shows the typical attempts to poll all the unreadable mountpoints.
Comment 9 Ondrej Holy 2014-10-02 06:35:51 EDT
*** Bug 758466 has been marked as a duplicate of this bug. ***
Comment 17 errata-xmlrpc 2015-02-19 07:27:07 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0237.html
