Bug 998061 - gvfsd-trash is preventing autofs mounts from being expired when fs perms are o-rx
Summary: gvfsd-trash is preventing autofs mounts from being expired when fs perms are o-rx
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: gvfs
Version: 6.5
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Target Release: ---
Assignee: Ondrej Holy
QA Contact: Desktop QE
URL:
Whiteboard:
Duplicates: 758466 1145651
Depends On:
Blocks: 1075802
 
Reported: 2013-08-16 22:33 UTC by agilmore2
Modified: 2019-11-14 06:22 UTC
CC List: 14 users

Fixed In Version: gvfs-1.4.3-19.el6
Doc Type: Bug Fix
Doc Text:
Cause: The GVfs trash implementation creates file monitors for all mount points, without regard to access permissions. When read permission is missing, the monitor implementation falls back to polling the files roughly every 4 seconds. Consequence: AutoFS mounts are ordinarily unmounted when they have not been used for some time, but the file monitor prevents AutoFS mounts from expiring. It also causes high system load when there are many AutoFS mounts. Fix: Mount points without read permission are no longer monitored. Result: File monitors no longer poll mount points without read access, so AutoFS mounts can expire.
Clone Of:
Clones: 1147973
Environment:
Last Closed: 2015-02-19 12:27:07 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
GNOME Bugzilla 737371 0 Normal RESOLVED gvfsd-trash polls mounts without read access 2020-09-09 11:17:10 UTC
Red Hat Knowledge Base (Solution) 473033 0 None None None Never
Red Hat Product Errata RHBA-2015:0237 0 normal SHIPPED_LIVE gvfs bug fix update 2015-02-19 17:26:34 UTC

Description agilmore2 2013-08-16 22:33:26 UTC
Description of problem:
Autofs never expires mount points when the root of the mounted filesystem has o-rx permissions, and ownership of that directory is root:disk.
There do not appear to be any SELinux alerts, and automount is running as the default user, root.

Version-Release number of selected component (if applicable):
autofs-5.0.5-74.el6_4

How reproducible:
every time

Steps to Reproduce:
1. automount partition by ls /mnt/rdx_drives/<UUID>
2. wait for expiry, expires
3. automount again
4. chmod o-rx /mnt/rdx_drives/<UUID>/
5. manually umount /mnt/rdx_drives/<UUID>
6. automount again
7. wait for expiry, does NOT expire

Actual results:
does NOT expire mounts

Expected results:
expiry and umount

Additional info: 

/etc/auto.master:
# only uncommented line:
/mnt/rdx_drives		/etc/auto.rdx	--timeout=30

/etc/auto.rdx:
# /etc/auto.rdx
*          -fstype=auto,rw,_netdev       :/dev/disk/by-uuid/&
# 

Shell script to reproduce on my machine:

uuid=ed52cab7-f9f2-4ca6-89be-a76ee6b56df2
sudo ls -la /mnt/rdx_drives/$uuid
sudo chmod o-rx /mnt/rdx_drives/$uuid
sudo umount /mnt/rdx_drives/$uuid
sudo ls -la /mnt/rdx_drives/$uuid
date
mount | grep $uuid
sleep 45
date
mount | grep $uuid
sudo umount /mnt/rdx_drives/$uuid
sudo ls -la /mnt/rdx_drives/$uuid
sudo chmod o+rx /mnt/rdx_drives/$uuid
sudo umount /mnt/rdx_drives/$uuid
sudo ls -la /mnt/rdx_drives/$uuid
date
mount | grep $uuid
sleep 45
date
mount | grep $uuid

Comment 3 Scott Mayhew 2013-09-12 17:00:33 UTC
[root@localhost ~]# stap -d /lib64/libc-2.12.so -d /lib64/libgio-2.0.so.0.2200.5 -d /lib64/libglib-2.0.so.0.2200.5 -d /usr/libexec/gvfsd-trash -d /lib64/libgobject-2.0.so.0.2200.5 -e 'probe kernel.function("sys_inotify_add_watch").return { if (execname() == "gvfsd-trash") { printf("%s %s %s %d %s\n", tz_ctime(gettimeofday_s()), execname(), probefunc(), $return, errno_str($return)); print_ubacktrace(); printf("\n"); } }'

// The first two entries here are from the first time gvfsd-trash tries to add the inotify watch:

Thu Sep 12 10:41:16 2013 EDT gvfsd-trash sys_inotify_add_watch -13 EACCES
 0x354b8e8fe7 : inotify_add_watch+0x7/0x30 [/lib64/libc-2.12.so]
 0x3551070d8c : _ik_watch+0x2c/0xb0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551071b72 : _ip_start_watching+0x132/0x1e0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551072431 : _ih_sub_add+0x51/0x100 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551073051 : g_inotify_directory_monitor_constructor+0x71/0x120 [/lib64/libgio-2.0.so.0.2200.5]
 0x354fc11d81 : g_object_newv+0x291/0xae0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc12809 : g_object_new_valist+0x239/0x3b0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc12a4c : g_object_new+0xcc/0xe0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x3551066461 : _g_local_directory_monitor_new+0x71/0xe0 [/lib64/libgio-2.0.so.0.2200.5]
 0x40bd41 : dir_watch_recursive_create+0x51/0xc0 [/usr/libexec/gvfsd-trash]
 0x40bb47 : dir_watch_new+0xe7/0x110 [/usr/libexec/gvfsd-trash]
 0x40bb0c : dir_watch_new+0xac/0x110 [/usr/libexec/gvfsd-trash]
 0x40bb0c : dir_watch_new+0xac/0x110 [/usr/libexec/gvfsd-trash]
 0x40bb0c : dir_watch_new+0xac/0x110 [/usr/libexec/gvfsd-trash]
 0x40b4af : trash_dir_new+0x16f/0x180 [/usr/libexec/gvfsd-trash]
 0x40abe4 : trash_watcher_remount+0x154/0x1e0 [/usr/libexec/gvfsd-trash]
 0x354fc0bb3e : g_closure_invoke+0x15e/0x1e0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc20e23 : signal_emit_unlocked_R+0xda3/0x1840 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc220af : g_signal_emit_valist+0x7ef/0x910 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc225f3 : g_signal_emit+0x83/0x90 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc0bb3e : g_closure_invoke+0x15e/0x1e0 [/lib64/libgobject-2.0.so.0.2200.5]

Thu Sep 12 10:41:16 2013 EDT gvfsd-trash sys_inotify_add_watch -13 EACCES
 0x354b8e8fe7 : inotify_add_watch+0x7/0x30 [/lib64/libc-2.12.so]
 0x3551070d8c : _ik_watch+0x2c/0xb0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551071b72 : _ip_start_watching+0x132/0x1e0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551072431 : _ih_sub_add+0x51/0x100 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551073051 : g_inotify_directory_monitor_constructor+0x71/0x120 [/lib64/libgio-2.0.so.0.2200.5]
 0x354fc11d81 : g_object_newv+0x291/0xae0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc12809 : g_object_new_valist+0x239/0x3b0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc12a4c : g_object_new+0xcc/0xe0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x3551066461 : _g_local_directory_monitor_new+0x71/0xe0 [/lib64/libgio-2.0.so.0.2200.5]
 0x40bd41 : dir_watch_recursive_create+0x51/0xc0 [/usr/libexec/gvfsd-trash]
 0x40bb47 : dir_watch_new+0xe7/0x110 [/usr/libexec/gvfsd-trash]
 0x40bb0c : dir_watch_new+0xac/0x110 [/usr/libexec/gvfsd-trash]
 0x40bb0c : dir_watch_new+0xac/0x110 [/usr/libexec/gvfsd-trash]
 0x40b4af : trash_dir_new+0x16f/0x180 [/usr/libexec/gvfsd-trash]
 0x40ac0a : trash_watcher_remount+0x17a/0x1e0 [/usr/libexec/gvfsd-trash]
 0x354fc0bb3e : g_closure_invoke+0x15e/0x1e0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc20e23 : signal_emit_unlocked_R+0xda3/0x1840 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc220af : g_signal_emit_valist+0x7ef/0x910 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc225f3 : g_signal_emit+0x83/0x90 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc0bb3e : g_closure_invoke+0x15e/0x1e0 [/lib64/libgobject-2.0.so.0.2200.5]
 0x354fc20e23 : signal_emit_unlocked_R+0xda3/0x1840 [/lib64/libgobject-2.0.so.0.2200.5]

// After that, we'll see the following every 4 seconds

Thu Sep 12 10:41:19 2013 EDT gvfsd-trash sys_inotify_add_watch -13 EACCES
 0x354b8e8fe7 : inotify_add_watch+0x7/0x30 [/lib64/libc-2.12.so]
 0x3551070d8c : _ik_watch+0x2c/0xb0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551071b72 : _ip_start_watching+0x132/0x1e0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551072118 : im_scan_missing+0x88/0x250 [/lib64/libgio-2.0.so.0.2200.5]
 0x354d03961b : g_timeout_dispatch+0x1b/0x80 [/lib64/libglib-2.0.so.0.2200.5]
 0x354d038f0e : g_main_context_dispatch+0x22e/0x4b0 [/lib64/libglib-2.0.so.0.2200.5]
 0x354d03c938 : g_main_context_iterate+0x518/0x5a0 [/lib64/libglib-2.0.so.0.2200.5]
 0x354d03cd55 : g_main_loop_run+0x195/0x370 [/lib64/libglib-2.0.so.0.2200.5]
 0x409bee : daemon_main+0x18e/0x280 [/usr/libexec/gvfsd-trash]
 0x409e7c : main+0x4c/0x60 [/usr/libexec/gvfsd-trash]
 0x354b81ecdd : __libc_start_main+0xfd/0x1d0 [/lib64/libc-2.12.so]
 0x408099 : _start+0x29/0x2c [/usr/libexec/gvfsd-trash]

Thu Sep 12 10:41:19 2013 EDT gvfsd-trash sys_inotify_add_watch -13 EACCES
 0x354b8e8fe7 : inotify_add_watch+0x7/0x30 [/lib64/libc-2.12.so]
 0x3551070d8c : _ik_watch+0x2c/0xb0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551071b72 : _ip_start_watching+0x132/0x1e0 [/lib64/libgio-2.0.so.0.2200.5]
 0x3551072118 : im_scan_missing+0x88/0x250 [/lib64/libgio-2.0.so.0.2200.5]
 0x354d03961b : g_timeout_dispatch+0x1b/0x80 [/lib64/libglib-2.0.so.0.2200.5]
 0x354d038f0e : g_main_context_dispatch+0x22e/0x4b0 [/lib64/libglib-2.0.so.0.2200.5]
 0x354d03c938 : g_main_context_iterate+0x518/0x5a0 [/lib64/libglib-2.0.so.0.2200.5]
 0x354d03cd55 : g_main_loop_run+0x195/0x370 [/lib64/libglib-2.0.so.0.2200.5]
 0x409bee : daemon_main+0x18e/0x280 [/usr/libexec/gvfsd-trash]
 0x409e7c : main+0x4c/0x60 [/usr/libexec/gvfsd-trash]
 0x354b81ecdd : __libc_start_main+0xfd/0x1d0 [/lib64/libc-2.12.so]
 0x408099 : _start+0x29/0x2c [/usr/libexec/gvfsd-trash]
...

This is where we call inotify_add_watch:

gint32
_ik_watch (const char *path,
           guint32     mask,
           int        *err)
{
...
  wd = inotify_add_watch (inotify_instance_fd, path, mask);

  if (wd < 0)
    {
      int e = errno;
      /* FIXME: debug msg failed to add watch */
      if (err)
        *err = e; <----- Here's where we store the errno (in our case EACCES)
      return wd;
    }
...

But unfortunately the caller doesn't do anything with the error number:

_ip_start_watching (inotify_sub *sub)
{
  gint32 wd;
  int err; <----- Here's where we declared the variable that the errno is stored in
...
  wd = _ik_watch (sub->dirname, IP_INOTIFY_MASK|IN_ONLYDIR, &err);
  if (wd < 0)
    {
      IP_W ("Failed\n");
      return FALSE; <----- Here's where we return when we failed to add the inotify watch.  The errno is effectively tossed out at this point.
    }
...

Working our way back up the stack:

gboolean
_ih_sub_add (inotify_sub *sub)
{
...
  if (!_ip_start_watching (sub))
    _im_add (sub); <---- When we fail to add the inotify watch, we call this function which adds the directory to the missing_sub_list
...

void
_im_add (inotify_sub *sub)
{
...
  missing_sub_list = g_list_prepend (missing_sub_list, sub);
...
  if (!scan_missing_running)
    {
      scan_missing_running = TRUE;
      g_timeout_add_seconds (SCAN_MISSING_TIME, im_scan_missing, NULL); <-----This sets up a timer that runs im_scan_missing every 4 seconds
    }
}

SCAN_MISSING_TIME is 4 seconds:

#define SCAN_MISSING_TIME 4 /* 1/4 Hz */

When im_scan_missing runs, it walks the missing_sub_list and calls _ip_start_watching on each entry in the list:

static gboolean
im_scan_missing (gpointer user_data)
{
...
  for (l = missing_sub_list; l; l = l->next)
    {
...
      not_m = _ip_start_watching (sub); <----- So we're effectively right back where we started.
...
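
To make the cycle concrete, here is a minimal standalone model of the same pattern (a hypothetical test program, not the glib source): the watch attempt on an unreadable directory fails with EACCES, and the rescan retries it unconditionally every 4 seconds, just like im_scan_missing.  Note that each retry performs a path lookup on the directory, which is presumably what keeps the autofs mount looking busy.

---8<---
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/inotify.h>
#include <unistd.h>

int
main (int argc, char **argv)
{
  int fd, wd;

  if (argc != 2)
    {
      fprintf (stderr, "usage: %s <directory>\n", argv[0]);
      return 1;
    }

  fd = inotify_init ();
  if (fd < 0)
    {
      perror ("inotify_init");
      return 1;
    }

  /* Like im_scan_missing: retry unconditionally every 4 seconds, even
   * though EACCES will never clear up on its own. */
  for (;;)
    {
      wd = inotify_add_watch (fd, argv[1], IN_ALL_EVENTS | IN_ONLYDIR);
      if (wd >= 0)
        {
          printf ("watch added (wd=%d), stopping\n", wd);
          break;
        }
      printf ("inotify_add_watch: %s -- retrying in 4s\n", strerror (errno));
      sleep (4);
    }

  close (fd);
  return 0;
}
---8<---

Run it as an unprivileged user against a directory that has been chmod'ed o-rx and it prints the EACCES failure every 4 seconds, matching the stap output above.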

Comment 4 Scott Mayhew 2013-09-12 17:02:25 UTC
It seems to me there are two possible solutions here:

1. Propagate the error back up to _ih_sub_add and don't call _im_add on certain errors (like EACCES).  This would require modifications to libgio, which is in the glib2 package.

2. Have gvfsd-trash check the permissions on a mountpoint before trying to add an inotify watch on it, and bail out if it doesn't have the appropriate permissions.  According to inotify_add_watch(2), "the caller must have read permission for this file"... so a check such as the following could be added to trash_dir_new:

---8<---
  if (watching && !access(mount_point, R_OK))
    dir->watch = dir_watch_new (dir->directory,
                                dir->topdir,
                                trash_dir_created,
                                trash_dir_check,
                                trash_dir_destroyed,
                                dir);
  else
    dir->watch = NULL;
---8<---
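
One note on the check above: access() returns 0 on success, so !access(mount_point, R_OK) reads as "mount point is readable".  access(2) also checks against the real rather than the effective UID/GID, which should match gvfsd-trash running unprivileged as the desktop user.  For illustration, here is a self-contained sketch of the same guard (hypothetical names, not the actual gvfs patch):

---8<---
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/inotify.h>
#include <unistd.h>

/* Skip the watch entirely when the mount point is unreadable, since
 * inotify_add_watch(2) would only fail with EACCES and the directory
 * would end up being polled from the missing_sub_list. */
static int
watch_if_readable (int inotify_fd, const char *mount_point)
{
  if (access (mount_point, R_OK) != 0)
    {
      fprintf (stderr, "%s: not readable (%s), not watching\n",
               mount_point, strerror (errno));
      return -1;
    }

  return inotify_add_watch (inotify_fd, mount_point,
                            IN_CREATE | IN_DELETE | IN_ONLYDIR);
}

int
main (int argc, char **argv)
{
  int fd;

  if (argc != 2)
    {
      fprintf (stderr, "usage: %s <mount-point>\n", argv[0]);
      return 1;
    }

  fd = inotify_init ();
  if (fd < 0)
    {
      perror ("inotify_init");
      return 1;
    }

  printf ("watch descriptor: %d\n", watch_if_readable (fd, argv[1]));
  close (fd);
  return 0;
}
---8<---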

Either way, it looks like autofs is the wrong component for this bug.  Changing to gvfs.

Comment 5 RHEL Program Management 2013-10-13 23:15:38 UTC
This request was evaluated by Red Hat Product Management for
inclusion in the current release of Red Hat Enterprise Linux.
Because the affected component is not scheduled to be updated
in the current release, Red Hat is unable to address this
request at this time.

Red Hat invites you to ask your support representative to
propose this request, if appropriate, in the next release of
Red Hat Enterprise Linux.

Comment 7 Ondrej Holy 2014-09-25 15:50:24 UTC
*** Bug 1145651 has been marked as a duplicate of this bug. ***

Comment 8 Han Boetes 2014-09-26 08:31:45 UTC
The patch suggested here: https://bugzilla.gnome.org/show_bug.cgi?id=737371 appears to help. We are test-driving it now, and an strace of the running gvfsd-trash process no longer shows the typical attempts to check all unreadable mount points.

Comment 9 Ondrej Holy 2014-10-02 10:35:51 UTC
*** Bug 758466 has been marked as a duplicate of this bug. ***

Comment 17 errata-xmlrpc 2015-02-19 12:27:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-0237.html

