Bug 1456925 - Hundreds of gvfsd-trash processes are spawned when user runs Xsession / Gnome
Summary: Hundreds of gvfsd-trash processes are spawned when user runs Xsession / Gnome
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: gvfs
Version: 6.9
Hardware: x86_64
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: ---
Assignee: Ondrej Holy
QA Contact: Desktop QE
URL:
Whiteboard:
Depends On:
Blocks: 1374441 1461138 1492868
 
Reported: 2017-05-30 16:57 UTC by Alex Ladd
Modified: 2023-09-15 00:02 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-02-13 15:08:18 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1632960 0 unspecified CLOSED Hundreds of gvfsd-trash processes are spawned when user runs Xsession/Gnome after an NFS session failed 2023-03-24 14:15:47 UTC

Internal Links: 1632960

Description Alex Ladd 2017-05-30 16:57:55 UTC
Description of problem:

When users log into Gnome, many gvfsd-trash processes are spawned and go into D state (over 2000 in this case). As the processes pile up, users lose the ability to browse the filesystem, and there are hangs and crashes.


Version-Release number of selected component (if applicable):

gvfs-1.4.3-27.el6


How reproducible:

Users log into the GUI and use it normally


Steps to Reproduce:
1. Log into Gnome and use normally


Actual results:

gvfsd-trash processes start then never stop


Expected results:

one gvfsd-trash process per user session


Additional info:

We have observed this issue sometimes tied to the use of NFS, and this system in particular has multiple NFS mounts. In another situation, NFS errors were observed and addressed, which alleviated the gvfsd-trash issue. There are no observable NFS errors in this case.

Comment 2 Ondrej Holy 2017-05-31 14:25:20 UTC
Thanks for your report.

Can you please provide more info? When did this start happening? rhel-6.9 got only translation updates for gvfs, and rhel-6.8 got only unrelated patches. It might potentially relate to the fix for Bug 998061 from rhel-6.7...

I think the only reason you can see so many backends is that something tries to open trash:/// repeatedly while gvfsd-trash hangs on something before registering in the mount tracker. The question is why it hangs. As far as I know, the backend tries to access all mount points before registering, so it might hang on some unreachable mount for some reason...

Just an idea, but does the following finish without any error?
$ for p in $(mount | cut -d" " -f3); do stat $p > /dev/null || break; done

Can you please provide output from "mount" command?

Can you please provide backtrace(s) for the hanged gvfsd-trash backends?
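
One caveat with that one-liner: stat on a dead NFS mount can itself go into uninterruptible sleep, wedging the loop on the first bad mount point. A hedged variant of the same probe (a diagnostic sketch, not from this report; the 5-second threshold is arbitrary) caps each check with timeout so the loop keeps reporting:

```shell
# Probe every mount point, flagging any that do not answer within 5s.
# Caveat: a stat already stuck in uninterruptible (D) sleep ignores
# SIGTERM/SIGKILL, so even timeout can block on a truly dead mount;
# hard NFS mounts are the classic case.
for p in $(mount | cut -d" " -f3); do
    if timeout 5 stat "$p" > /dev/null 2>&1; then
        echo "ok    $p"
    else
        echo "SLOW/HUNG  $p"
    fi
done
```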

Comment 3 Ondrej Holy 2017-05-31 15:23:31 UTC
You can also try to find which application, if any, is accessing trash:/// so intensely, but it is a bit tricky to debug:
1) find who often calls MountLocation for trash location
$ dbus-monitor "interface=org.gtk.vfs.MountTracker,member=MountLocation"
2) try to obtain pid for the sender
$ dbus-send --print-reply --dest=org.freedesktop.DBus /org/freedesktop/DBus org.freedesktop.DBus.GetConnectionUnixProcessID string:[the value after sender= e.g. ":1.187" from 1)]
3) try to obtain application name using pid
$ ps ax | grep [pid - the value after uint32 from 2)]
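
The three steps above can be glued into a single watcher; a hypothetical sketch (the sed/awk field parsing assumes the dbus-monitor output format described above, e.g. a "sender=:1.187" token):

```shell
# Watch MountLocation calls and resolve each sender to a PID/command.
# Hypothetical helper combining steps 1)-3); runs until interrupted.
dbus-monitor "interface=org.gtk.vfs.MountTracker,member=MountLocation" |
while read -r line; do
    case "$line" in
    *sender=:*)
        # e.g. extract ":1.187" from "... sender=:1.187 ..."
        sender=$(printf '%s\n' "$line" | sed -n 's/.*sender=\(:[0-9.]*\).*/\1/p')
        pid=$(dbus-send --print-reply --dest=org.freedesktop.DBus \
              /org/freedesktop/DBus \
              org.freedesktop.DBus.GetConnectionUnixProcessID \
              "string:$sender" 2>/dev/null | awk '/uint32/ {print $2}')
        [ -n "$pid" ] && ps -p "$pid" -o pid=,comm=
        ;;
    esac
done
```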

Comment 4 valentine.darrell 2017-06-15 12:42:20 UTC
(In reply to Ondrej Holy from comment #2)

This started happening after the server was upgraded to RHEL 6.9.

Yes, the command completes without error:
$ for p in $(mount | cut -d" " -f3); do stat $p > /dev/null || break; done
$ 

$ mount
/dev/mapper/vg_one-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw,noexec,nosuid,nodev,rootcontext="system_u:object_r:tmpfs_t:s0")
/dev/mapper/vg_one-lv_home on /home type ext4 (rw)
/dev/mapper/vg_one-lv_opt on /opt type ext4 (rw)
/dev/mapper/vg_one-lv_tmp on /tmp type ext4 (rw)
/dev/mapper/vg_one-lv_usr on /usr type ext4 (rw)
/dev/mapper/vg_one-lv_var on /var type ext4 (rw)
/dev/mapper/vg_one-lv_varlog on /var/log type ext4 (rw)
/dev/mapper/mpathfp1 on /sasworkjd type ext4 (rw,discard)
/dev/mapper/mpathfp2 on /saswork type ext4 (rw,discard)
/dev/mapper/vg_lun6-lv_oessystems on /OESSystems type ext4 (rw,discard,acl)
/dev/mapper/vg_lun6-lv_sasworkoes on /sasworkoes type ext4 (rw,discard,acl)
/dev/mapper/vg_lun6-lv_sasworkoes1 on /sasworkoes1 type ext4 (rw,discard,acl)
/dev/mapper/vg_lun6-lv_sasworkoes2 on /sasworkoes2 type ext4 (rw,discard,acl)
/dev/mapper/vg_lun6-lv_wagework on /wagework type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_naics on /naics type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_naics_h on /naics_h type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_202files on /202files type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_cda_p on /cda_p type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_pnd_arch on /pnd_arch type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_cda1 on /cda1 type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_ldb on /ldb type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_ldbcu on /ldb/ldbcu type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_keypunch on /keypunch type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_files on /files type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_tabulation on /tabulation type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_saswork1 on /saswork1 type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_mwr on /MWR type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac2-lv_saswkmicmac on /saswkmicmac type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac2-lv_dbesproc on /dbesproc type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac-lv_micds on /micds type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac-lv_micmacext on /micmacext type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac-lv_qcewnds on /qcewnds type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac-lv_sends on /SENDS type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac-lv_sendst on /SENDST type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac-lv_betafiles on /betafiles type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac-lv_qcewpnd on /qcewpnd type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac-lv_micmacextp on /micmacextp type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac2-lv_sendswork7 on /sendswork7 type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac2-lv_sendswork6 on /sendswork6 type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac2-lv_sendswork5 on /sendswork5 type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac2-lv_sendswork4 on /sendswork4 type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac2-lv_sendswork3 on /sendswork3 type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac2-lv_sendswork2 on /sendswork2 type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac2-lv_sendswork1 on /sendswork1 type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac2-lv_sasworksun9 on /sasworksun9 type ext4 (rw,discard,acl)
/dev/mapper/vg_micmac3-lv_micmac on /micmac type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_ldbbed on /ldbbed type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_oracle_db_archive on /oracle_db_archive type ext4 (rw,discard,acl)
/dev/mapper/vg_lun1-lv_blscen on /blscen type ext4 (rw,discard,acl)
/dev/mapper/vg_oes-lv_oespro on /OESPRO type ext4 (rw,discard,acl)
/dev/mapper/vg_oes-lv_smsoes on /SMSOES type ext4 (rw,discard,acl)
/dev/mapper/vg_oes-lv_edb on /EDB type ext4 (rw,discard,acl)
/dev/mapper/vg_sled-lv_SLEDSys on /SLEDSys type ext4 (rw,discard,acl)
/dev/mapper/vg_sled-lv_dbestemp on /dbestemp type ext4 (rw,discard,acl)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
filer6-san:/cc130_cifsvol/daslt/CDA/cda_data/sas_equi on /sas_equi type nfs (rw,bg,hard,tcp,vers=3,rsize=65536,wsize=65536,nointr,addr=IP)
filer6-san:/cc738_cifsvol/dbes/sled on /SLEDSys/sledproject type nfs (rw,hard,proto=tcp,rsize=65536,wsize=65536,vers=3,addr=10.112.201.96)
filer6-san:/dews_vol/dews_qtree/IDCF/PROD/Upload/mwr on /SLEDSys/sledinputfiles type nfs (rw,bg,hard,proto=tcp,rsize=65536,wsize=65536,vers=3,addr=IP)
filer2-san:/vol/appvol on /sasdepot type nfs (rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536,nointr,addr=IP)
filer5-san:/vol/vol8/ldbsas on /ldbsas type nfs (rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536,nointr,addr=IP)
bkfiler2-san:/vol/dbesvol/ldbarchive on /ldbcu_arch type nfs (rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536,nointr,addr=IP)
bkfiler2-san:/vol/cc738_oes_archivevol on /oesProductionArchive type nfs (ro,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536,nointr,addr=IP)
filer5-san:/vol/mfarchperm on /micarch type nfs (rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536,nointr,addr=IP)
filer5-san:/vol/vol7/oesarchive on /OESProduction type nfs (ro,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536,nointr,addr=IP)
filer5-san:/vol/vol9 on /zastor type nfs (rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536,nointr,addr=IP)
filer5-san:/vol/dbesdev on /ldbsas_2 type nfs (rw,bg,hard,vers=3,proto=tcp,timeo=600,rsize=65536,wsize=65536,nointr,addr=IP)
filer6-san:/cc130_cifsvol/daslt/CDA/cda_dbes_data/test/cda_dbes_data on /micmac/cda type nfs (rw,bg,hard,vers=3,tcp,timeo=600,rsize=65536,wsize=65536,nointr,addr=IP)
filer2-san:/vol/ccfshare on /ccfshare type nfs (rw,addr=IP)
/dev/mapper/vg_oes-lv_oesauto on /OESAUTO type ext4 (rw)
/dev/sdq1 on /boot type ext4 (rw)

Comment 5 Ondrej Holy 2017-06-16 14:12:53 UTC
Thanks for the reply. Did you update from 6.8, and did everything work correctly with 6.8?

Comment 6 Ondrej Holy 2017-06-16 14:14:01 UTC
It was mentioned that this happens with NFS mounts, so I suppose it is caused by some inaccessible NFS share, although the "for p in $(mount | cut -d" " -f3); do stat $p > /dev/null || break; done" loop finished successfully last time...

I can simulate a similar situation:
1/ service nfs-server start # on server
2/ mount -t nfs server:path mountpoint
3/ service nfs-server stop # on server
4/ pkill -f gvfs
5/ for i in $(seq 1 10); do gvfs-ls trash:// & done

You can see 10 gvfsd-trash processes hanging until "service nfs-server start" is executed again. So it might really be a consequence of the changes from Bug 998061...

I see several problems here:
- something tries to access trash:// continually, regardless of outstanding operations and failures
- a new daemon is spawned for each access until one is successfully mounted
- the daemons hang when accessing an inaccessible NFS mount and never finish the mount operation

I don't understand why access() doesn't simply return an error after some reasonable timeout instead of (probably) hanging forever, but it should be possible to change this behavior with some mount options on the NFS mount points...
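
For context, the NFS client's hang-versus-error behavior is controlled by mount options: the default hard keeps retrying forever in uninterruptible sleep, while soft makes the kernel give up and return EIO after retrans retries. A hypothetical fstab entry (server and paths are placeholders, not taken from this report):

```
# /etc/fstab -- "soft" returns EIO after roughly retrans * timeo/10
# seconds (here about 3 * 10 s, ignoring backoff) instead of blocking
# forever. Caution: soft mounts can silently lose data on interrupted
# writes, so they are mainly suitable for read-only/diagnostic mounts.
server:/export  /mnt/export  nfs  soft,timeo=100,retrans=3,proto=tcp,vers=3  0 0
```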

Can you please try to obtain full backtrace for the hanged gvfsd-trash backends to be sure that this is what I am describing here?

Comment 7 Ondrej Holy 2017-06-16 14:14:59 UTC
(In reply to Ondrej Holy from comment #3)

I've just realized that this works only if something explicitly calls mount, which is not applicable for trash://, which should be automounted...

Comment 8 valentine.darrell 2017-06-16 14:30:45 UTC
(In reply to Ondrej Holy from comment #5)

Yes, it worked correctly with RHEL 6.8. No issues at all.

Comment 9 valentine.darrell 2017-06-16 14:41:18 UTC
(In reply to Ondrej Holy from comment #6)

There are no NFS errors reported by the server. I am not sure what you mean by "backtrace for the hanged backends".

Comment 10 Ondrej Holy 2017-06-19 11:12:48 UTC
Sorry, I meant to obtain a stack trace for some of the gvfsd-trash daemons, which can be done e.g. using the following:
gstack $(pidof -s gvfsd-trash)

However, gstack will probably also hang if the daemon is hung on the access() call...
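
When the target really is stuck in uninterruptible sleep, ptrace-based tools (gstack, gdb) block as well. On 2.6.29+ kernels (including RHEL 6's 2.6.32) the kernel-side stack is still readable from /proc as root; a hedged sketch, not from this report:

```shell
# Dump the scheduler state and kernel stack of every gvfsd-trash task.
# /proc/<pid>/stack is readable even for D-state tasks, but only by
# root and only when the kernel has stack tracing built in.
for pid in $(pidof gvfsd-trash); do
    echo "=== gvfsd-trash pid $pid ==="
    awk '{print "state:", $3}' "/proc/$pid/stat"
    cat "/proc/$pid/stack" 2>/dev/null || echo "(kernel stack not readable)"
done
```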

Comment 11 valentine.darrell 2017-06-23 13:41:46 UTC
(In reply to Ondrej Holy from comment #10)

There were so many gvfsd-trash processes that the server became unstable, and users were unable to complete their tasks due to the per-user process limit (1024). The GUI trash utility was disabled and the server rebooted to clear the D-state processes.

After the reboot, there are no longer any gvfsd-trash processes spawning and the server is stable. 

However, the other issue still persists: users cannot browse the file system from Gnome (it just hangs).

This is a production server and cannot be tinkered with too much.

Comment 12 Ondrej Holy 2017-06-26 15:09:39 UTC
Thanks for the update.

What is the "GUI trash utility" and how was it disabled? Do you mean the gvfsd-trash daemon? The trash daemon can be disabled by removing the /usr/share/gvfs/mounts/trash.mount file (changing the automount value to false may also work around the issue). However, this should not affect file system browsing in any way, apart from making the trash inaccessible.
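
For reference, gvfs mount backends are declared by small keyfiles under /usr/share/gvfs/mounts/; the trash one looks roughly like the following (reconstructed from memory of the gvfs sources, so the exact keys in your version may differ):

```
# /usr/share/gvfs/mounts/trash.mount (approximate contents).
# AutoMount=false keeps gvfsd from spawning the backend automatically;
# deleting or commenting out the Exec line disables it entirely.
[Mount]
Type=trash
Exec=/usr/libexec/gvfsd-trash
AutoMount=false
Scheme=trash
```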

Which filesystems can't the users browse? If you mean browsing some NFS mounts, that rather suggests network problems...

Let me know if you see it happening again, ideally with the stack trace attached if possible.

Comment 13 valentine.darrell 2017-06-27 16:19:26 UTC
(In reply to Ondrej Holy from comment #12)

Yes, I implemented the workaround of commenting out the Exec=/usr/libexec/gvfsd-trash line in the /usr/share/gvfs/mounts/trash.mount file and rebooting the server.

When any user opens the file browser within Nautilus/Gnome by double-clicking the Computer icon and then double-clicking the File System icon, the animation just spins without ever displaying the contents, and the Nautilus session eventually hangs.

The users can browse any NFS or local file system from a terminal using "cd /", or by entering the location via "Go" > "Location" in the menu bar and then browsing subdirectories.

The users can run the mount command without hanging or errors. They simply cannot use the File Browser.

Comment 14 Ondrej Holy 2017-06-28 14:43:08 UTC
Ok, your workaround should have the same effect, but it really should not cause problems with filesystem browsing. Both issues are probably just side effects of some filesystem problem...

The "Filesystem" icon in "Computer" is just a link to the "file:///" location (i.e. /). So I suppose a manually specified "file:///" location doesn't work in Nautilus either, only subdirectories, as you mentioned, am I right?

Almost all your mounts are mounted in the root. As far as I can tell, Nautilus does a lot of operations on the files and also creates thumbnails etc., so it might hang on one of the mounts for some reason. Does a plain "gvfs-ls file:///" work?

Can you please provide output from "gstack $(pidof nautilus)" when nautilus hangs after clicking on "Filesystem"?

Comment 15 valentine.darrell 2017-06-28 15:55:03 UTC
(In reply to Ondrej Holy from comment #14)

Correct, specifying "file:///" does not work.

After clicking on "Filesystem", gstack hangs without any output. When the Nautilus session crashed, gstack produced this:

$ gstack 34514
Thread 1 (process 34514):
#0  0x0000003f72edf383 in ?? ()

Comment 16 Ondrej Holy 2017-06-30 08:28:14 UTC
Thanks! Hmm, the stack trace isn't really useful. It probably hung on some uninterruptible kernel call. We have to find where it hangs somehow; we can't do much without that. Are you able to debug it step by step using gdb to see where it hangs (it would be easier to debug gvfs-ls if it shows the same symptoms)?

Can you please test whether a plain "gvfs-ls file:///" works? Does the following finish without hanging?
$ for f in $(ls -d /*/); do printf "$f: "; gvfs-info "file://$f" > /dev/null && gvfs-ls "file://$f" > /dev/null && echo "ok"; done

Comment 17 valentine.darrell 2017-06-30 14:50:50 UTC
(In reply to Ondrej Holy from comment #16)

Hi,

From a terminal, "gvfs-ls file:///" completes without error or hang, and the command above also completes without hanging.

I was able to get a gstack from nautilus today after clicking on "File System" during the hang:

gstack 33047         
Thread 2 (Thread 0x7f92b46d2700 (LWP 33109)):
#0  0x0000003f7360e82d in read () from /lib64/libpthread.so.0
#1  0x0000003f74641cfb in ?? () from /lib64/libglib-2.0.so.0
#2  0x0000003f7466a3e4 in ?? () from /lib64/libglib-2.0.so.0
#3  0x0000003f73607aa1 in start_thread () from /lib64/libpthread.so.0
#4  0x0000003f72ee8bcd in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7f92ba9b3960 (LWP 33047)):
#0  0x0000003f72edf383 in poll () from /lib64/libc.so.6
#1  0x0000003f74644a6a in ?? () from /lib64/libglib-2.0.so.0
#2  0x0000003f74645215 in g_main_loop_run () from /lib64/libglib-2.0.so.0
#3  0x00007f92bb8e4d17 in gtk_main () from /usr/lib64/libgtk-x11-2.0.so.0
#4  0x000000000041a02a in ?? ()
#5  0x0000003f72e1ed1d in __libc_start_main () from /lib64/libc.so.6
#6  0x000000000040ace9 in ?? ()
#7  0x00007fff7d2b00d8 in ?? ()
#8  0x000000000000001c in ?? ()
#9  0x0000000000000001 in ?? ()
#10 0x00007fff7d2b1665 in ?? ()
#11 0x0000000000000000 in ?? ()


Not sure if this helps any.

Comment 18 Ondrej Holy 2017-07-11 08:19:45 UTC
Hi, sorry for the delay, I was on vacation.

Can you please try gstack again with debug info installed? The following should install all the necessary debug info so that the trace shows something useful instead of ??:
debuginfo-install nautilus

Comment 19 Ondrej Holy 2017-07-11 08:20:36 UTC
Carlos, do you have any idea why "nautilus file:///" doesn't work when "gvfs-ls file:///" works?

Comment 20 Carlos Soriano 2017-07-13 14:40:15 UTC
Looking at the code and trying it myself, I still have no idea, sorry :/
I'm quite lost as to where the problem with this issue could be.

Comment 21 Ondrej Holy 2017-10-13 12:31:30 UTC
Can you please respond to Comment 18?

Comment 22 valentine.darrell 2017-10-23 17:55:22 UTC
This error continues to pose a problem for the users of this server. We have updated the server with the most recent RHEL 6.9 patches at least twice since this case was opened, but the issue persists.

Gstack continues to give me the same response when I run it against the nautilus PID.

The package mentioned in Comment 18 is not available when I search our repository.

Comment 23 Ondrej Holy 2017-10-24 12:51:33 UTC
It should be provided by the yum-utils package:

$ repoquery --whatprovides /usr/bin/debuginfo-install 
yum-utils-0:1.1.30-40.el6.noarch

Can you please try it again with that package?

Comment 25 Ondrej Holy 2018-02-01 19:24:14 UTC
This bug report still has many unanswered questions (what causes such a large number of trash requests, where the daemon hangs, etc.), and I can't do much without a proper backtrace, strace, a reproducer, or some other info...

I initially thought it related to Bug 998061, but it seems it does not, because you said it worked correctly in rhel-6.8. So the reproducer from Comment 6 is not relevant.

We have slightly drifted onto Nautilus, which is not necessarily relevant. Let's move back to the trash daemons. Feel free to file another bug report against Nautilus.

Can you please try to provide a backtrace of the hung gvfsd-trash, this time with the debug info included? It should work if you start the daemon manually (even though it is disabled).

Or it may be enough to provide just the strace output in this case. Can you please run the daemon under strace, let it hang, and then attach the log here?
$ strace /usr/libexec/gvfsd-trash 2>&1 | tee strace.log

Otherwise, I will have to close this bug report without any further info...

Comment 26 Robert Verstandig 2018-09-25 05:01:16 UTC
I have seen this issue come up twice in RHEL 7.5 now. Both instances occurred when an NFS share failed during normal operation.

In the first instance, it was due to a share being removed from service but left in the /etc/fstab file. The second instance occurred today when a student tmpfs share stopped working for some reason. gvfsd-trash processes continuously respawned and crashed the cluster frontend even after I cleared the user processes.

This also broke root login: it hung during the login process. A Ctrl-C got me to a login prompt, but the shell was broken and kept throwing VTE errors until I sourced the .bashrc file manually. Subsequent logins failed the same way.

I ended up having to restart the frontend.

Still a problem here guys...

Comment 27 Robert Verstandig 2018-09-25 05:11:36 UTC
The actual error I was getting from the shell was: bash: __vte_prompt_command: command not found.

Comment 28 Ondrej Holy 2018-09-25 15:39:43 UTC
Can you please file a new bug against the RHEL 7 product? The gvfs version differs between the products, and this bug report already contains a lot of misleading information.

Comment 29 Red Hat Bugzilla 2023-09-15 00:02:22 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 500 days

