Description of problem:

Noticing this:

(gnome-panel:3213): GVFS-RemoteVolumeMonitor-WARNING **: invoking IsSupported() failed for remote volume monitor with dbus name org.gtk.Private.GduVolumeMonitor: org.freedesktop.DBus.Error.Spawn.ChildSignaled: Process /usr/libexec/gvfs-gdu-volume-monitor received signal 11

(nautilus:3241): GVFS-RemoteVolumeMonitor-WARNING **: invoking IsSupported() failed for remote volume monitor with dbus name org.gtk.Private.GduVolumeMonitor: org.freedesktop.DBus.Error.Spawn.ChildSignaled: Process /usr/libexec/gvfs-gdu-volume-monitor received signal 11

and

Jul 5 09:11:09 tlondon kernel: <6>gvfs-gdu-volume[3377]: segfault at 0 ip (null) sp 00007fff5babac48 error 14 in gvfs-gdu-volume-monitor[400000+15000]
Jul 5 09:11:09 tlondon kernel: <6>gvfs-gdu-volume[3383]: segfault at 0 ip (null) sp 00007fff89bc3a68 error 14 in gvfs-gdu-volume-monitor[400000+15000]
Jul 5 09:11:45 tlondon kernel: <6>gvfs-gdu-volume[4710]: segfault at 0 ip (null) sp 00007fff14f4ade8 error 14 in gvfs-gdu-volume-monitor[400000+15000]

Not sure how to run/debug this further...

Version-Release number of selected component (if applicable):
gvfs-archive-1.3.1-2.fc12.x86_64
gvfs-gphoto2-1.3.1-2.fc12.x86_64
gvfs-1.3.1-2.fc12.x86_64
gvfs-smb-1.3.1-2.fc12.x86_64
gvfs-obexftp-1.3.1-2.fc12.x86_64
gvfs-fuse-1.3.1-2.fc12.x86_64

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Noticed this too:

(audacity:11789): GVFS-RemoteVolumeMonitor-WARNING **: invoking IsSupported() failed for remote volume monitor with dbus name org.gtk.Private.GduVolumeMonitor: org.freedesktop.DBus.Error.Spawn.ChildSignaled: Process /usr/libexec/gvfs-gdu-volume-monitor received signal 11

Appears to be correlated with:

Jul 6 09:52:32 tlondon kernel: <6>gvfs-gdu-volume[11888]: segfault at 0 ip (null) sp 00007fff97c03a28 error 14 in gvfs-gdu-volume-monitor[400000+15000]
Appears that I got another one:

Jul 6 13:53:24 tlondon kernel: <6>gvfs-gdu-volume[22070]: segfault at 0 ip (null) sp 00007fff6d4d2b48 error 14 in gvfs-gdu-volume-monitor[400000+15000]

This time, it looks like it caused firefox to core dump:

Assertion 'out->clean_up' failed at pulse.c:593, function stream_drain_cb(). Aborting.
/usr/lib64/firefox-3.5/run-mozilla.sh: line 131: 4083 Aborted (core dumped) "$prog" ${1+"$@"}
Loading in as many debuginfo packages as I could easily find, I see this from the firefox core:

Core was generated by `/usr/lib64/firefox-3.5/firefox'.
Program terminated with signal 6, Aborted.
#0  0x00000033b960ed5b in raise () from /lib64/libpthread.so.0
Missing separate debuginfos, use: debuginfo-install PackageKit-gtk-module-0.5.0-1.fc12.x86_64 e2fsprogs-libs-1.41.4-10.fc11.x86_64 glibc-2.10.1-2.x86_64 krb5-libs-1.6.3-20.fc11.x86_64 openssl-0.9.8k-1.fc11.x86_64 pulseaudio-libs-0.9.15-11.fc11.x86_64
(gdb) where
#0  0x00000033b960ed5b in raise () from /lib64/libpthread.so.0
#1  0x00000038f3276b8a in nsProfileLock::FatalSignalHandler (signo=6) at nsProfileLock.cpp:212
#2  <signal handler called>
#3  0x00000033b8e332f5 in raise () from /lib64/libc.so.6
#4  0x00000033b8e34b20 in abort () from /lib64/libc.so.6
#5  0x00007f0fa315dc53 in stream_drain_cb (s=<value optimized out>, success=1, userdata=0x7f0fae305240) at pulse.c:588
#6  0x00007f0fa315ffa0 in __PRETTY_FUNCTION__.8041 () from /usr/lib64/libcanberra-0.14/libcanberra-pulse.so
#7  0x00000034cc42b25f in pa_run_once () from /usr/lib64/libpulsecommon-0.9.15.so
Backtrace stopped: previous frame inner to this frame (corrupt stack?)
(gdb)

Is this related/connected?
The firefox crash is unrelated to the gvfs crash
What would really help here is a stacktrace of the volume monitor crash.

If you run /usr/libexec/gvfs-gdu-volume-monitor by hand in your session, does it crash right away? If not, does it crash if you run

dbus-send --session --dest=org.gtk.Private.GduVolumeMonitor --type=method_call --print-reply /org/gtk/Private/RemoteVolumeMonitor org.gtk.Private.RemoteVolumeMonitor.IsSupported

in another terminal?
I can run /usr/libexec/gvfs-gdu-volume-monitor by hand and it does not crash: I get a nice popup immediately of the file system on the SD card that is (semi)permanently plugged in to my system.

Also,

[tbl@tlondon ~]$ dbus-send --session --dest=org.gtk.Private.GduVolumeMonitor --type=method_call --print-reply /org/gtk/Private/RemoteVolumeMonitor org.gtk.Private.RemoteVolumeMonitor.IsSupported
method return sender=:1.205 -> dest=:1.225 reply_serial=2
   boolean true
[tbl@tlondon ~]$

causes no crash.

From ~/.xsession-errors:

(nautilus:1733): GVFS-RemoteVolumeMonitor-WARNING **: Owner :1.37 of volume monitor org.gtk.Private.GduVolumeMonitor disconnected from the bus; removing drives/volumes/mounts
(nautilus:1733): GVFS-RemoteVolumeMonitor-WARNING **: New owner :1.183 for volume monitor org.gtk.Private.GduVolumeMonitor connected to the bus; seeding drives/volumes/mounts
(gnome-panel:1715): GVFS-RemoteVolumeMonitor-WARNING **: Owner :1.37 of volume monitor org.gtk.Private.GduVolumeMonitor disconnected from the bus; removing drives/volumes/mounts
(gnome-panel:1715): GVFS-RemoteVolumeMonitor-WARNING **: New owner :1.183 for volume monitor org.gtk.Private.GduVolumeMonitor connected to the bus; seeding drives/volumes/mounts
(gnome-panel:1715): GVFS-RemoteVolumeMonitor-WARNING **: Owner :1.183 of volume monitor org.gtk.Private.GduVolumeMonitor disconnected from the bus; removing drives/volumes/mounts
(nautilus:1733): GVFS-RemoteVolumeMonitor-WARNING **: Owner :1.183 of volume monitor org.gtk.Private.GduVolumeMonitor disconnected from the bus; removing drives/volumes/mounts
(nautilus:1733): GVFS-RemoteVolumeMonitor-WARNING **: New owner :1.205 for volume monitor org.gtk.Private.GduVolumeMonitor connected to the bus; seeding drives/volumes/mounts
(gnome-panel:1715): GVFS-RemoteVolumeMonitor-WARNING **: New owner :1.205 for volume monitor org.gtk.Private.GduVolumeMonitor connected to the bus; seeding drives/volumes/mounts
I have something similar (but on Fedora 11), several times per day:

Aug 12 11:03:32 vaako kernel: gvfs-gdu-volume[22672]: segfault at 18 ip 0000003b8960b441 sp 00007fff89a03900 error 4 in libgdu.so.0.0.0[3b89600000+22000]

-arne
(In reply to comment #7)
> i have something similar (but on Fedora Core 11) several times per day...:
> Aug 12 11:03:32 vaako kernel: gvfs-gdu-volume[22672]: segfault at 18 ip
> 0000003b8960b441 sp 00007fff89a03900 error 4 in
> libgdu.so.0.0.0[3b89600000+22000]

Please try to grab a stacktrace; see comment #5 for hints.
I installed the debuginfo versions [38 MiB :-)]... then I ran

> gdb /usr/libexec/gvfs-gdu-volume-monitor

and typed

> run

Starting program: /usr/libexec/gvfs-gdu-volume-monitor
[Thread debugging using libthread_db enabled]

(process:1472): libgdu-WARNING **: Couldn't call GetAll() to get properties for /: Cannot launch daemon, file not found or permissions invalid
(process:1472): libgdu-WARNING **: Couldn't get daemon properties
(process:1472): GLib-GObject-WARNING **: invalid (NULL) pointer instance
(process:1472): GLib-GObject-CRITICAL **: g_signal_connect_data: assertion `G_TYPE_CHECK_INSTANCE (instance)' failed
(process:1472): GLib-GObject-WARNING **: invalid (NULL) pointer instance
(process:1472): GLib-GObject-CRITICAL **: g_signal_connect_data: assertion `G_TYPE_CHECK_INSTANCE (instance)' failed
(process:1472): GLib-GObject-WARNING **: invalid (NULL) pointer instance
(process:1472): GLib-GObject-CRITICAL **: g_signal_connect_data: assertion `G_TYPE_CHECK_INSTANCE (instance)' failed
(process:1472): GLib-GObject-WARNING **: invalid (NULL) pointer instance
(process:1472): GLib-GObject-CRITICAL **: g_signal_connect_data: assertion `G_TYPE_CHECK_INSTANCE (instance)' failed

Program received signal SIGSEGV, Segmentation fault.
gdu_pool_get_presentables (pool=0x0) at gdu-pool.c:1378
1378	        ret = g_list_copy (pool->priv->presentables);
Missing separate debuginfos, use: debuginfo-install ORBit2-2.14.17-1.fc11.x86_64 dbus-glib-0.80-2.fc11.x86_64 expat-2.0.1-6.x86_64 gamin-0.1.10-4.fc11.x86_64 libattr-2.4.43-3.fc11.x86_64 libcap-2.16-4.fc11.1.x86_64 libselinux-2.0.80-1.fc11.x86_64

It core dumped, and then I typed

> where

and got this stack trace:

#0  gdu_pool_get_presentables (pool=0x0) at gdu-pool.c:1378
#1  0x000000000040d466 in update_drives (removed_drives=<value optimized out>, added_drives=<value optimized out>, monitor=<value optimized out>) at ggduvolumemonitor.c:1121
#2  update_all (removed_drives=<value optimized out>, added_drives=<value optimized out>, monitor=<value optimized out>) at ggduvolumemonitor.c:957
#3  0x000000000040e015 in g_gdu_volume_monitor_constructor (type=<value optimized out>, n_construct_properties=<value optimized out>, construct_properties=<value optimized out>) at ggduvolumemonitor.c:455
#4  0x0000003b8b211a39 in IA__g_object_newv (object_type=<value optimized out>, n_parameters=0, parameters=<value optimized out>) at gobject.c:1215
#5  0x0000003b8b212585 in IA__g_object_new_valist (object_type=6441520, first_property_name=0x0, var_args=0x7fffffffe030) at gobject.c:1278
#6  0x0000003b8b2126dc in IA__g_object_new (object_type=6441520, first_property_name=0x0) at gobject.c:1060
#7  0x000000000040e380 in monitor_try_create () at gvfsproxyvolumemonitordaemon.c:1651
#8  0x000000000040e4c1 in g_vfs_proxy_volume_monitor_daemon_main (argc=<value optimized out>, argv=<value optimized out>, dbus_name=0x411890 "org.gtk.Private.GduVolumeMonitor", volume_monitor_type=6441520) at gvfsproxyvolumemonitordaemon.c:1694
#9  0x0000003b8921ea2d in __libc_start_main (main=<value optimized out>, argc=<value optimized out>, ubp_av=<value optimized out>, init=<value optimized out>, fini=<value optimized out>, rtld_fini=<value optimized out>, stack_end=0x7fffffffe248) at libc-start.c:220
#10 0x00000000004069e9 in _start ()

Then I typed

> list

1373	 **/
1374	GList *
1375	gdu_pool_get_presentables (GduPool *pool)
1376	{
1377	        GList *ret;
1378	        ret = g_list_copy (pool->priv->presentables);
1379	        g_list_foreach (ret, (GFunc) g_object_ref, NULL);
1380	        return ret;
1381	}
1382

-arne
*** Bug 518454 has been marked as a duplicate of this bug. ***
*** Bug 519323 has been marked as a duplicate of this bug. ***
The last ones of these that I've seen on my rawhide system are:

/var/log/messages-20090712:Jul 7 08:26:01 tlondon kernel: <6>gvfs-gdu-volume[2528]: segfault at 0 ip (null) sp 00007fff87d1f048 error 14 in gvfs-gdu-volume-monitor[400000+15000]
/var/log/messages-20090712:Jul 7 08:26:36 tlondon kernel: <6>gvfs-gdu-volume[2716]: segfault at 0 ip (null) sp 00007ffffa318098 error 14 in gvfs-gdu-volume-monitor[400000+15000]

Since then, nothing. I'm running now:

gvfs-smb-1.3.5-2.fc12.x86_64
gvfs-archive-1.3.5-2.fc12.x86_64
gvfs-fuse-1.3.5-2.fc12.x86_64
gvfs-gphoto2-1.3.5-2.fc12.x86_64
gvfs-obexftp-1.3.5-2.fc12.x86_64
gvfs-debuginfo-1.3.5-1.fc12.x86_64
gvfs-1.3.5-2.fc12.x86_64

/var/log/yum.log shows these updates:

Jul 07 08:50:20 Updated: evolution-data-server-doc-2.27.3-3.fc12.noarch
Jul 07 08:50:25 Updated: glibc-2.10.90-2.x86_64
Jul 07 08:50:25 Installed: libcom_err-1.41.7-1.fc12.x86_64
Jul 07 08:50:26 Updated: krb5-libs-1.7-3.fc12.x86_64
Jul 07 08:50:27 Updated: openssl-0.9.8k-6.fc12.x86_64
Jul 07 08:50:28 Installed: libuuid-1.41.7-1.fc12.x86_64
Jul 07 08:50:28 Installed: libblkid-2.15.1-1.fc12.x86_64
Jul 07 08:50:30 Updated: util-linux-ng-2.15.1-1.fc12.x86_64
Jul 07 08:50:30 Updated: e2fsprogs-libs-1.41.7-1.fc12.x86_64
Jul 07 08:50:31 Installed: libss-1.41.7-1.fc12.x86_64
Jul 07 08:50:31 Installed: libudev-143-2.fc12.x86_64
Jul 07 08:50:33 Updated: evolution-data-server-2.27.3-3.fc12.x86_64
Jul 07 08:50:33 Updated: libcurl-7.19.5-6.fc12.x86_64
Jul 07 08:50:34 Updated: bluez-libs-4.43-2.fc12.x86_64
Jul 07 08:50:35 Updated: e2fsprogs-1.41.7-1.fc12.x86_64
Jul 07 08:50:35 Updated: openssh-5.2p1-12.fc12.x86_64
Jul 07 08:50:37 Updated: libpurple-2.5.8-1.fc12.x86_64
Jul 07 08:50:46 Updated: glibc-common-2.10.90-2.x86_64
Jul 07 08:50:47 Installed: udev-143-2.fc12.x86_64
Jul 07 08:50:48 Installed: libgudev1-143-2.fc12.x86_64
Jul 07 08:50:54 Updated: gnome-disk-utility-libs-0.4-1.fc12.x86_64
Jul 07 08:50:55 Updated: gnome-disk-utility-ui-libs-0.4-1.fc12.x86_64
Jul 07 08:50:56 Updated: grubby-7.0-1.fc12.x86_64
Jul 07 08:50:56 Updated: gnome-bluetooth-libs-2.27.7.1-1.fc12.x86_64
Jul 07 08:51:09 Updated: pidgin-2.5.8-1.fc12.x86_64
Jul 07 08:51:09 Updated: openssh-clients-5.2p1-12.fc12.x86_64
Jul 07 08:51:11 Updated: openssh-server-5.2p1-12.fc12.x86_64
Jul 07 08:51:11 Updated: bluez-cups-4.43-2.fc12.x86_64
Jul 07 08:51:13 Updated: krb5-workstation-1.7-3.fc12.x86_64
Jul 07 08:51:16 Updated: 1:nfs-utils-1.2.0-5.fc12.x86_64
Jul 07 08:51:34 Updated: gnome-power-manager-2.27.2-1.fc12.x86_64
Jul 07 08:51:35 Installed: libcom_err-devel-1.41.7-1.fc12.x86_64
Jul 07 08:51:36 Updated: e2fsprogs-devel-1.41.7-1.fc12.x86_64
Jul 07 08:51:38 Updated: evolution-data-server-devel-2.27.3-3.fc12.x86_64
Jul 07 08:51:42 Updated: glibc-devel-2.10.90-2.x86_64
Jul 07 08:51:44 Updated: openssl-devel-0.9.8k-6.fc12.x86_64
Jul 07 08:51:46 Updated: libcurl-devel-7.19.5-6.fc12.x86_64
Jul 07 08:51:54 Updated: gnome-bluetooth-2.27.7.1-1.fc12.x86_64
Jul 07 08:52:06 Erased: udev-extras
Jul 07 08:52:20 Erased: libudev0

Possibly fixed (for me) in one of these?
Was this on x86_64? If so, it might be fixed by recent fixes in ld.so on x86_64... Putting on F12Blocker, since we want to make sure it is really gone.
x86_64 for me. Again, this vanished for me after updates on 7 July, including glibc-2.10.90-2.x86_64.
x86_64 for me, too... I use a fully updated F11 system and still have that problem (last occurrence: today, 07:17:26 UTC).

-arne
*** Bug 520059 has been marked as a duplicate of this bug. ***
> (process:1472): libgdu-WARNING **: Couldn't call GetAll() to get properties for /: Cannot launch daemon, file not found or permissions invalid
> (process:1472): libgdu-WARNING **: Couldn't get daemon properties

This is the source of the problem: apparently libgdu cannot connect to devkit-disks-daemon, and gdu_pool_new() returns NULL afterwards. Adding explicit NULL checks doesn't make sense; we need the root cause.

Can you please post the versions of your gnome-disk-utility and DeviceKit-disks packages? Try to find out why devkit-disks-daemon cannot be spawned. What about SELinux? Do other gvfs backends work fine (e.g. ftp)?
I disabled SELinux because it didn't like my httpd/CGI things... I'm not aware of any successful or unsuccessful gvfs activity, because I don't know much about gvfs... just those strange error messages in /var/log/messages puzzled me a little. According to the daily report, I had 13 gvfs-gdu-related crashes yesterday.

-arne
*** Bug 524365 has been marked as a duplicate of this bug. ***
*** Bug 524491 has been marked as a duplicate of this bug. ***
(In reply to bug 524491 comment #2)
> Please always fill repro steps, to have an idea, what happened.

Yeah, but I wasn't sure how to reproduce it - from my point of view it was crashing often and for no apparent reason.
I've just faced this issue during a rawhide upgrade, with palimpsest. The old devkit-disks-daemon kept running, resulting in the typical problem of mismatched APIs.

For everybody interested: can you please try upgrading to gnome-disk-utility-2.28.0-2.fc12 + DeviceKit-disks-007-1.fc12 to see if the problem persists? Don't forget to kill old instances of devkit-disks-daemon, or do a reboot.

This is what I got with a leftover gvfs-gdu-volume-monitor instance (the old instance crashes and the new one correctly connects to the new daemon version):

gvfs-gdu-volume[2577]: segfault at 0 ip 0000000000409c3d sp 00007fffd81dbe60 error 4 in gvfs-gdu-volume-monitor;4ab7739d (deleted)[400000+18000]
Could somebody please provide those packages for F11? Thanks.

-arne
I'm a tad confused; what's the state of this bug with regard to Fedora 12?
I don't think there have been confirmed sightings of this in recent-ish rawhide. The original problem here was at login, I believe, and we've had a fix to the dbus timeout logic, which had been making timeouts effectively much shorter than expected; that might explain failing dbus calls at a time of high activity (such as login).
I supposed the bug went away with a consistent update of the gnome-disk-utility and DeviceKit-disks packages, having the same API version. But bug 529982 has recently been reported, though it doesn't happen at startup but rather after making physical changes in Palimpsest. I will look at that in more detail.
Closing for now then. Re-open if you see it again.