Bug 1026283 - Nautilus eating 100% cpu
Summary: Nautilus eating 100% cpu
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Fedora
Classification: Fedora
Component: tracker
Version: 20
Hardware: Unspecified
OS: Other
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Deji Akingunola
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard: RejectedBlocker AcceptedFreezeException
Depends On:
Blocks: F20FinalFreezeException
 
Reported: 2013-11-04 10:45 UTC by Lukas Bezdicka
Modified: 2014-01-15 06:03 UTC
CC: 15 users

Fixed In Version: tracker-0.16.2-3.fc19
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2013-12-06 05:55:18 UTC
Type: Bug
Embargoed:


Attachments
strace of nautilus process (3.62 MB, text/plain) - 2013-11-04 10:45 UTC, Lukas Bezdicka
traces (15.25 KB, text/plain) - 2013-11-20 17:45 UTC, Petr Schindler
traces with sqlite-3.8.0.2-4 (21.51 KB, text/plain) - 2013-11-25 12:05 UTC, Petr Schindler
tracker-busy-loop.patch (untested) (706 bytes, patch) - 2013-11-27 17:19 UTC, Tim Waugh
tracker-busy-loop.patch (untested) (1.04 KB, patch) - 2013-11-27 17:25 UTC, Tim Waugh
simpler (1.78 KB, patch) - 2013-12-03 01:56 UTC, Michael Catanzaro
Avoid introducing a memory leak (1.17 KB, patch) - 2013-12-03 02:23 UTC, Michael Catanzaro


Links
GNOME Bugzilla 710413

Description Lukas Bezdicka 2013-11-04 10:45:11 UTC
Created attachment 819049 [details]
strace of nautilus process

Description of problem:
Shortly after I ran kinit I'm getting 100% cpu usage from nautilus, polling on an fd.

Version-Release number of selected component (if applicable):
nautilus-3.10.0-1.fc20.x86_64

How reproducible:
sometimes

Steps to Reproduce:
1. login
2. run kinit for your internal systems
3. see nautilus eat one core

Actual results:
100% cpu usage

Expected results:
nothing

Additional info:

Comment 1 Tim Waugh 2013-11-12 14:53:28 UTC
Lukas: are you still seeing this? I think it might have been fixed.

Comment 2 Petr Schindler 2013-11-18 14:19:04 UTC
Not sure if it's the same problem, but sometimes when I run htop I can see nautilus using 100% cpu, and it doesn't stop. I always have to kill it.

Command is /usr/bin/nautilus --no-default-window

Comment 3 Petr Schindler 2013-11-18 14:24:28 UTC
It also seems to prevent starting a new nautilus window. When I see this happening and try to run nautilus (from the terminal or from the desktop menu), it doesn't start.

Proposing as blocker. When you hit this bug you can't run 'Files', which is in the default dash. So it violates the criterion: "All applications that can be launched using the standard graphical mechanism of a release-blocking desktop after a default installation of that desktop must start successfully and withstand a basic functionality test."

Comment 4 Matthias Clasen 2013-11-20 01:56:42 UTC
Hmm, doesn't reproduce here so far. Do you have any bookmarks or mounts that only become available after kinit? It would be useful to have a few backtraces from when nautilus is busy like that.

Comment 5 Ray Strode [halfline] 2013-11-20 01:59:11 UTC
can you install debuginfo, and then when it's stuck in 100% cpu usage, get a few backtraces ?

something like

for i in $(seq 1 3); do gstack $(pidof nautilus); done > three-stack-traces.txt

Comment 6 Ray Strode [halfline] 2013-11-20 01:59:34 UTC
(mid-air collision with Matthias)

Comment 7 Petr Schindler 2013-11-20 17:45:30 UTC
Created attachment 826778 [details]
traces

It happened to me again, but I'm still not sure when and how. You can find the traces in the attachment.

Comment 8 Adam Williamson 2013-11-20 18:54:26 UTC
Discussed at 2013-11-20 blocker review meeting: http://meetbot.fedoraproject.org/fedora-blocker-review/2013-11-20/f20-blocker-review.2013-11-20-17.00.log.txt . We agreed that this is a worrying issue and may wind up constituting a blocker, but we need more precise information on exactly what's going on to determine how many people it's likely to affect and hence whether it truly merits blocker status. If the reporters / assignees could find out exactly what conditions trigger this bug, it would help.

(Several attendees at the meeting are running GNOME 3 and have not seen this; I don't recall seeing it at all in my F20 testing).

Comment 9 Ray Strode [halfline] 2013-11-20 19:07:18 UTC
So it looks like tracker is to blame.  In those stack traces, only thread 6 seems to be doing anything:

#0  0x000000389a03fe02 in sqlite3VdbeTransferError () from /lib64/libsqlite3.so.0
#1  0x000000389a076a0f in sqlite3_step () from /lib64/libsqlite3.so.0
#2  0x000000385ba6410f in function_weights () from /usr/lib64/tracker-0.16/libtracker-data.so.0
#3  0x000000389a0708d2 in sqlite3VdbeExec () from /lib64/libsqlite3.so.0
#4  0x000000389a076acf in sqlite3_step () from /lib64/libsqlite3.so.0
#5  0x000000385ba56ba0 in db_cursor_iter_next () from /usr/lib64/tracker-0.16/libtracker-data.so.0
#6  0x000000385ba572f7 in tracker_db_cursor_iter_next_thread () from /usr/lib64/tracker-0.16/libtracker-data.so.0
#7  0x000000399546f67f in run_in_thread () from /lib64/libgio-2.0.so.0
#8  0x000000399545cf36 in io_job_thread () from /lib64/libgio-2.0.so.0
#9  0x000000399547c865 in g_task_thread_pool_thread () from /lib64/libgio-2.0.so.0
#10 0x000000399486f406 in g_thread_pool_thread_proxy () from /lib64/libglib-2.0.so.0
#11 0x000000399486ea45 in g_thread_proxy () from /lib64/libglib-2.0.so.0
#12 0x00000036cb407f33 in start_thread () from /lib64/libpthread.so.0
#13 0x00000036cacf4ead in clone () from /lib64/libc.so.6


#0  0x000000389a01d67a in sqlite3BtreeDataFetch.part.124 () from /lib64/libsqlite3.so.0
#1  0x000000389a0759f3 in sqlite3VdbeExec () from /lib64/libsqlite3.so.0
#2  0x000000389a076acf in sqlite3_step () from /lib64/libsqlite3.so.0
#3  0x000000385ba6410f in function_weights () from /usr/lib64/tracker-0.16/libtracker-data.so.0
#4  0x000000389a0708d2 in sqlite3VdbeExec () from /lib64/libsqlite3.so.0
#5  0x000000389a076acf in sqlite3_step () from /lib64/libsqlite3.so.0
#6  0x000000385ba56ba0 in db_cursor_iter_next () from /usr/lib64/tracker-0.16/libtracker-data.so.0
#7  0x000000385ba572f7 in tracker_db_cursor_iter_next_thread () from /usr/lib64/tracker-0.16/libtracker-data.so.0
#8  0x000000399546f67f in run_in_thread () from /lib64/libgio-2.0.so.0
#9  0x000000399545cf36 in io_job_thread () from /lib64/libgio-2.0.so.0
#10 0x000000399547c865 in g_task_thread_pool_thread () from /lib64/libgio-2.0.so.0
#11 0x000000399486f406 in g_thread_pool_thread_proxy () from /lib64/libglib-2.0.so.0
#12 0x000000399486ea45 in g_thread_proxy () from /lib64/libglib-2.0.so.0
#13 0x00000036cb407f33 in start_thread () from /lib64/libpthread.so.0
#14 0x00000036cacf4ead in clone () from /lib64/libc.so.6

Thread 6 (Thread 0x7fce469dd700 (LWP 3466)):
#0  0x000000389a0743f6 in sqlite3VdbeExec () from /lib64/libsqlite3.so.0
#1  0x000000389a076acf in sqlite3_step () from /lib64/libsqlite3.so.0
#2  0x000000385ba6410f in function_weights () from /usr/lib64/tracker-0.16/libtracker-data.so.0
#3  0x000000389a0708d2 in sqlite3VdbeExec () from /lib64/libsqlite3.so.0
#4  0x000000389a076acf in sqlite3_step () from /lib64/libsqlite3.so.0
#5  0x000000385ba56ba0 in db_cursor_iter_next () from /usr/lib64/tracker-0.16/libtracker-data.so.0
#6  0x000000385ba572f7 in tracker_db_cursor_iter_next_thread () from /usr/lib64/tracker-0.16/libtracker-data.so.0
#7  0x000000399546f67f in run_in_thread () from /lib64/libgio-2.0.so.0
#8  0x000000399545cf36 in io_job_thread () from /lib64/libgio-2.0.so.0
#9  0x000000399547c865 in g_task_thread_pool_thread () from /lib64/libgio-2.0.so.0
#10 0x000000399486f406 in g_thread_pool_thread_proxy () from /lib64/libglib-2.0.so.0
#11 0x000000399486ea45 in g_thread_proxy () from /lib64/libglib-2.0.so.0
#12 0x00000036cb407f33 in start_thread () from /lib64/libpthread.so.0
#13 0x00000036cacf4ead in clone () from /lib64/libc.so.6

So it's stepping through an SQLite database?

Comment 10 Ray Strode [halfline] 2013-11-20 19:07:57 UTC
pretty sure kinit is a red herring, so removing it from subject.

Comment 11 Debarshi Ray 2013-11-21 01:05:11 UTC
What is your sqlite version?

If you have sqlite-3.8.1-1.fc20, can you downgrade to sqlite-3.8.0.2-4.fc20 (http://koji.fedoraproject.org/koji/buildinfo?buildID=467393)?

Comment 12 Petr Schindler 2013-11-21 10:48:30 UTC
OK. I downgraded to 3.8.0.2-4 as suggested. Only problem now is that I don't know how to reproduce this, so it can take some time to verify.

Comment 13 Kamil Páral 2013-11-21 12:11:28 UTC
(In reply to Debarshi Ray from comment #11)
> What is your sqlite version?

To reply to this question: Petr's sqlite version was sqlite-3.8.1-1.fc20

Comment 14 Debarshi Ray 2013-11-22 13:52:34 UTC
(In reply to Petr Schindler from comment #12)
> OK. I downgraded to 3.8.0.2-4 as suggested. Only problem now is that I don't
> know how to reproduce this, so it can take some time to verify.

You could try searching for files either via the GNOME Shell overview or from inside Nautilus. That should exercise the same code paths.

Comment 15 Petr Schindler 2013-11-25 12:05:48 UTC
Created attachment 828607 [details]
traces with sqlite-3.8.0.2-4

I reproduced this bug even with sqlite-3.8.0.2-4. Again, I'm not sure what triggered it.

Comment 16 Petr Schindler 2013-11-25 12:10:31 UTC
In comment 3 I wrote that a new nautilus window can't be opened. But I'm able to open a new nautilus window now, so that was probably another problem. I'm now more +1 FE and -1 blocker.

Comment 17 Theodore Lee 2013-11-25 12:15:56 UTC
FWIW, I'm still seeing instances of this bug correlate directly with an inability to open new Nautilus windows on Fedora 19. I have not checked this after the last round of updates on my Fedora 20 beta install (I tend to jump straight to killall nautilus), so it's possible it may be different there.

Comment 18 Theodore Lee 2013-11-26 00:50:05 UTC
After some testing, I can confirm that I am now able to open new Nautilus windows on Fedora 20, even while this bug is occurring, but not on Fedora 19. On both systems it also seems to stop file search results from turning up in gnome-shell.

Comment 19 Tim Waugh 2013-11-27 16:44:11 UTC
I have a similar backtrace to that in comment #9:

Thread 11 (Thread 0x7f8aecb57700 (LWP 10288)):
#0  0x000000319f0eaa8d in poll () at ../sysdeps/unix/syscall-template.S:81
#1  0x00000039424495b4 in g_main_context_poll (priority=2147483647, n_fds=3, 
    fds=0x7f8ae80010c0, timeout=-1, context=0x2133640) at gmain.c:4007
#2  g_main_context_iterate (context=0x2133640, block=block@entry=1, 
    dispatch=dispatch@entry=1, self=<optimized out>) at gmain.c:3708
#3  0x0000003942449a3a in g_main_loop_run (loop=0x21335d0) at gmain.c:3907
#4  0x00000039430d0376 in gdbus_shared_thread_func (user_data=0x2133610)
    at gdbusprivate.c:278
#5  0x000000394246ea45 in g_thread_proxy (data=0x2111320) at gthread.c:798
#6  0x000000319f807f33 in start_thread (arg=0x7f8aecb57700)
    at pthread_create.c:309
#7  0x000000319f0f4ead in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 10 (Thread 0x7f8ae75b3700 (LWP 10312)):
#0  sqlite3VdbeExec (p=p@entry=0x7f8ad8006ca8) at sqlite3.c:68536
#1  0x0000003551e76acf in sqlite3Step (p=0x7f8ad8006ca8) at sqlite3.c:64418
#2  sqlite3_step (pStmt=<optimized out>) at sqlite3.c:64484
#3  0x0000003c09a641af in function_weights ()
   from /usr/lib64/tracker-0.16/libtracker-data.so.0
#4  0x0000003551e708d2 in sqlite3VdbeExec (p=p@entry=0x24411c8)
    at sqlite3.c:67601
#5  0x0000003551e76acf in sqlite3Step (p=0x24411c8) at sqlite3.c:64418
#6  sqlite3_step (pStmt=<optimized out>) at sqlite3.c:64484
#7  0x0000003c09a56c40 in db_cursor_iter_next ()
   from /usr/lib64/tracker-0.16/libtracker-data.so.0
#8  0x0000003c09a57397 in tracker_db_cursor_iter_next_thread ()
   from /usr/lib64/tracker-0.16/libtracker-data.so.0
#9  0x000000394306f67f in run_in_thread (job=<optimized out>, 
    c=0x7f8ae80382b0, _data=0x2293290) at gsimpleasyncresult.c:871
#10 0x000000394305cf36 in io_job_thread (task=<optimized out>, 
    source_object=<optimized out>, task_data=0x242dd00, 
    cancellable=<optimized out>) at gioscheduler.c:89
#11 0x000000394307c865 in g_task_thread_pool_thread (thread_data=0x2296100, 
    pool_data=<optimized out>) at gtask.c:1245
#12 0x000000394246f406 in g_thread_pool_thread_proxy (data=<optimized out>)
    at gthreadpool.c:309
#13 0x000000394246ea45 in g_thread_proxy (data=0x22bd230) at gthread.c:798
#14 0x000000319f807f33 in start_thread (arg=0x7f8ae75b3700)
    at pthread_create.c:309
#15 0x000000319f0f4ead in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 9 (Thread 0x7f8ae6ba6700 (LWP 10313)):
#0  0x000000319f0eaa8d in poll () at ../sysdeps/unix/syscall-template.S:81
#1  0x00000039424495b4 in g_main_context_poll (priority=2147483647, n_fds=1, 
    fds=0x7f8ae00010c0, timeout=-1, context=0x21f4450) at gmain.c:4007
#2  g_main_context_iterate (context=context@entry=0x21f4450, 
    block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>)
    at gmain.c:3708
#3  0x00000039424496dc in g_main_context_iteration (context=0x21f4450, 
    may_block=1) at gmain.c:3774
#4  0x00007f8ae6badb7d in dconf_gdbus_worker_thread ()
   from /usr/lib64/gio/modules/libdconfsettings.so
#5  0x000000394246ea45 in g_thread_proxy (data=0x22bd320) at gthread.c:798
#6  0x000000319f807f33 in start_thread (arg=0x7f8ae6ba6700)
    at pthread_create.c:309
#7  0x000000319f0f4ead in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 8 (Thread 0x7f8ae5752700 (LWP 10314)):
#0  0x000000319f0eaa8d in poll () at ../sysdeps/unix/syscall-template.S:81
#1  0x00000039424495b4 in g_main_context_poll (priority=2147483647, n_fds=2, 
    fds=0x7f8adc0008c0, timeout=-1, context=0x22df030) at gmain.c:4007
#2  g_main_context_iterate (context=context@entry=0x22df030, 
    block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>)
    at gmain.c:3708
#3  0x00000039424496dc in g_main_context_iteration (context=0x22df030, 
    may_block=may_block@entry=1) at gmain.c:3774
#4  0x0000003942449729 in glib_worker_main (data=<optimized out>)
    at gmain.c:5473
#5  0x000000394246ea45 in g_thread_proxy (data=0x22ceb70) at gthread.c:798
#6  0x000000319f807f33 in start_thread (arg=0x7f8ae5752700)
    at pthread_create.c:309
#7  0x000000319f0f4ead in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 7 (Thread 0x7f8ad7fff700 (LWP 10319)):
#0  __lll_lock_wait ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x000000319f80a1af in _L_lock_1026 () from /lib64/libpthread.so.0
#2  0x000000319f80a151 in __GI___pthread_mutex_lock (mutex=0x22924c0)
    at ../nptl/pthread_mutex_lock.c:134
#3  0x000000394248a5a1 in g_mutex_lock (mutex=<optimized out>)
    at gthread-posix.c:213
#4  0x0000003c09e1691f in tracker_direct_connection_real_query ()
   from /lib64/libtracker-sparql-0.16.so.0
#5  0x0000003c09e15c2f in ____lambda2__gio_scheduler_job_func ()
   from /lib64/libtracker-sparql-0.16.so.0
#6  0x000000394305cf36 in io_job_thread (task=<optimized out>, 
    source_object=<optimized out>, task_data=0x242ddc0, 
    cancellable=<optimized out>) at gioscheduler.c:89
#7  0x000000394307c865 in g_task_thread_pool_thread (thread_data=0x227c140, 
    pool_data=<optimized out>) at gtask.c:1245
#8  0x000000394246f406 in g_thread_pool_thread_proxy (data=<optimized out>)
    at gthreadpool.c:309
#9  0x000000394246ea45 in g_thread_proxy (data=0x2292370) at gthread.c:798
#10 0x000000319f807f33 in start_thread (arg=0x7f8ad7fff700)
    at pthread_create.c:309
#11 0x000000319f0f4ead in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 6 (Thread 0x7f8ad77fe700 (LWP 10320)):
#0  __lll_lock_wait ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x000000319f80a1af in _L_lock_1026 () from /lib64/libpthread.so.0
#2  0x000000319f80a151 in __GI___pthread_mutex_lock (mutex=0x22924c0)
    at ../nptl/pthread_mutex_lock.c:134
#3  0x000000394248a5a1 in g_mutex_lock (mutex=<optimized out>)
    at gthread-posix.c:213
#4  0x0000003c09e1691f in tracker_direct_connection_real_query ()
   from /lib64/libtracker-sparql-0.16.so.0
#5  0x0000003c09e15c2f in ____lambda2__gio_scheduler_job_func ()
   from /lib64/libtracker-sparql-0.16.so.0
#6  0x000000394305cf36 in io_job_thread (task=<optimized out>, 
    source_object=<optimized out>, task_data=0x240d380, 
    cancellable=<optimized out>) at gioscheduler.c:89
#7  0x000000394307c865 in g_task_thread_pool_thread (thread_data=0x227c070, 
    pool_data=<optimized out>) at gtask.c:1245
#8  0x000000394246f406 in g_thread_pool_thread_proxy (data=<optimized out>)
    at gthreadpool.c:309
#9  0x000000394246ea45 in g_thread_proxy (data=0x22e7b20) at gthread.c:798
#10 0x000000319f807f33 in start_thread (arg=0x7f8ad77fe700)
    at pthread_create.c:309
#11 0x000000319f0f4ead in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 5 (Thread 0x7f8ae4b27700 (LWP 11871)):
#0  __lll_lock_wait ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x000000319f80a1af in _L_lock_1026 () from /lib64/libpthread.so.0
#2  0x000000319f80a151 in __GI___pthread_mutex_lock (mutex=0x22924c0)
    at ../nptl/pthread_mutex_lock.c:134
#3  0x000000394248a5a1 in g_mutex_lock (mutex=<optimized out>)
    at gthread-posix.c:213
#4  0x0000003c09e1691f in tracker_direct_connection_real_query ()
   from /lib64/libtracker-sparql-0.16.so.0
#5  0x0000003c09e15c2f in ____lambda2__gio_scheduler_job_func ()
   from /lib64/libtracker-sparql-0.16.so.0
#6  0x000000394305cf36 in io_job_thread (task=<optimized out>, 
    source_object=<optimized out>, task_data=0x242dec0, 
    cancellable=<optimized out>) at gioscheduler.c:89
#7  0x000000394307c865 in g_task_thread_pool_thread (thread_data=0x22966b0, 
    pool_data=<optimized out>) at gtask.c:1245
#8  0x000000394246f406 in g_thread_pool_thread_proxy (data=<optimized out>)
    at gthreadpool.c:309
#9  0x000000394246ea45 in g_thread_proxy (data=0x22e79e0) at gthread.c:798
#10 0x000000319f807f33 in start_thread (arg=0x7f8ae4b27700)
    at pthread_create.c:309
#11 0x000000319f0f4ead in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 4 (Thread 0x7f8ad6ffd700 (LWP 11920)):
#0  __lll_lock_wait ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x000000319f80a1af in _L_lock_1026 () from /lib64/libpthread.so.0
#2  0x000000319f80a151 in __GI___pthread_mutex_lock (mutex=0x22924c0)
    at ../nptl/pthread_mutex_lock.c:134
#3  0x000000394248a5a1 in g_mutex_lock (mutex=<optimized out>)
    at gthread-posix.c:213
#4  0x0000003c09e1691f in tracker_direct_connection_real_query ()
   from /lib64/libtracker-sparql-0.16.so.0
#5  0x0000003c09e15c2f in ____lambda2__gio_scheduler_job_func ()
   from /lib64/libtracker-sparql-0.16.so.0
#6  0x000000394305cf36 in io_job_thread (task=<optimized out>, 
    source_object=<optimized out>, task_data=0x22b6800, 
    cancellable=<optimized out>) at gioscheduler.c:89
#7  0x000000394307c865 in g_task_thread_pool_thread (thread_data=0x22965e0, 
    pool_data=<optimized out>) at gtask.c:1245
#8  0x000000394246f406 in g_thread_pool_thread_proxy (data=<optimized out>)
    at gthreadpool.c:309
#9  0x000000394246ea45 in g_thread_proxy (data=0x7f8ae0002450) at gthread.c:798
#10 0x000000319f807f33 in start_thread (arg=0x7f8ad6ffd700)
    at pthread_create.c:309
#11 0x000000319f0f4ead in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 3 (Thread 0x7f8ad67fc700 (LWP 11955)):
#0  __lll_lock_wait ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x000000319f80a1af in _L_lock_1026 () from /lib64/libpthread.so.0
#2  0x000000319f80a151 in __GI___pthread_mutex_lock (mutex=0x22924c0)
    at ../nptl/pthread_mutex_lock.c:134
#3  0x000000394248a5a1 in g_mutex_lock (mutex=<optimized out>)
    at gthread-posix.c:213
#4  0x0000003c09e1691f in tracker_direct_connection_real_query ()
   from /lib64/libtracker-sparql-0.16.so.0
#5  0x0000003c09e15c2f in ____lambda2__gio_scheduler_job_func ()
   from /lib64/libtracker-sparql-0.16.so.0
#6  0x000000394305cf36 in io_job_thread (task=<optimized out>, 
    source_object=<optimized out>, task_data=0x22b6900, 
    cancellable=<optimized out>) at gioscheduler.c:89
#7  0x000000394307c865 in g_task_thread_pool_thread (thread_data=0x227c210, 
    pool_data=<optimized out>) at gtask.c:1245
#8  0x000000394246f406 in g_thread_pool_thread_proxy (data=<optimized out>)
    at gthreadpool.c:309
#9  0x000000394246ea45 in g_thread_proxy (data=0x7f8ac00124f0) at gthread.c:798
#10 0x000000319f807f33 in start_thread (arg=0x7f8ad67fc700)
    at pthread_create.c:309
#11 0x000000319f0f4ead in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 2 (Thread 0x7f8ad5ffb700 (LWP 11957)):
#0  __lll_lock_wait ()
    at ../nptl/sysdeps/unix/sysv/linux/x86_64/lowlevellock.S:135
#1  0x000000319f80a1af in _L_lock_1026 () from /lib64/libpthread.so.0
#2  0x000000319f80a151 in __GI___pthread_mutex_lock (mutex=0x22924c0)
    at ../nptl/pthread_mutex_lock.c:134
#3  0x000000394248a5a1 in g_mutex_lock (mutex=<optimized out>)
    at gthread-posix.c:213
#4  0x0000003c09e1691f in tracker_direct_connection_real_query ()
   from /lib64/libtracker-sparql-0.16.so.0
#5  0x0000003c09e15c2f in ____lambda2__gio_scheduler_job_func ()
   from /lib64/libtracker-sparql-0.16.so.0
#6  0x000000394305cf36 in io_job_thread (task=<optimized out>, 
    source_object=<optimized out>, task_data=0x22b6a00, 
    cancellable=<optimized out>) at gioscheduler.c:89
#7  0x000000394307c865 in g_task_thread_pool_thread (thread_data=0x227c2e0, 
    pool_data=<optimized out>) at gtask.c:1245
#8  0x000000394246f406 in g_thread_pool_thread_proxy (data=<optimized out>)
    at gthreadpool.c:309
#9  0x000000394246ea45 in g_thread_proxy (data=0x7f8ac0012630) at gthread.c:798
#10 0x000000319f807f33 in start_thread (arg=0x7f8ad5ffb700)
    at pthread_create.c:309
#11 0x000000319f0f4ead in clone ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

Thread 1 (Thread 0x7f8af30c1a40 (LWP 10278)):
#0  0x000000319f0eaa8d in poll () at ../sysdeps/unix/syscall-template.S:81
#1  0x00000039424495b4 in g_main_context_poll (priority=2147483647, n_fds=4, 
    fds=0x20f31d0, timeout=-1, context=0x2122350) at gmain.c:4007
#2  g_main_context_iterate (context=context@entry=0x2122350, 
    block=block@entry=1, dispatch=dispatch@entry=1, self=<optimized out>)
    at gmain.c:3708
#3  0x00000039424496dc in g_main_context_iteration (context=0x2122350, 
    context@entry=0x0, may_block=may_block@entry=1) at gmain.c:3774
#4  0x000000394309c954 in g_application_run (application=0x20ef1d0, argc=2, 
    argv=0x7ffff8814668) at gapplication.c:1635
#5  0x000000000042b27b in main ()

So I took a closer look at thread 10 and ran 'finish' on each function up the call chain until I found one that didn't return:

(gdb) thread 10
[Switching to thread 10 (Thread 0x7f8ae75b3700 (LWP 10312))]
#0  sqlite3VdbeExec (p=p@entry=0x7f8ad8006ca8) at sqlite3.c:68536
68536	        u.ao.aOffset[u.ao.i] = u.ao.offset;
(gdb) finish
Run till exit from #0  sqlite3VdbeExec (p=p@entry=0x7f8ad8006ca8)
    at sqlite3.c:68536
sqlite3Step (p=0x7f8ad8006ca8) at sqlite3.c:64419
64419	    db->nVdbeExec--;
Value returned is $1 = 1
(gdb) finish
Run till exit from #0  sqlite3Step (p=0x7f8ad8006ca8) at sqlite3.c:64419
64484	  while( (rc = sqlite3Step(v))==SQLITE_SCHEMA
(gdb) finish
Run till exit from #0  sqlite3_step (pStmt=<optimized out>) at sqlite3.c:64484
0x0000003c09a641af in function_weights ()
   from /usr/lib64/tracker-0.16/libtracker-data.so.0
Value returned is $2 = 9
(gdb) finish
Run till exit from #0  0x0000003c09a641af in function_weights ()
   from /usr/lib64/tracker-0.16/libtracker-data.so.0

Looks like function_weights() is the one that's in a loop.

Comment 20 Tim Waugh 2013-11-27 17:02:08 UTC
That loop looks like this:

	while ((rc = sqlite3_step (stmt)) != SQLITE_DONE) {
		if (rc == SQLITE_ROW) {
			guint weight;
			weight = sqlite3_column_int (stmt, 0);
			g_array_append_val (weight_array, weight);
		}
	}

In this instance, rc == 9 == SQLITE_INTERRUPT.

Changing component.

Comment 21 Tim Waugh 2013-11-27 17:19:47 UTC
Created attachment 829845 [details]
tracker-busy-loop.patch (untested)

Maybe something like this would fix this particular case? There are other similar patterns elsewhere in (at least) that file.
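
To spell out the general idea (a sketch only, not the attached patch; the exact error handling is my guess, and stmt / rc / weight_array are as in the fragment quoted in comment #20):

	while ((rc = sqlite3_step (stmt)) != SQLITE_DONE) {
		if (rc == SQLITE_ROW) {
			guint weight;
			weight = sqlite3_column_int (stmt, 0);
			g_array_append_val (weight_array, weight);
		} else {
			/* e.g. SQLITE_INTERRUPT or SQLITE_ERROR: give up
			 * instead of calling sqlite3_step() again forever */
			break;
		}
	}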

Comment 22 Tim Waugh 2013-11-27 17:25:14 UTC
Created attachment 829847 [details]
tracker-busy-loop.patch (untested)

Actually I can only find that pattern in one other place. So here's an untested patch which might fix it. (I'd test it right now if I knew how to trigger this bug...)

Comment 23 Adam Williamson 2013-11-27 18:13:04 UTC
Discussed at 2013-11-27 blocker review meeting: http://meetbot.fedoraproject.org/fedora-blocker-review/2013-11-27/f20-blocker-review-3.2013-11-27-17.01.log.txt . This is clearly an irritating bug if you hit it, and multiple people have hit it. But the intent of the cited criterion is more about checking that we're not shipping absolutely broken apps - like, 'there's a bug in nautilus which means it'll never launch' - rather than a more complex scenario like this. We don't think the practical consequence of this bug - sometimes, you can't launch nautilus unless you kill the stuck process - is really so terrible that it should block the release. So this is rejected as a release blocker but accepted as a freeze exception issue; it would be very nice to get it fixed. If at all possible we'd like the fix by the end of the week, if we're going to pull it through the freeze for the final release.

Comment 24 Michael Catanzaro 2013-12-03 01:56:36 UTC
Created attachment 831855 [details]
simpler

I don't know how to trigger it either... what I find interesting is that I don't think Nautilus actually uses full text search at all?

(In reply to Tim Waugh from comment #22)
> Created attachment 829847 [details]
> tracker-busy-loop.patch (untested)
> 
> Actually I can only find that pattern in one other place. So here's an
> untested patch which might fix it. (I'd test it right now if I knew how to
> trigger this bug...)

I think you can just change the != SQLITE_DONE in the loop to == SQLITE_ROW? (Don't trust me, but it looks right and is simpler.)

Comment 25 Michael Catanzaro 2013-12-03 02:23:34 UTC
Created attachment 831857 [details]
Avoid introducing a memory leak

Now that we don't loop forever, we'll leak the sqlite3_stmt. You never need to protect calls to sqlite3_finalize anyway.
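
Putting this together with comment #24, the loop could end up looking roughly like this (a sketch only, not the exact attachments):

	while (sqlite3_step (stmt) == SQLITE_ROW) {
		guint weight;
		weight = sqlite3_column_int (stmt, 0);
		g_array_append_val (weight_array, weight);
	}

	/* safe to call unconditionally, and needed even when the loop
	 * ended on an error rather than SQLITE_DONE; otherwise the
	 * statement is leaked now that we no longer loop forever */
	sqlite3_finalize (stmt);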

Comment 26 Tim Waugh 2013-12-03 11:07:56 UTC
(In reply to Michael Catanzaro from comment #24)
> I think you can just change the != SQLITE_DONE in the loop to == SQLITE_ROW?
> (Don't trust me, but it looks right and is simpler.)

I copied the pattern from other locations in the code. I had the impression from them that sqlite3_step() might return SQLITE_OK, but I really don't know the sqlite3 API at all.

Comment 27 Carlos Garnacho 2013-12-03 16:02:10 UTC
Those patches were going roughly in the right direction, but still a bit incomplete, mainly with regard to subsequent runs, where FTS would run on tampered data. I pushed to tracker master a more complete fix that attempts to recover on later runs, and still respects SQLITE_BUSY.

Still, it would be interesting to know where sqlite3_interrupt() gets called, and how it ends up affecting the tracker database connection; Tracker does avoid calling it itself.
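
If someone can still reproduce this, one way to catch the caller (just a suggestion, untested here) would be to attach gdb to nautilus before the bug triggers and break on that function:

gdb -p $(pidof nautilus)
(gdb) break sqlite3_interrupt
(gdb) continue

and then run 'bt' once the breakpoint is hit, to see where the call comes from.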

Comment 28 Fedora Update System 2013-12-04 11:52:20 UTC
thunderbird-24.1.0-2.fc20, tracker-0.16.4-2.fc20 has been submitted as an update for Fedora 20.
https://admin.fedoraproject.org/updates/tracker-0.16.4-2.fc20,thunderbird-24.1.0-2.fc20

Comment 29 Fedora Update System 2013-12-05 00:38:12 UTC
Package thunderbird-24.1.0-2.fc20, tracker-0.16.4-2.fc20:
* should fix your issue,
* was pushed to the Fedora 20 testing repository,
* should be available at your local mirror within two days.
Update it with:
# su -c 'yum update --enablerepo=updates-testing thunderbird-24.1.0-2.fc20 tracker-0.16.4-2.fc20'
as soon as you are able to.
Please go to the following url:
https://admin.fedoraproject.org/updates/FEDORA-2013-22705/tracker-0.16.4-2.fc20,thunderbird-24.1.0-2.fc20
then log in and leave karma (feedback).

Comment 30 Fedora Update System 2013-12-06 05:55:18 UTC
thunderbird-24.1.0-2.fc20, tracker-0.16.4-2.fc20 has been pushed to the Fedora 20 stable repository.  If problems still persist, please make note of it in this bug report.

Comment 31 Fedora Update System 2013-12-27 17:27:37 UTC
tracker-0.16.2-3.fc19 has been submitted as an update for Fedora 19.
https://admin.fedoraproject.org/updates/tracker-0.16.2-3.fc19

Comment 32 Fedora Update System 2014-01-15 06:03:05 UTC
tracker-0.16.2-3.fc19 has been pushed to the Fedora 19 stable repository.  If problems still persist, please make note of it in this bug report.

