Description of problem:
The machine becomes barely usable; the disk is spinning hard. This is exacerbated because I'm using LUKS for safety. I have gwibber running and am signed in to twitter, identi.ca and facebook. gwibber only shows a limited number of items on screen; it isn't giving me 1.5GB of lookback, so I'm not getting any value out of that 1.5GB in sqlite.

Version-Release number of selected component (if applicable):
$ rpm -q gwibber
gwibber-2.33.0-12.894bzr.fc13.noarch

How reproducible:
Yes, though not on command. Just leave gwibber running with active twitter twaddle coming to it.

Steps to Reproduce:
1. gwibber
2. sign up for twitter, identi.ca, facebook
3. let it sit and run
4. kill and restart as you see fit (the problem is the db, not the process)

Actual results:
sqlite gets big.

Expected results:
sqlite should be small(er) so that the scans are fast(er); scans should be done slowly so as not to thrash the machine.

Additional info:
The sqlite file size should be commensurate with what it actually needs to save (i.e. what's on the screen). I'm not getting 1.5GB worth of twitter lookback; the GUI shows a scrollable list of roughly 100 lines (what is the limit?).

See gwibber-service in this snapshot of top. Although this is a 4-core machine, it doesn't perform well when the disk is running hard and kcryptd is trying to keep up, so gwibber-service makes the desktop experience largely unusable during its rescan behavior.

top - 11:21:18 up 66 days, 15:47,  7 users,  load average: 1.72, 2.23, 1.44
Tasks: 407 total,   1 running, 405 sleeping,   0 stopped,   1 zombie
Cpu(s):  4.3%us, 13.3%sy,  0.0%ni, 53.8%id, 25.1%wa,  0.3%hi,  3.3%si,  0.0%st
Mem:   4112972k total,  4063272k used,    49700k free,    51620k buffers
Swap:  8290300k total,  1822888k used,  6467412k free,  1459004k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+   COMMAND
  456 root      20   0     0    0    0 S 49.4  0.0  1595:36   kcryptd
14947 wbaker    20   0 79120 8972 1900 D 11.3  0.2   139:36.88 gwibber-service
28342 wbaker    20   0  921m 439m 9580 S  7.3 10.9    24:09.72 firefox
12724 wbaker    20   0  245m 8456 2980 S  3.6  0.2   171:04.18 npviewer.bin
21363 root      20   0  210m  70m  10m S  3.6  1.7  4601:53   Xorg
   49 root      20   0     0    0    0 S  2.3  0.0    88:10.63 kswapd0

$ find . -name .gvfs -prune -o -size +1G -ls 2>/dev/null
161575 1377668 -rw-r--r--   1 wbaker   wbaker   1410723840 Feb  5 10:58 ./.config/gwibber/gwibber.sqlite

$ ls -lhd ./.config/gwibber/gwibber.sqlite
-rw-r--r--. 1 wbaker wbaker 1.4G Feb  5 11:24 ./.config/gwibber/gwibber.sqlite

What is that gwibber-service doing out there? It is scanning the sqlite:

$ strace -p 14947
...etc...
read(12, "\r\0\0\0\4\1\203\0\3G\2\327\2A\1\203\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1024) = 1024
_llseek(12, 1175300096, [1175300096], SEEK_SET) = 0
read(12, "\0\21\203lterreceive0messagesM0\370\10You m"..., 1024) = 1024
_llseek(12, 1175305216, [1175305216], SEEK_SET) = 0
read(12, "\r\0\0\0\1\1\252\0\1\252\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1024) = 1024
_llseek(12, 1175307264, [1175307264], SEEK_SET) = 0
read(12, "\r\0\0\0\1\1\240\0\1\240\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1024) = 1024
_llseek(12, 1175309312, [1175309312], SEEK_SET) = 0
read(12, "\r\0\0\0\1\0\357\0\0\357\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1024) = 1024
_llseek(12, 1175311360, [1175311360], SEEK_SET) = 0
read(12, "\r\0\0\0\2\0l\0\0\334\0l\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1024) = 1024
_llseek(12, 1175312384, [1175312384], SEEK_SET) = 0
read(12, "\0\0\0\0terreceive0messagesM0\370DFitch"..., 1024) = 1024
_llseek(12, 1175314432, [1175314432], SEEK_SET) = 0
read(12, "\r\0\0\0\2\0\327\0\3&\0\327\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1024) = 1024
_llseek(12, 1175319552, [1175319552], SEEK_SET) = 0
read(12, "\r\0\0\0\2\0011\0\2J\0011\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 1024) = 1024
_llseek(12, 1175322624, [1175322624], SEEK_SET) = 0
read(12, "\r\0\0\0\1\0\243\0\0\243\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
...etc...
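For what it's worth, here is a quick way to confirm where the bulk of that 1.4G actually lives. This is a minimal sketch, not part of gwibber; it only uses the stock sqlite3 module and PRAGMAs that should be available on F13, but treat it as a starting point rather than a supported tool:

#!/usr/bin/env python
# check_gwibber_db.py -- hypothetical helper, not shipped with gwibber.
# Reports the database size as sqlite sees it and the row count of every
# table, to show that the history vastly exceeds the ~100 items displayed.
import os
import sqlite3

db_path = os.path.expanduser("~/.config/gwibber/gwibber.sqlite")
conn = sqlite3.connect(db_path)

# Total size = pages * page size.
page_count = conn.execute("PRAGMA page_count").fetchone()[0]
page_size = conn.execute("PRAGMA page_size").fetchone()[0]
print("database size: %d bytes" % (page_count * page_size))

# Row count per table, to see where the bulk of the data lives.
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
for (name,) in tables:
    count = conn.execute("SELECT count(*) FROM %s" % name).fetchone()[0]
    print("%-24s %d rows" % (name, count))

conn.close()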
Workaround ... kill the sqlite database:

$ mv .config/gwibber/gwibber.sqlite{,.orig}
$ gwibber

$ find .config/gwibber/ -ls
2492014      4 drwxrwxr-x   2 wbaker   wbaker        4096 Feb  6 19:58 .config/gwibber/
2492204    576 -rw-r--r--   1 wbaker   wbaker      584704 Feb  6 19:58 .config/gwibber/gwibber.sqlite
 161575 1382468 -rw-r--r--  1 wbaker   wbaker  1415636992 Feb  6 10:32 .config/gwibber/gwibber.sqlite.orig

This had become unlivable ... the disk was always running. Upon restart one has to re-enroll in the social networks, but the content shows up as usual, so there's no real loss of functionality.
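A gentler workaround might be to prune the history instead of blowing the whole file away, which would presumably keep the account enrollment intact. Something along these lines; this is only a sketch, I'm assuming the history lives in a "messages" table with an integer "time" column, so check the actual schema first and stop gwibber-service before running it:

#!/usr/bin/env python
# prune_gwibber_db.py -- hypothetical, assumes a "messages" table with an
# epoch-seconds "time" column; verify against the real gwibber schema.
import os
import sqlite3
import time

db_path = os.path.expanduser("~/.config/gwibber/gwibber.sqlite")
cutoff = time.time() - 14 * 24 * 3600   # keep two weeks of history

conn = sqlite3.connect(db_path)
conn.execute("DELETE FROM messages WHERE time < ?", (cutoff,))
conn.commit()
conn.execute("VACUUM")   # actually return the freed pages to the filesystem
conn.close()

Note that the VACUUM is what shrinks the file on disk; DELETE alone just leaves free pages inside the database, so the scans would stay slow.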
This message is a reminder that Fedora 13 is nearing its end of life. Approximately 30 (thirty) days from now Fedora will stop maintaining and issuing updates for Fedora 13. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as WONTFIX if it remains open with a Fedora 'version' of '13'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version prior to Fedora 13's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that we may not be able to fix it before Fedora 13 is end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, please change the 'version' of this bug to the applicable version. If you are unable to change the version, please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete. The process we are following is described here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Fedora 13 changed to end-of-life (EOL) status on 2011-06-25. Fedora 13 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result we are closing this bug. If you can reproduce this bug against a currently maintained version of Fedora please feel free to reopen this bug against that version. Thank you for reporting this bug and we are sorry it could not be fixed.