Bug 496680 - Amarok triggered oom-killer
Status: CLOSED WORKSFORME
Product: Fedora
Classification: Fedora
Component: amarok
Version: 10
Hardware: x86_64 Linux
Priority: low   Severity: medium
Assigned To: Aurelien Bompard
QA Contact: Fedora Extras Quality Assurance
Reported: 2009-04-20 13:02 EDT by gb
Modified: 2009-09-07 22:25 EDT
CC: 4 users

Doc Type: Bug Fix
Last Closed: 2009-09-07 22:25:07 EDT


Attachments
OOM logs (88.30 KB, application/octet-stream), attached 2009-04-20 13:02 EDT by gb
Amarok crash logs (18.74 KB, application/octet-stream), attached 2009-05-14 12:49 EDT by gb

Description gb 2009-04-20 13:02:14 EDT
Created attachment 340382
OOM logs

Description of problem: Opening Amarok to play a CD triggered the oom-killer. The system has 6 GB of physical RAM and 3 GB of swap space; at the time of the issue all processes (around 2 GB) were in physical RAM and nothing was in swap. Why did the system think it was out of memory when plenty was available?


Version-Release number of selected component (if applicable):

Linux dragon-lx 2.6.27.19-170.2.35.fc10.x86_64 #1 SMP Mon Feb 23 13:00:23 EST 2009 x86_64 x86_64 x86_64 GNU/Linux


 amarok -version
Qt: 4.4.3
KDE: 4.2.1 (KDE 4.2.1)
Amarok: 2.0.2



How reproducible:
Have not been able to reproduce it.

Steps to Reproduce:
1. Open Amarok; it appeared to be building its collection database.
  
Actual results:


Expected results: Use available memory correctly.


Additional info:

I was attempting to play a CD with Amarok when the display became unresponsive around 00:59:24. Instead of restarting the box I left it as it was; a few hours later I noticed the display was responsive again. The logs indicate the oom-killer process was invoked about 46 minutes later and started killing processes.



 Apr 19 00:59:24 dragon-lx pulseaudio[3402]: module-alsa-sink.c: Error opening PCM device front:0: Device or resource busy
Apr 19 01:45:27 dragon-lx kernel: hald-addon-stor invoked oom-killer: gfp_mask=0x1201d2, order=0, oomkilladj=0
Apr 19 01:46:52 dragon-lx kernel: Pid: 2356, comm: hald-addon-stor Not tainted 2.6.27.19-170.2.35.fc10.x86_64 #1
Apr 19 01:46:52 dragon-lx kernel:
Apr 19 01:46:52 dragon-lx kernel: Call Trace:
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8109183e>] oom_kill_process+0x5d/0x1f9
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff810729b2>] ? cpuset_mems_allowed_intersects+0x1c/0x1e
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff81091ba7>] ? badness+0x1a3/0x1e6
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff81091e5f>] out_of_memory+0x275/0x2b2
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8109538a>] __alloc_pages_internal+0x395/0x457
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff810b2af6>] alloc_pages_current+0xb9/0xc2
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8108f0ab>] __page_cache_alloc+0x67/0x6c
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff81096f65>] __do_page_cache_readahead+0x8d/0x163
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8109708e>] do_page_cache_readahead+0x53/0x60
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8108f61d>] filemap_fault+0x165/0x328
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8109e0df>] __do_fault+0x54/0x31c
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8104b71b>] ? lock_timer_base+0x26/0x4a
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8109ffe5>] handle_mm_fault+0x3b3/0x863
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff810cc798>] ? poll_freewait+0x39/0x89
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff813365a9>] do_page_fault+0x6a8/0xaf6
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8103a6d1>] ? default_wake_function+0x0/0xf
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8103a6d1>] ? default_wake_function+0x0/0xf
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8120e186>] ? sd_media_changed+0x18c/0x197
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8133396a>] ? _spin_lock+0x9/0xc
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8116877e>] ? kobject_put+0x47/0x4b
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff811f27de>] ? put_device+0x15/0x17
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff811ff235>] ? scsi_device_put+0x3d/0x42
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8120d77b>] ? scsi_disk_put+0x3a/0x3f
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff810d2d3c>] ? iput+0x2f/0x65
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff810e729f>] ? do_open+0x2e5/0x2f7
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8133396a>] ? _spin_lock+0x9/0xc
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff810d68c2>] ? mntput_no_expire+0x31/0x144
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff810e72b1>] ? blkdev_open+0x0/0x68
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff810bee6b>] ? __dentry_open+0x237/0x249
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff810bef33>] ? nameidata_to_filp+0x2e/0x40
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff810ca716>] ? getname+0x2c/0x1d2
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff81031103>] ? need_resched+0x1e/0x28
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff810ba34c>] ? __slab_free+0x83/0x28e
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff810ca6d5>] ? putname+0x30/0x39
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff810ca6d5>] ? putname+0x30/0x39
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff8105adda>] ? getnstimeofday+0x54/0xb0
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff810c792b>] ? path_put+0x1d/0x21
Apr 19 01:46:52 dragon-lx kernel: [<ffffffff81333dfa>] error_exit+0x0/0x75
Apr 19 01:46:52 dragon-lx kernel:
Apr 19 01:46:52 dragon-lx kernel: Mem-Info:
Apr 19 01:46:52 dragon-lx kernel: Node 0 DMA per-cpu:
Apr 19 01:46:52 dragon-lx kernel: CPU    0: hi:    0, btch:   1 usd:   0
Apr 19 01:46:52 dragon-lx kernel: CPU    1: hi:    0, btch:   1 usd:   0
Apr 19 01:46:52 dragon-lx kernel: Node 0 DMA32 per-cpu:
Apr 19 01:46:52 dragon-lx kernel: CPU    0: hi:  186, btch:  31 usd:  10
Apr 19 01:46:52 dragon-lx kernel: CPU    1: hi:  186, btch:  31 usd:  41
Apr 19 01:46:52 dragon-lx kernel: Node 0 Normal per-cpu:
Apr 19 01:46:52 dragon-lx kernel: CPU    0: hi:  186, btch:  31 usd: 170
Apr 19 01:46:52 dragon-lx kernel: CPU    1: hi:  186, btch:  31 usd: 170
Apr 19 01:46:52 dragon-lx kernel: Active:721922 inactive:725725 dirty:0 writeback:0 unstable:0
Apr 19 01:46:52 dragon-lx kernel: free:7377 slab:13874 mapped:249 pagetables:17582 bounce:0
Apr 19 01:46:52 dragon-lx kernel: Node 0 DMA free:8676kB min:8kB low:8kB high:12kB active:0kB inactive:0kB present:6832kB pages_scanned:0 all_unreclaimable? yes
Apr 19 01:46:52 dragon-lx kernel: lowmem_reserve[]: 0 3256 6033 6033
Apr 19 01:46:52 dragon-lx kernel: Node 0 DMA32 free:16320kB min:5360kB low:6700kB high:8040kB active:1552988kB inactive:1546920kB present:3334304kB pages_scanned:5043660 all_unreclaimable? yes
Apr 19 01:46:52 dragon-lx kernel: lowmem_reserve[]: 0 0 2777 2777
Apr 19 01:46:52 dragon-lx kernel: Node 0 Normal free:4512kB min:4572kB low:5712kB high:6856kB active:1334700kB inactive:1355980kB present:2844160kB pages_scanned:5527455 all_unreclaimable? yes
Apr 19 01:46:52 dragon-lx kernel: lowmem_reserve[]: 0 0 0 0
Apr 19 01:46:52 dragon-lx kernel: Node 0 DMA: 3*4kB 3*8kB 2*16kB 1*32kB 4*64kB 3*128kB 3*256kB 2*512kB 2*1024kB 0*2048kB 1*4096kB = 8676kB
Apr 19 01:46:52 dragon-lx kernel: Node 0 DMA32: 14*4kB 13*8kB 6*16kB 2*32kB 2*64kB 2*128kB 1*256kB 2*512kB 2*1024kB 2*2048kB 2*4096kB = 16320kB
Apr 19 01:46:52 dragon-lx kernel: Node 0 Normal: 94*4kB 9*8kB 2*16kB 0*32kB 1*64kB 1*128kB 1*256kB 1*512kB 1*1024kB 1*2048kB 0*4096kB = 4512kB
Apr 19 01:46:52 dragon-lx kernel: 2533 total pagecache pages
Apr 19 01:46:52 dragon-lx kernel: 0 pages in swap cache
Apr 19 01:46:52 dragon-lx kernel: Swap cache stats: add 819077, delete 819077, find 5257/8819
Apr 19 01:46:52 dragon-lx kernel: Free swap  = 0kB
Apr 19 01:46:52 dragon-lx kernel: Total swap = 3148732kB
Apr 19 01:46:52 dragon-lx kernel: 1572848 pages RAM
Apr 19 01:46:52 dragon-lx kernel: 40440 pages reserved
Apr 19 01:46:52 dragon-lx kernel: 34307 pages shared
Apr 19 01:46:52 dragon-lx kernel: 1494190 pages non-shared

(See attachment for the rest of the log file.)
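OOM events like the ones above can be pulled out of the full system log with a filter along these lines. This is only a sketch: /var/log/messages is the traditional Fedora location of the kernel log, and on newer systemd-based systems `journalctl -k` would replace reading the file directly.

```shell
#!/bin/sh
# Extract oom-killer invocations and the processes actually killed
# from a syslog-style kernel log (log path is an assumption; adjust
# for your distribution).
oom_summary() {
    grep -iE 'invoked oom-killer|Killed process|Out of memory' "$1"
}
```

Running `oom_summary /var/log/messages` here would surface the `hald-addon-stor invoked oom-killer` line and the eventual `Killed process` line without the pages of Mem-Info dump in between.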
Comment 1 Rex Dieter 2009-04-20 13:11:00 EDT
You may have been using amarok at the time, but the logs seem to point to errors in other areas, including pulseaudio/alsa and/or hald-addon-storage.
Comment 2 Rex Dieter 2009-05-04 11:17:32 EDT
Is this still reproducible? Or do you have further suspicions or evidence that amarok was to blame here?
Comment 3 gb 2009-05-05 03:11:13 EDT
I have not been able to reproduce the oom-killer, but I still believe it was Amarok that triggered it. I am noticing that the program either crashes or freezes when scanning a directory to build a collection, and it uses a lot of CPU while doing so.
Comment 4 gb 2009-05-14 12:49:50 EDT
Created attachment 344008
Amarok crash logs
Comment 5 gb 2009-05-14 12:54:39 EDT
I launched the application again today and my X session froze while it was performing a scan. After a few minutes the window returned to user control and a KCrash bug report was generated. My system logs indicate the OOM killer was triggered again.

May 14 10:53:53 dragon-lx kernel: __ratelimit: 251 callbacks suppressed
May 14 10:53:54 dragon-lx kernel: sendmail invoked oom-killer: gfp_mask=0x1201d2, order=0, oomkilladj=0
May 14 10:53:54 dragon-lx kernel: Pid: 2451, comm: sendmail Not tainted 2.6.27.15-78.2.23.fc9.x86_64 #1
May 14 10:53:54 dragon-lx kernel:
May 14 10:53:54 dragon-lx kernel: Call Trace:
May 14 10:53:54 dragon-lx kernel: [<ffffffff8108fbfe>] oom_kill_process+0x5d/0x1f9
May 14 10:53:54 dragon-lx kernel: [<ffffffff81070c26>] ? cpuset_mems_allowed_intersects+0x1c/0x1e
May 14 10:53:54 dragon-lx kernel: [<ffffffff8108ff67>] ? badness+0x1a3/0x1e6
May 14 10:53:54 dragon-lx kernel: [<ffffffff8109021f>] out_of_memory+0x275/0x2b2
May 14 10:53:54 dragon-lx kernel: [<ffffffff81093743>] __alloc_pages_internal+0x395/0x456
May 14 10:53:54 dragon-lx kernel: [<ffffffff810b10ba>] alloc_pages_current+0xb9/0xc2
May 14 10:53:54 dragon-lx kernel: [<ffffffff8108d46c>] __page_cache_alloc+0x67/0x6c
May 14 10:53:54 dragon-lx kernel: [<ffffffff81095315>] __do_page_cache_readahead+0x8d/0x163
May 14 10:53:54 dragon-lx kernel: [<ffffffff8109543e>] do_page_cache_readahead+0x53/0x60
May 14 10:53:54 dragon-lx kernel: [<ffffffff8108d9de>] filemap_fault+0x165/0x328
May 14 10:53:54 dragon-lx kernel: [<ffffffff8109c6c3>] __do_fault+0x54/0x31c
May 14 10:53:54 dragon-lx kernel: [<ffffffff8109e5f5>] handle_mm_fault+0x3df/0x890
May 14 10:53:54 dragon-lx kernel: [<ffffffff812c487e>] do_page_fault+0x66d/0xabb
May 14 10:53:54 dragon-lx kernel: [<ffffffff81103c41>] ? proc_lookup_de+0xca/0xe7
May 14 10:53:54 dragon-lx kernel: [<ffffffff81103c74>] ? proc_lookup+0x16/0x1a
May 14 10:53:54 dragon-lx kernel: [<ffffffff810ff6f8>] ? proc_root_lookup+0x1b/0x3e
May 14 10:53:54 dragon-lx kernel: [<ffffffff81099c88>] ? zone_statistics+0x62/0x67
May 14 10:53:54 dragon-lx kernel: [<ffffffff81092f27>] ? get_page_from_freelist+0x482/0x6d1
May 14 10:53:54 dragon-lx kernel: [<ffffffff81151c63>] ? sprintf+0x68/0x6a
May 14 10:53:54 dragon-lx kernel: [<ffffffff810b794e>] ? virt_to_head_page+0x31/0x41
May 14 10:53:55 dragon-lx kernel: [<ffffffff810fec11>] ? proc_destroy_inode+0x14/0x16
May 14 10:53:55 dragon-lx kernel: [<ffffffff810d1d1c>] ? destroy_inode+0x3a/0x4f
May 14 10:53:55 dragon-lx kernel: [<ffffffff810ff5d4>] ? proc_delete_inode+0x0/0x60
May 14 10:53:55 dragon-lx kernel: [<ffffffff810ff5d4>] ? proc_delete_inode+0x0/0x60
May 14 10:53:55 dragon-lx kernel: [<ffffffff810d2286>] ? generic_delete_inode+0x147/0x150
May 14 10:53:55 dragon-lx kernel: [<ffffffff810d1453>] ? iput+0x61/0x65
May 14 10:53:55 dragon-lx kernel: [<ffffffff810ceb26>] ? d_free+0x60/0x65
May 14 10:53:55 dragon-lx kernel: [<ffffffff810d4fba>] ? mntput_no_expire+0x31/0x144
May 14 10:53:55 dragon-lx kernel: [<ffffffff810bfbb2>] ? __fput+0x161/0x16d
May 14 10:53:55 dragon-lx kernel: [<ffffffff810c5f8b>] ? path_put+0x1d/0x21
May 14 10:53:55 dragon-lx kernel: [<ffffffff812c210a>] error_exit+0x0/0x75
May 14 10:53:55 dragon-lx kernel:
May 14 10:53:55 dragon-lx kernel: Mem-Info:
May 14 10:53:55 dragon-lx kernel: Node 0 DMA per-cpu:
May 14 10:53:55 dragon-lx kernel: CPU    0: hi:    0, btch:   1 usd:   0
May 14 10:53:55 dragon-lx kernel: CPU    1: hi:    0, btch:   1 usd:   0
May 14 10:53:55 dragon-lx kernel: Node 0 DMA32 per-cpu:
May 14 10:53:55 dragon-lx kernel: CPU    0: hi:  186, btch:  31 usd: 180
May 14 10:53:55 dragon-lx kernel: CPU    1: hi:  186, btch:  31 usd: 166
May 14 10:53:55 dragon-lx kernel: Node 0 Normal per-cpu:
May 14 10:53:55 dragon-lx kernel: CPU    0: hi:  186, btch:  31 usd: 161
May 14 10:53:55 dragon-lx kernel: CPU    1: hi:  186, btch:  31 usd: 156
May 14 10:53:55 dragon-lx kernel: Active:733691 inactive:729636 dirty:0 writeback:0 unstable:0
May 14 10:53:55 dragon-lx kernel: free:7560 slab:10424 mapped:714 pagetables:21706 bounce:0
May 14 10:53:55 dragon-lx kernel: Node 0 DMA free:9396kB min:12kB low:12kB high:16kB active:0kB inactive:0kB present:7588kB pages_scanned:0 all_unreclaimable? yes
May 14 10:53:55 dragon-lx kernel: lowmem_reserve[]: 0 3256 6033 6033
May 14 10:53:55 dragon-lx kernel: Node 0 DMA32 free:16376kB min:5360kB low:6700kB high:8040kB active:1600172kB inactive:1572768kB present:3334304kB pages_scanned:5354933 all_unreclaimable? yes
May 14 10:53:55 dragon-lx kernel: lowmem_reserve[]: 0 0 2777 2777
May 14 10:53:55 dragon-lx kernel: Node 0 Normal free:4468kB min:4572kB low:5712kB high:6856kB active:1334440kB inactive:1345888kB present:2844160kB pages_scanned:6825540 all_unreclaimable? yes
May 14 10:53:55 dragon-lx kernel: lowmem_reserve[]: 0 0 0 0
May 14 10:53:55 dragon-lx kernel: Node 0 DMA: 3*4kB 1*8kB 4*16kB 3*32kB 4*64kB 2*128kB 0*256kB 1*512kB 2*1024kB 1*2048kB 1*4096kB = 9396kB
May 14 10:53:55 dragon-lx kernel: Node 0 DMA32: 44*4kB 73*8kB 48*16kB 36*32kB 14*64kB 8*128kB 6*256kB 0*512kB 0*1024kB 1*2048kB 2*4096kB = 16376kB
May 14 10:53:56 dragon-lx kernel: Node 0 Normal: 25*4kB 8*8kB 3*16kB 1*32kB 0*64kB 1*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 4468kB
May 14 10:53:56 dragon-lx kernel: 2881 total pagecache pages
May 14 10:53:56 dragon-lx kernel: 1 pages in swap cache
May 14 10:53:56 dragon-lx kernel: Swap cache stats: add 2180111, delete 2180110, find 630858/734733
May 14 10:53:56 dragon-lx kernel: Free swap  = 0kB
May 14 10:53:56 dragon-lx kernel: Total swap = 3148732kB
May 14 10:53:56 dragon-lx kernel: 1572848 pages RAM
May 14 10:53:56 dragon-lx kernel: 48450 pages reserved
May 14 10:53:56 dragon-lx kernel: 3394 pages shared
May 14 10:53:56 dragon-lx kernel: 1513225 pages non-shared
May 14 10:53:56 dragon-lx kernel: Out of memory: kill process 22269 (amarokcollectio) score 1545205 or a child
May 14 10:53:56 dragon-lx kernel: Killed process 22269 (amarokcollectio)
Comment 6 Rex Dieter 2009-05-14 13:15:40 EDT
Clearly, you have a trouble-maker (or 2) in your collection.
Comment 7 gb 2009-05-14 13:54:10 EDT
(In reply to comment #6)
> Clearly, you have a trouble-maker (or 2) in your collection.  

Interesting point. If that is the case, how can I tell which file is the trouble-maker? Your advice is much appreciated.
Comment 8 Rex Dieter 2009-05-14 14:02:54 EDT
You can use a bisection search: cut your collection in half, run amarokcollectionscanner by hand on each half to narrow down the problem, and repeat until you find it.
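The bisection search from comment 8 could be sketched in shell as below. `check_batch` is a hypothetical stand-in: in real use it would run amarokcollectionscanner on the given batch and report whether the scan hung, crashed, or triggered the OOM killer; here it simply flags a file literally named "bad.mp3" so the sketch is self-contained.

```shell
#!/bin/sh
# Hypothetical check: returns nonzero when scanning this batch of files
# reproduces the problem. Replace the body with a real scanner run.
check_batch() {
    for f in "$@"; do
        if [ "$f" = "bad.mp3" ]; then
            return 1
        fi
    done
    return 0
}

# Repeatedly split the candidate list in half and keep whichever half
# still reproduces the problem (assumes a single offender and filenames
# without embedded whitespace).
bisect() {
    while [ "$#" -gt 1 ]; do
        half=$(( $# / 2 ))
        firsthalf=""
        i=0
        for f in "$@"; do
            i=$(( i + 1 ))
            if [ "$i" -le "$half" ]; then
                firsthalf="$firsthalf $f"
            fi
        done
        if check_batch $firsthalf; then
            shift "$half"       # first half is clean: search the rest
        else
            set -- $firsthalf   # offender is in the first half
        fi
    done
    echo "$1"
}

bisect track01.mp3 track02.mp3 bad.mp3 track04.mp3 track05.mp3  # prints: bad.mp3
```

With N files this needs on the order of log2(N) scans instead of one per file, which matters when each scan takes minutes over a large collection.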
Comment 9 Steven M. Parrish 2009-07-05 08:32:27 EDT
Is this still an issue? Were you able to locate the offending file? Please update.

-- 
Steven M. Parrish - KDE Triage Master
                  - PackageKit Triager
Fedora Bugzappers volunteer triage team
https://fedoraproject.org/wiki/BugZappers
Comment 10 gb 2009-07-25 19:20:08 EDT
I was able to use the amarokcollectionscanner tool and pipe its output to a log file, where I identified the files it was complaining about. I am now using Amarok 2.1.1 with KDE 4.2.4 and am not seeing the problem. Feel free to close the ticket.

thank you
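The capture-and-filter workflow described in comment 10 might look like the sketch below. The scanner's exact flags and message wording are assumptions, not verified against the Amarok 2.0 tool, so the testable part is written generically.

```shell
#!/bin/sh
# The real invocation would be something along these lines (hypothetical):
#
#   amarokcollectionscanner ~/Music > scan.log 2>&1
#   grep -iE 'error|cannot|fail' scan.log
#
# Generic version of the same pattern: run a command, keep everything it
# prints (stdout and stderr), then filter the log for file complaints.
scan_and_filter() {
    "$@" > scan.log 2>&1 || true   # keep the log even if the command crashes
    grep -iE 'error|cannot|fail' scan.log
}
```

Redirecting stderr as well as stdout matters here: scanner complaints about unreadable files typically go to stderr and would otherwise be lost when the output is piped to a file.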
