Bug 519064 - Lots of page allocation failure. order:3, mode:0x4020 messages
Summary: Lots of page allocation failure. order:3, mode:0x4020 messages
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 11
Hardware: i686
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-08-24 19:31 UTC by Orion Poplawski
Modified: 2010-06-28 14:17 UTC (History)
8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-06-28 14:17:10 UTC


Attachments


Links:
Linux Kernel Bugzilla 13648 (last updated 2019-06-27 12:47:03 UTC)

Description Orion Poplawski 2009-08-24 19:31:14 UTC
Description of problem:

I'm seeing lots of the following:

Aug 23 04:02:10 saga kernel: md0_raid1: page allocation failure. order:3, mode:0x4020
Aug 23 04:02:18 saga kernel: kswapd0: page allocation failure. order:3, mode:0x4020
Aug 23 04:02:18 saga kernel: ksoftirqd/1: page allocation failure. order:3, mode:0x4020
Aug 24 06:18:02 saga kernel: rsyslogd: page allocation failure. order:3, mode:0x4020
Aug 24 06:18:02 saga kernel: kswapd0: page allocation failure. order:3, mode:0x4020
Aug 24 06:18:02 saga kernel: nfsd: page allocation failure. order:3, mode:0x4020
Aug 24 06:18:02 saga kernel: swapper: page allocation failure. order:3, mode:0x4020

Version-Release number of selected component (if applicable):
2.6.30.5-28.rc2.fc11.i686.PAE
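For context, the failing request in these messages can be decoded as follows. The GFP flag values below are taken from 2.6.30-era include/linux/gfp.h as an assumption, not from this report; check the exact kernel source if it matters:

```shell
# Decode "order:3, mode:0x4020" from the log above.
#   0x0020 = __GFP_HIGH (this is GFP_ATOMIC: atomic context, cannot sleep or reclaim)
#   0x4000 = __GFP_COMP (allocate the pages as a compound page)
order=3
echo "order:$order -> $(( (1 << order) * 4 )) kB of physically contiguous memory"
echo "mode:0x4020 -> GFP_ATOMIC | __GFP_COMP"
```

So the kernel is being asked for 32 kB of physically contiguous memory in atomic context, where it cannot wait or reclaim, which is why the request can fail under fragmentation even when plenty of memory is free overall.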

kswapd0: page allocation failure. order:3, mode:0x4020          
Pid: 44, comm: kswapd0 Not tainted 2.6.30.5-28.rc2.fc11.i686.PAE #1         
Call Trace:                                                        
 [<c07d9865>] ? printk+0x22/0x35                     
 [<c04afc14>] __alloc_pages_internal+0x382/0x3ae                              
 [<c04afcb1>] __get_free_pages+0x28/0x49                        
 [<c04d4053>] __kmalloc_track_caller+0x45/0x163                         
 [<c073ab36>] ? __netdev_alloc_skb+0x28/0x54                                
 [<c073a08f>] __alloc_skb+0x59/0x126                              
 [<c073ab36>] __netdev_alloc_skb+0x28/0x54                 
 [<f88de5df>] e1000_alloc_rx_buffers+0x8b/0x2a2 [e1000]  
 [<f88de51b>] e1000_clean_rx_irq+0x378/0x3b1 [e1000]   
 [<f88dd526>] e1000_clean+0x28b/0x3fb [e1000]           
 [<c0741ce8>] net_rx_action+0xa7/0x19b                                  
 [<c0447008>] __do_softirq+0xaa/0x173                                    
 [<c044711a>] do_softirq+0x49/0x7f                                        
 [<c044726e>] irq_exit+0x48/0x87                                        
 [<c040bc34>] do_IRQ+0x7f/0xa4                                
 [<c040a6d5>] common_interrupt+0x35/0x3c                                      
 [<c048c089>] ? call_rcu+0x6a/0x85                                      
 [<c04eb1e2>] d_free+0x47/0x5c                                          
 [<c04eb23a>] d_kill+0x43/0x5b                           
 [<c04eb411>] __shrink_dcache_sb+0x1bf/0x252                          
 [<c04eb580>] shrink_dcache_memory+0xdc/0x169
 [<c04b561d>] shrink_slab+0xef/0x15a                                       
 [<c04b5a31>] kswapd+0x3a9/0x53e                                       
 [<c04b412b>] ? isolate_pages_global+0x0/0x1c3                                       
 [<c0457637>] ? autoremove_wake_function+0x0/0x55                                    
 [<c04b5688>] ? kswapd+0x0/0x53e                                      
 [<c0457245>] kthread+0x5a/0x90                                       
 [<c04571eb>] ? kthread+0x0/0x90                                        
 [<c040ab8f>] kernel_thread_helper+0x7/0x10                                          
Mem-Info:                    
DMA per-cpu:                                    
CPU    0: hi:    0, btch:   1 usd:   0            
CPU    1: hi:    0, btch:   1 usd:   0                 
CPU    2: hi:    0, btch:   1 usd:   0
CPU    3: hi:    0, btch:   1 usd:   0
Normal per-cpu:
CPU    0: hi:  186, btch:  31 usd: 157
CPU    1: hi:  186, btch:  31 usd: 167
CPU    2: hi:  186, btch:  31 usd:  65
CPU    3: hi:  186, btch:  31 usd: 146
HighMem per-cpu:
CPU    0: hi:  186, btch:  31 usd: 108
CPU    1: hi:  186, btch:  31 usd:  33
CPU    2: hi:  186, btch:  31 usd:  39
CPU    3: hi:  186, btch:  31 usd:  26
Active_anon:30464 active_file:80117 inactive_anon:10523
 inactive_file:257830 unevictable:0 dirty:842 writeback:0 unstable:0
 free:553479 slab:56771 mapped:5372 pagetables:1043 bounce:0
DMA free:3508kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:5268kB inactive_file:4kB unevictable:0kB present:15800kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 865 4046 4046
Normal free:6960kB min:3728kB low:4660kB high:5592kB active_anon:12kB inactive_anon:528kB active_file:218796kB inactive_file:218696kB unevictable:0kB present:885944kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 25451 25451
HighMem free:2203448kB min:512kB low:3940kB high:7368kB active_anon:121844kB inactive_anon:41564kB active_file:96404kB inactive_file:812620kB unevictable:0kB present:3257732kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 1*4kB 2*8kB 12*16kB 19*32kB 8*64kB 1*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 3508kB
Normal: 1026*4kB 277*8kB 20*16kB 4*32kB 0*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 7152kB
HighMem: 57*4kB 88*8kB 45*16kB 20345*32kB 11933*64kB 4266*128kB 676*256kB 64*512kB 22*1024kB 6*2048kB 0*4096kB = 2203092kB
338418 total pagecache pages
311 pages in swap cache
Swap cache stats: add 4442, delete 4131, find 2621/2659
Free swap  = 4176584kB
Total swap = 4192944kB
1081328 pages RAM
854018 pages HighMem
81539 pages reserved
352876 pages shared
118849 pages non-shared

Comment 1 Chuck Ebbert 2009-08-24 21:44:27 UTC
Same as bug 453010 reported against Fedora 9 and still not fixed upstream I guess?

Are you using jumbo frames?

Comment 2 Orion Poplawski 2009-08-24 21:54:25 UTC
(In reply to comment #1)
> Same as bug 453010 reported against Fedora 9 and still not fixed upstream I
> guess?

Sorry, I tried to find the old bug but couldn't for some reason.

> Are you using jumbo frames?  

Yes, on one of two interfaces.

Comment 3 Raman Gupta 2009-12-23 17:47:14 UTC
I have the same problem on Fedora 12 using the e1000 kernel module. All of the allocations include e1000_alloc_rx_buffers in the trace.

# rpm -q kernel
kernel-2.6.31.6-166.fc12.i686

I am also using jumbo frames. MTU size is 9000.
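As a side note, an interface's current MTU can be read straight from sysfs; `lo` is used below only so the path exists on any Linux box, and the interface name of the jumbo-frame e1000 port will differ per system:

```shell
# Read an interface's MTU from sysfs; substitute the e1000 port's name
# (e.g. eth0 -- hypothetical) for lo on an affected system.
cat /sys/class/net/lo/mtu
```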

Here is the network adapter output from lspci -vv:

04:0f.0 Ethernet controller: Intel Corporation 82541PI Gigabit Ethernet Controller (rev 05)
        Subsystem: Intel Corporation PRO/1000 GT Desktop Adapter
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV+ VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
        Status: Cap+ 66MHz+ UDF- FastB2B- ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 64 (63750ns min), Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 19
        Region 0: Memory at fe1c0000 (32-bit, non-prefetchable) [size=128K]
        Region 1: Memory at fe1a0000 (32-bit, non-prefetchable) [size=128K]
        Region 2: I/O ports at ec00 [size=64]
        Expansion ROM at 50020000 [disabled] [size=128K]
        Capabilities: [dc] Power Management version 2
                Flags: PMEClk- DSI+ D1- D2- AuxCurrent=0mA PME(D0+,D1-,D2-,D3hot+,D3cold+)
                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=1 PME-
        Capabilities: [e4] PCI-X non-bridge device
                Command: DPERE- ERO+ RBC=512 OST=1
                Status: Dev=00:00.0 64bit- 133MHz- SCD- USC- DC=simple DMMRBC=2048 DMOST=1 DMCRS=8 RSCEM- 266MHz- 533MHz-
        Kernel driver in use: e1000
        Kernel modules: e1000

Comment 4 Matt Piermarini 2009-12-24 17:53:47 UTC
I too see tons of these. It happens (mostly, from what I can tell) under heavy network I/O to nfsd and/or the iSCSI target daemon. The server runs RAID5 hosting NFS shares and iSCSI targets. Also using e1000 with jumbo frames.

Fedora 12.

# rpm -q kernel
kernel-2.6.31.6-166.fc12.x86_64

It happens in many different processes, and the failures are always order:3, mode:0x4020.


Here are a couple:

Dec 24 12:02:31 fs kernel: swapper: page allocation failure. order:3, mode:0x4020
Dec 24 12:02:31 fs kernel: Pid: 0, comm: swapper Not tainted 2.6.31.6-166.fc12.x86_64 #1
Dec 24 12:02:31 fs kernel: Call Trace:
Dec 24 12:02:31 fs kernel: <IRQ>  [<ffffffff810c7752>] __alloc_pages_nodemask+0x542/0x584
Dec 24 12:02:31 fs kernel: [<ffffffff810f2646>] alloc_pages_node+0x48/0x4a
Dec 24 12:02:31 fs kernel: [<ffffffff810f2672>] kmalloc_large_node+0x2a/0x67
Dec 24 12:02:31 fs kernel: [<ffffffff810f3902>] __kmalloc_node_track_caller+0x31/0x123
Dec 24 12:02:31 fs kernel: [<ffffffff81375ab2>] ? __netdev_alloc_skb+0x34/0x50
Dec 24 12:02:31 fs kernel: [<ffffffff81374e6c>] __alloc_skb+0x80/0x170
Dec 24 12:02:31 fs kernel: [<ffffffff81375ab2>] __netdev_alloc_skb+0x34/0x50
Dec 24 12:02:31 fs kernel: [<ffffffffa0151e79>] e1000_alloc_rx_buffers+0x8f/0x312 [e1000]
Dec 24 12:02:31 fs kernel: [<ffffffffa02cec3e>] ? br_handle_frame+0x194/0x1ae [bridge]
Dec 24 12:02:31 fs kernel: [<ffffffffa0151db0>] e1000_clean_rx_irq+0x3d8/0x412 [e1000]
Dec 24 12:02:31 fs kernel: [<ffffffffa0150fb3>] e1000_clean+0x309/0x47f [e1000]
Dec 24 12:02:31 fs kernel: [<ffffffff8137c4d0>] net_rx_action+0xaf/0x1c9
Dec 24 12:02:31 fs kernel: [<ffffffff81057614>] __do_softirq+0xdd/0x1ad
Dec 24 12:02:31 fs kernel: [<ffffffff81099bb1>] ? handle_IRQ_event+0x60/0x121
Dec 24 12:02:31 fs kernel: [<ffffffff81027e1e>] ? ack_apic_level+0x45/0xe0
Dec 24 12:02:31 fs kernel: [<ffffffff81012eac>] call_softirq+0x1c/0x30
Dec 24 12:02:31 fs kernel: [<ffffffff810143fb>] do_softirq+0x47/0x8d
Dec 24 12:02:31 fs kernel: [<ffffffff81057326>] irq_exit+0x44/0x86
Dec 24 12:02:31 fs kernel: [<ffffffff8141ecf5>] do_IRQ+0xa5/0xbc
Dec 24 12:02:31 fs kernel: [<ffffffff810126d3>] ret_from_intr+0x0/0x11
Dec 24 12:02:31 fs kernel: <EOI>  [<ffffffff8102e23d>] ? native_safe_halt+0xb/0xd
Dec 24 12:02:31 fs kernel: [<ffffffff8106aef6>] ? ktime_get_real+0x16/0x44
Dec 24 12:02:31 fs kernel: [<ffffffff81267672>] ? acpi_idle_do_entry+0x47/0x70
Dec 24 12:02:31 fs kernel: [<ffffffff8126770d>] ? acpi_idle_enter_c1+0x72/0xc1
Dec 24 12:02:31 fs kernel: [<ffffffff81354908>] ? menu_select+0xc0/0x177
Dec 24 12:02:31 fs kernel: [<ffffffff81353b7f>] ? cpuidle_idle_call+0x99/0xce
Dec 24 12:02:31 fs kernel: [<ffffffff81010c60>] ? cpu_idle+0xa6/0xe9
Dec 24 12:02:31 fs kernel: [<ffffffff8141489e>] ? start_secondary+0x1f3/0x234
Dec 24 12:02:31 fs kernel: Mem-Info:
Dec 24 12:02:31 fs kernel: Node 0 DMA per-cpu:
Dec 24 12:02:31 fs kernel: CPU    0: hi:    0, btch:   1 usd:   0
Dec 24 12:02:31 fs kernel: CPU    1: hi:    0, btch:   1 usd:   0
Dec 24 12:02:31 fs kernel: CPU    2: hi:    0, btch:   1 usd:   0
Dec 24 12:02:31 fs kernel: CPU    3: hi:    0, btch:   1 usd:   0
Dec 24 12:02:31 fs kernel: Node 0 DMA32 per-cpu:
Dec 24 12:02:31 fs kernel: CPU    0: hi:  186, btch:  31 usd: 162
Dec 24 12:02:31 fs kernel: CPU    1: hi:  186, btch:  31 usd: 181
Dec 24 12:02:31 fs kernel: CPU    2: hi:  186, btch:  31 usd: 159
Dec 24 12:02:31 fs kernel: CPU    3: hi:  186, btch:  31 usd:  70
Dec 24 12:02:31 fs kernel: Node 0 Normal per-cpu:
Dec 24 12:02:31 fs kernel: CPU    0: hi:  186, btch:  31 usd: 167
Dec 24 12:02:31 fs kernel: CPU    1: hi:  186, btch:  31 usd: 137
Dec 24 12:02:31 fs kernel: CPU    2: hi:  186, btch:  31 usd: 155
Dec 24 12:02:31 fs kernel: CPU    3: hi:  186, btch:  31 usd: 201
Dec 24 12:02:31 fs kernel: Active_anon:292675 active_file:687438 inactive_anon:52133
Dec 24 12:02:31 fs kernel: inactive_file:710807 unevictable:28791 dirty:0 writeback:7 unstable:0
Dec 24 12:02:31 fs kernel: free:66445 slab:77789 mapped:27642 pagetables:3170 bounce:0
Dec 24 12:02:31 fs kernel: Node 0 DMA free:15832kB min:120kB low:148kB high:180kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:15332kB pages_scanned:0 all_unreclaimable? yes
Dec 24 12:02:31 fs kernel: lowmem_reserve[]: 0 3511 8056 8056
Dec 24 12:02:31 fs kernel: Node 0 DMA32 free:204060kB min:28504kB low:35628kB high:42756kB active_anon:400316kB inactive_anon:80068kB active_file:1271300kB inactive_file:1334056kB unevictable:0kB present:3595360kB pages_scanned:0 all_unreclaimable? no
Dec 24 12:02:31 fs kernel: lowmem_reserve[]: 0 0 4545 4545
Dec 24 12:02:31 fs kernel: Node 0 Normal free:45888kB min:36900kB low:46124kB high:55348kB active_anon:770384kB inactive_anon:128464kB active_file:1478452kB inactive_file:1509172kB unevictable:115164kB present:4654080kB pages_scanned:0 all_unreclaimable? no
Dec 24 12:02:31 fs kernel: lowmem_reserve[]: 0 0 0 0
Dec 24 12:02:31 fs kernel: Node 0 DMA: 2*4kB 2*8kB 2*16kB 3*32kB 3*64kB 1*128kB 2*256kB 1*512kB 2*1024kB 2*2048kB 2*4096kB = 15832kB
Dec 24 12:02:31 fs kernel: Node 0 DMA32: 46051*4kB 2159*8kB 3*16kB 2*32kB 3*64kB 0*128kB 1*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 204084kB
Dec 24 12:02:31 fs kernel: Node 0 Normal: 10479*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 46012kB
Dec 24 12:02:31 fs kernel: 1433118 total pagecache pages
Dec 24 12:02:31 fs kernel: 8924 pages in swap cache
Dec 24 12:02:31 fs kernel: Swap cache stats: add 212041, delete 203117, find 270350/286964
Dec 24 12:02:31 fs kernel: Free swap  = 8307472kB
Dec 24 12:02:31 fs kernel: Total swap = 8388600kB
Dec 24 12:02:31 fs kernel: 2097152 pages RAM
Dec 24 12:02:31 fs kernel: 48424 pages reserved
Dec 24 12:02:31 fs kernel: 1444129 pages shared
Dec 24 12:02:31 fs kernel: 587477 pages non-shared


------------- Another one ----------------

Dec 24 11:55:48 fs kernel: tgtd: page allocation failure. order:3, mode:0x4020
Dec 24 11:55:48 fs kernel: Pid: 12603, comm: tgtd Not tainted 2.6.31.6-166.fc12.x86_64 #1
Dec 24 11:55:48 fs kernel: Call Trace:
Dec 24 11:55:48 fs kernel: <IRQ>  [<ffffffff810c7752>] __alloc_pages_nodemask+0x542/0x584
Dec 24 11:55:48 fs kernel: [<ffffffff810f2646>] alloc_pages_node+0x48/0x4a
Dec 24 11:55:48 fs kernel: [<ffffffff810f2672>] kmalloc_large_node+0x2a/0x67
Dec 24 11:55:48 fs kernel: [<ffffffff810f3902>] __kmalloc_node_track_caller+0x31/0x123
Dec 24 11:55:48 fs kernel: [<ffffffff81375ab2>] ? __netdev_alloc_skb+0x34/0x50
Dec 24 11:55:48 fs kernel: [<ffffffff81374e6c>] __alloc_skb+0x80/0x170
Dec 24 11:55:48 fs kernel: [<ffffffff81375ab2>] __netdev_alloc_skb+0x34/0x50
Dec 24 11:55:48 fs kernel: [<ffffffffa0151e79>] e1000_alloc_rx_buffers+0x8f/0x312 [e1000]
Dec 24 11:55:48 fs kernel: [<ffffffffa02cec3e>] ? br_handle_frame+0x194/0x1ae [bridge]
Dec 24 11:55:48 fs kernel: [<ffffffffa0151db0>] e1000_clean_rx_irq+0x3d8/0x412 [e1000]
Dec 24 11:55:48 fs kernel: [<ffffffffa0150fb3>] e1000_clean+0x309/0x47f [e1000]
Dec 24 11:55:48 fs kernel: [<ffffffff8106e8b3>] ? clocksource_read+0xf/0x11
Dec 24 11:55:48 fs kernel: [<ffffffff8106eb3b>] ? getnstimeofday+0x5b/0xaf
Dec 24 11:55:48 fs kernel: [<ffffffff81040488>] ? update_curr+0xf7/0x100
Dec 24 11:55:48 fs kernel: [<ffffffff8137c4d0>] net_rx_action+0xaf/0x1c9
Dec 24 11:55:48 fs kernel: [<ffffffff81057614>] __do_softirq+0xdd/0x1ad
Dec 24 11:55:48 fs kernel: [<ffffffff81099bb1>] ? handle_IRQ_event+0x60/0x121
Dec 24 11:55:48 fs kernel: [<ffffffff81027e1e>] ? ack_apic_level+0x45/0xe0
Dec 24 11:55:48 fs kernel: [<ffffffff81012eac>] call_softirq+0x1c/0x30
Dec 24 11:55:48 fs kernel: [<ffffffff810143fb>] do_softirq+0x47/0x8d
Dec 24 11:55:48 fs kernel: [<ffffffff81057326>] irq_exit+0x44/0x86
Dec 24 11:55:48 fs kernel: [<ffffffff8141ecf5>] do_IRQ+0xa5/0xbc
Dec 24 11:55:48 fs kernel: [<ffffffff810126d3>] ret_from_intr+0x0/0x11
Dec 24 11:55:48 fs kernel: <EOI>  [<ffffffff8141d107>] ? do_page_fault+0xc7/0x299
Dec 24 11:55:48 fs kernel: [<ffffffff8141d0d9>] ? do_page_fault+0x99/0x299
Dec 24 11:55:48 fs kernel: [<ffffffff8141b255>] ? page_fault+0x25/0x30
Dec 24 11:55:48 fs kernel: [<ffffffff8120256d>] ? copy_user_handle_tail+0x18/0x53
Dec 24 11:55:48 fs kernel: [<ffffffff813761ae>] ? memcpy_toiovec+0x39/0x69
Dec 24 11:55:48 fs kernel: [<ffffffff81376ae0>] ? skb_copy_datagram_iovec+0x52/0x1be
Dec 24 11:55:48 fs kernel: [<ffffffff8141af9f>] ? _spin_unlock_bh+0x15/0x17
Dec 24 11:55:48 fs kernel: [<ffffffff813b4386>] ? tcp_recvmsg+0x61c/0x98b
Dec 24 11:55:48 fs kernel: [<ffffffff811c133f>] ? socket_has_perm+0x6a/0x80
Dec 24 11:55:48 fs kernel: [<ffffffff8136ee18>] ? sock_common_recvmsg+0x39/0x4a
Dec 24 11:55:48 fs kernel: [<ffffffff8136c6e7>] ? __sock_recvmsg+0x71/0x7d
Dec 24 11:55:48 fs kernel: [<ffffffff8136ea75>] ? sock_aio_read+0xc7/0xde
Dec 24 11:55:48 fs kernel: [<ffffffff810fc57f>] ? do_sync_read+0xe8/0x125
Dec 24 11:55:48 fs kernel: [<ffffffff81067b03>] ? autoremove_wake_function+0x0/0x39
Dec 24 11:55:48 fs kernel: [<ffffffff811c206c>] ? selinux_file_permission+0x58/0x5d
Dec 24 11:55:48 fs kernel: [<ffffffff811ba7bd>] ? security_file_permission+0x16/0x18
Dec 24 11:55:48 fs kernel: [<ffffffff810fcc35>] ? vfs_read+0xbe/0x108
Dec 24 11:55:48 fs kernel: [<ffffffff810fcd3f>] ? sys_read+0x4a/0x6e
Dec 24 11:55:48 fs kernel: [<ffffffff81011cf2>] ? system_call_fastpath+0x16/0x1b
Dec 24 11:55:48 fs kernel: Mem-Info:
Dec 24 11:55:48 fs kernel: Node 0 DMA per-cpu:
Dec 24 11:55:48 fs kernel: CPU    0: hi:    0, btch:   1 usd:   0
Dec 24 11:55:48 fs kernel: CPU    1: hi:    0, btch:   1 usd:   0
Dec 24 11:55:48 fs kernel: CPU    2: hi:    0, btch:   1 usd:   0
Dec 24 11:55:48 fs kernel: CPU    3: hi:    0, btch:   1 usd:   0
Dec 24 11:55:48 fs kernel: Node 0 DMA32 per-cpu:
Dec 24 11:55:48 fs kernel: CPU    0: hi:  186, btch:  31 usd: 152
Dec 24 11:55:48 fs kernel: CPU    1: hi:  186, btch:  31 usd: 166
Dec 24 11:55:48 fs kernel: CPU    2: hi:  186, btch:  31 usd: 111
Dec 24 11:55:48 fs kernel: CPU    3: hi:  186, btch:  31 usd:  42
Dec 24 11:55:48 fs kernel: Node 0 Normal per-cpu:
Dec 24 11:55:48 fs kernel: CPU    0: hi:  186, btch:  31 usd: 181
Dec 24 11:55:48 fs kernel: CPU    1: hi:  186, btch:  31 usd: 120
Dec 24 11:55:48 fs kernel: CPU    2: hi:  186, btch:  31 usd: 118
Dec 24 11:55:48 fs kernel: CPU    3: hi:  186, btch:  31 usd:  78
Dec 24 11:55:48 fs kernel: Active_anon:257584 active_file:683736 inactive_anon:55688
Dec 24 11:55:48 fs kernel: inactive_file:700826 unevictable:28791 dirty:0 writeback:0 unstable:0
Dec 24 11:55:48 fs kernel: free:102139 slab:78429 mapped:27621 pagetables:3143 bounce:0
Dec 24 11:55:48 fs kernel: Node 0 DMA free:15832kB min:120kB low:148kB high:180kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:15332kB pages_scanned:0 all_unreclaimable? yes
Dec 24 11:55:48 fs kernel: lowmem_reserve[]: 0 3511 8056 8056
Dec 24 11:55:48 fs kernel: Node 0 DMA32 free:347024kB min:28504kB low:35628kB high:42756kB active_anon:314552kB inactive_anon:89836kB active_file:1256744kB inactive_file:1280860kB unevictable:0kB present:3595360kB pages_scanned:0 all_unreclaimable? no
Dec 24 11:55:48 fs kernel: lowmem_reserve[]: 0 0 4545 4545
Dec 24 11:55:48 fs kernel: Node 0 Normal free:45948kB min:36900kB low:46124kB high:55348kB active_anon:715784kB inactive_anon:132916kB active_file:1478200kB inactive_file:1522444kB unevictable:115164kB present:4654080kB pages_scanned:0 all_unreclaimable? no
Dec 24 11:55:48 fs kernel: lowmem_reserve[]: 0 0 0 0
Dec 24 11:55:48 fs kernel: Node 0 DMA: 2*4kB 2*8kB 2*16kB 3*32kB 3*64kB 1*128kB 2*256kB 1*512kB 2*1024kB 2*2048kB 2*4096kB = 15832kB
Dec 24 11:55:48 fs kernel: Node 0 DMA32: 67738*4kB 9159*8kB 17*16kB 2*32kB 3*64kB 0*128kB 1*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 347056kB
Dec 24 11:55:48 fs kernel: Node 0 Normal: 10419*4kB 16*8kB 1*16kB 1*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 45948kB
Dec 24 11:55:48 fs kernel: 1428884 total pagecache pages
Dec 24 11:55:48 fs kernel: 18433 pages in swap cache
Dec 24 11:55:48 fs kernel: Swap cache stats: add 205802, delete 187369, find 255537/270921
Dec 24 11:55:48 fs kernel: Free swap  = 8244944kB
Dec 24 11:55:48 fs kernel: Total swap = 8388600kB
Dec 24 11:55:48 fs kernel: 2097152 pages RAM
Dec 24 11:55:48 fs kernel: 48424 pages reserved
Dec 24 11:55:48 fs kernel: 1437547 pages shared
Dec 24 11:55:48 fs kernel: 550635 pages non-shared

------ Another one -----------
Dec 24 11:28:53 fs kernel: qemu-kvm: page allocation failure. order:3, mode:0x4020
Dec 24 11:28:53 fs kernel: Pid: 2107, comm: qemu-kvm Not tainted 2.6.31.6-166.fc12.x86_64 #1
Dec 24 11:28:53 fs kernel: Call Trace:
Dec 24 11:28:53 fs kernel: <IRQ>  [<ffffffff810c7752>] __alloc_pages_nodemask+0x542/0x584
Dec 24 11:28:53 fs kernel: [<ffffffff810f2646>] alloc_pages_node+0x48/0x4a
Dec 24 11:28:53 fs kernel: [<ffffffff810f2672>] kmalloc_large_node+0x2a/0x67
Dec 24 11:28:53 fs kernel: [<ffffffff810f3902>] __kmalloc_node_track_caller+0x31/0x123
Dec 24 11:28:53 fs kernel: [<ffffffff81375ab2>] ? __netdev_alloc_skb+0x34/0x50
Dec 24 11:28:53 fs kernel: [<ffffffff81374e6c>] __alloc_skb+0x80/0x170
Dec 24 11:28:53 fs kernel: [<ffffffff81375ab2>] __netdev_alloc_skb+0x34/0x50
Dec 24 11:28:53 fs kernel: [<ffffffffa0151e79>] e1000_alloc_rx_buffers+0x8f/0x312 [e1000]
Dec 24 11:28:53 fs kernel: [<ffffffffa02cec3e>] ? br_handle_frame+0x194/0x1ae [bridge]
Dec 24 11:28:53 fs kernel: [<ffffffffa0151db0>] e1000_clean_rx_irq+0x3d8/0x412 [e1000]
Dec 24 11:28:53 fs kernel: [<ffffffffa0150fb3>] e1000_clean+0x309/0x47f [e1000]
Dec 24 11:28:53 fs kernel: [<ffffffff810639e4>] ? insert_work+0x8e/0x9b
Dec 24 11:28:53 fs kernel: [<ffffffff8137c4d0>] net_rx_action+0xaf/0x1c9
Dec 24 11:28:53 fs kernel: [<ffffffff81057614>] __do_softirq+0xdd/0x1ad
Dec 24 11:28:53 fs kernel: [<ffffffff81099bb1>] ? handle_IRQ_event+0x60/0x121
Dec 24 11:28:53 fs kernel: [<ffffffff81027e1e>] ? ack_apic_level+0x45/0xe0
Dec 24 11:28:53 fs kernel: [<ffffffff81012eac>] call_softirq+0x1c/0x30
Dec 24 11:28:53 fs kernel: [<ffffffff810143fb>] do_softirq+0x47/0x8d
Dec 24 11:28:53 fs kernel: [<ffffffff81057326>] irq_exit+0x44/0x86
Dec 24 11:28:53 fs kernel: [<ffffffff8141ecf5>] do_IRQ+0xa5/0xbc
Dec 24 11:28:53 fs kernel: [<ffffffff810126d3>] ret_from_intr+0x0/0x11
Dec 24 11:28:53 fs kernel: <EOI>  [<ffffffffa01887bb>] ? kvm_arch_vcpu_ioctl_run+0x84b/0xb34 [kvm]
Dec 24 11:28:53 fs kernel: [<ffffffffa01887aa>] ? kvm_arch_vcpu_ioctl_run+0x83a/0xb34 [kvm]
Dec 24 11:28:53 fs kernel: [<ffffffff8104071d>] ? check_preempt_wakeup+0x176/0x1ad
Dec 24 11:28:53 fs kernel: [<ffffffff8104b093>] ? try_to_wake_up+0x2cc/0x2de
Dec 24 11:28:53 fs kernel: [<ffffffffa017e5e3>] ? kvm_vcpu_ioctl+0xfd/0x556 [kvm]
Dec 24 11:28:53 fs kernel: [<ffffffff81108adc>] ? vfs_ioctl+0x22/0x87
Dec 24 11:28:53 fs kernel: [<ffffffff8101286e>] ? apic_timer_interrupt+0xe/0x20
Dec 24 11:28:53 fs kernel: [<ffffffff81109038>] ? do_vfs_ioctl+0x47b/0x4c1
Dec 24 11:28:53 fs kernel: [<ffffffff811090d4>] ? sys_ioctl+0x56/0x79
Dec 24 11:28:53 fs kernel: [<ffffffff81011cf2>] ? system_call_fastpath+0x16/0x1b
Dec 24 11:28:53 fs kernel: Mem-Info:
Dec 24 11:28:53 fs kernel: Node 0 DMA per-cpu:
Dec 24 11:28:53 fs kernel: CPU    0: hi:    0, btch:   1 usd:   0
Dec 24 11:28:53 fs kernel: CPU    1: hi:    0, btch:   1 usd:   0
Dec 24 11:28:53 fs kernel: CPU    2: hi:    0, btch:   1 usd:   0
Dec 24 11:28:53 fs kernel: CPU    3: hi:    0, btch:   1 usd:   0
Dec 24 11:28:53 fs kernel: Node 0 DMA32 per-cpu:
Dec 24 11:28:53 fs kernel: CPU    0: hi:  186, btch:  31 usd:  28
Dec 24 11:28:53 fs kernel: CPU    1: hi:  186, btch:  31 usd: 184
Dec 24 11:28:53 fs kernel: CPU    2: hi:  186, btch:  31 usd: 120
Dec 24 11:28:53 fs kernel: CPU    3: hi:  186, btch:  31 usd:  17
Dec 24 11:28:53 fs kernel: Node 0 Normal per-cpu:
Dec 24 11:28:53 fs kernel: CPU    0: hi:  186, btch:  31 usd:  45
Dec 24 11:28:53 fs kernel: CPU    1: hi:  186, btch:  31 usd: 182
Dec 24 11:28:53 fs kernel: CPU    2: hi:  186, btch:  31 usd: 148
Dec 24 11:28:53 fs kernel: CPU    3: hi:  186, btch:  31 usd: 179
Dec 24 11:28:53 fs kernel: Active_anon:299350 active_file:686708 inactive_anon:54020
Dec 24 11:28:53 fs kernel: inactive_file:696149 unevictable:28791 dirty:445 writeback:2 unstable:0
Dec 24 11:28:53 fs kernel: free:80619 slab:77133 mapped:27621 pagetables:3209 bounce:0
Dec 24 11:28:53 fs kernel: Node 0 DMA free:15832kB min:120kB low:148kB high:180kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB present:15332kB pages_scanned:0 all_unreclaimable? yes
Dec 24 11:28:53 fs kernel: lowmem_reserve[]: 0 3511 8056 8056
Dec 24 11:28:53 fs kernel: Node 0 DMA32 free:260556kB min:28504kB low:35628kB high:42756kB active_anon:365332kB inactive_anon:76508kB active_file:1274516kB inactive_file:1303080kB unevictable:0kB present:3595360kB pages_scanned:33 all_unreclaimable? no
Dec 24 11:28:53 fs kernel: lowmem_reserve[]: 0 0 4545 4545
Dec 24 11:28:53 fs kernel: Node 0 Normal free:46088kB min:36900kB low:46124kB high:55348kB active_anon:832068kB inactive_anon:139572kB active_file:1472316kB inactive_file:1481516kB unevictable:115164kB present:4654080kB pages_scanned:0 all_unreclaimable? no
Dec 24 11:28:53 fs kernel: lowmem_reserve[]: 0 0 0 0
Dec 24 11:28:53 fs kernel: Node 0 DMA: 2*4kB 2*8kB 2*16kB 3*32kB 3*64kB 1*128kB 2*256kB 1*512kB 2*1024kB 2*2048kB 2*4096kB = 15832kB
Dec 24 11:28:53 fs kernel: Node 0 DMA32: 34704*4kB 14887*8kB 3*16kB 0*32kB 3*64kB 0*128kB 1*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 260456kB
Dec 24 11:28:53 fs kernel: Node 0 Normal: 10372*4kB 63*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 1*4096kB = 46088kB
Dec 24 11:28:53 fs kernel: 1425895 total pagecache pages
Dec 24 11:28:53 fs kernel: 17087 pages in swap cache
Dec 24 11:28:53 fs kernel: Swap cache stats: add 191803, delete 174716, find 243902/257081
Dec 24 11:28:53 fs kernel: Free swap  = 8195252kB
Dec 24 11:28:53 fs kernel: Total swap = 8388600kB
Dec 24 11:28:53 fs kernel: 2097152 pages RAM
Dec 24 11:28:53 fs kernel: 48424 pages reserved
Dec 24 11:28:53 fs kernel: 1415738 pages shared
Dec 24 11:28:53 fs kernel: 591550 pages non-shared

Comment 5 Raman Gupta 2010-01-05 05:45:07 UTC
A particularly good way to trigger these on my system is to access a block device on the affected system via nbd. Here is an example from one such use:

Jan  4 17:24:10 aa kernel: nbd-server: page allocation failure. order:3, mode:0x4020
Jan  4 17:24:10 aa kernel: Pid: 17976, comm: nbd-server Not tainted 2.6.31.6-166.fc12.i686 #1
Jan  4 17:24:10 aa kernel: Call Trace:
Jan  4 17:24:10 aa kernel: [<c07647c2>] ? printk+0x14/0x1a
Jan  4 17:24:10 aa kernel: [<c0498f8d>] __alloc_pages_nodemask+0x405/0x447
Jan  4 17:24:10 aa kernel: [<c071c8df>] ? tcp_v4_rcv+0x3f6/0x59b
Jan  4 17:24:10 aa kernel: [<c049900e>] __get_free_pages+0x14/0x26
Jan  4 17:24:10 aa kernel: [<c04ba075>] __kmalloc_track_caller+0x37/0x12d
Jan  4 17:24:10 aa kernel: [<c06d96d3>] ? __netdev_alloc_skb+0x1b/0x36
Jan  4 17:24:10 aa kernel: [<c06d8d05>] __alloc_skb+0x4e/0x10d
Jan  4 17:24:10 aa kernel: [<c06d96d3>] __netdev_alloc_skb+0x1b/0x36
Jan  4 17:24:10 aa kernel: [<f9c6bf94>] e1000_alloc_rx_buffers+0x80/0x28d [e1000]
Jan  4 17:24:10 aa kernel: [<f9c6beec>] e1000_clean_rx_irq+0x364/0x38c [e1000]
Jan  4 17:24:10 aa kernel: [<f9c6b11f>] e1000_clean+0x27c/0x3db [e1000]
Jan  4 17:24:10 aa kernel: [<c06df575>] net_rx_action+0x94/0x186
Jan  4 17:24:10 aa kernel: [<f9c688be>] ? e1000_intr+0x106/0x142 [e1000]
Jan  4 17:24:10 aa kernel: [<c043c03a>] __do_softirq+0xb1/0x157
Jan  4 17:24:10 aa kernel: [<c043c116>] do_softirq+0x36/0x41
Jan  4 17:24:10 aa kernel: [<c043c208>] irq_exit+0x2e/0x61
Jan  4 17:24:10 aa kernel: [<c0404ed1>] do_IRQ+0x86/0x9a
Jan  4 17:24:10 aa kernel: [<c0403c50>] common_interrupt+0x30/0x38
Jan  4 17:24:10 aa kernel: [<c04200e0>] ? __bad_area_nosemaphore+0x83/0x127
Jan  4 17:24:10 aa kernel: [<c059a7ec>] ? __copy_to_user_ll+0xd0/0xd8
Jan  4 17:24:10 aa kernel: [<c0494110>] file_read_actor+0x68/0xd3
Jan  4 17:24:10 aa kernel: [<c0495e69>] generic_file_aio_read+0x312/0x4d1
Jan  4 17:24:10 aa kernel: [<c04c0322>] do_sync_read+0xae/0xe9
Jan  4 17:24:10 aa kernel: [<c0450086>] ? clocksource_read+0xe/0x11
Jan  4 17:24:10 aa kernel: [<c0449ba5>] ? autoremove_wake_function+0x0/0x34
Jan  4 17:24:10 aa kernel: [<c07658cb>] ? mutex_lock+0x22/0x3c
Jan  4 17:24:10 aa kernel: [<c04defac>] ? block_llseek+0xb7/0xc3
Jan  4 17:24:10 aa kernel: [<c055deb3>] ? security_file_permission+0x14/0x16
Jan  4 17:24:10 aa kernel: [<c04c03fa>] ? rw_verify_area+0x9d/0xc0
Jan  4 17:24:10 aa kernel: [<c04c0274>] ? do_sync_read+0x0/0xe9
Jan  4 17:24:10 aa kernel: [<c04c0975>] vfs_read+0x82/0xe1
Jan  4 17:24:10 aa kernel: [<c04c0a72>] sys_read+0x40/0x62
Jan  4 17:24:10 aa kernel: [<c040363c>] syscall_call+0x7/0xb
Jan  4 17:24:10 aa kernel: Mem-Info:
Jan  4 17:24:10 aa kernel: DMA per-cpu:
Jan  4 17:24:10 aa kernel: CPU    0: hi:    0, btch:   1 usd:   0
Jan  4 17:24:10 aa kernel: Normal per-cpu:
Jan  4 17:24:10 aa kernel: CPU    0: hi:  186, btch:  31 usd: 164
Jan  4 17:24:10 aa kernel: HighMem per-cpu:
Jan  4 17:24:10 aa kernel: CPU    0: hi:  186, btch:  31 usd: 162
Jan  4 17:24:10 aa kernel: Active_anon:47876 active_file:32991 inactive_anon:60499
Jan  4 17:24:10 aa kernel: inactive_file:148289 unevictable:0 dirty:49 writeback:1 unstable:0
Jan  4 17:24:10 aa kernel: free:5974 slab:12954 mapped:5459 pagetables:1537 bounce:0
Jan  4 17:24:10 aa kernel: DMA free:3484kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:4kB active_file:1068kB inactive_file:4156kB unevictable:0kB present:15872kB pages_scanned:0 all_unreclaimable? no
Jan  4 17:24:10 aa kernel: lowmem_reserve[]: 0 861 1253 1253
Jan  4 17:24:10 aa kernel: Normal free:8644kB min:3720kB low:4648kB high:5580kB active_anon:69536kB inactive_anon:109516kB active_file:57944kB inactive_file:543988kB unevictable:0kB present:881880kB pages_scanned:0 all_unreclaimable? no
Jan  4 17:24:10 aa kernel: lowmem_reserve[]: 0 0 3139 3139
Jan  4 17:24:10 aa kernel: HighMem free:11768kB min:392kB low:812kB high:1236kB active_anon:121968kB inactive_anon:132476kB active_file:72952kB inactive_file:45012kB unevictable:0kB present:401800kB pages_scanned:0 all_unreclaimable? no
Jan  4 17:24:10 aa kernel: lowmem_reserve[]: 0 0 0 0
Jan  4 17:24:10 aa kernel: DMA: 7*4kB 2*8kB 3*16kB 0*32kB 1*64kB 2*128kB 0*256kB 0*512kB 1*1024kB 1*2048kB 0*4096kB = 3484kB
Jan  4 17:24:10 aa kernel: Normal: 1995*4kB 3*8kB 8*16kB 2*32kB 5*64kB 1*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 8644kB
Jan  4 17:24:10 aa kernel: HighMem: 1442*4kB 320*8kB 103*16kB 56*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 11768kB
Jan  4 17:24:10 aa kernel: 206156 total pagecache pages
Jan  4 17:24:10 aa kernel: 24843 pages in swap cache
Jan  4 17:24:10 aa kernel: Swap cache stats: add 1886319, delete 1861476, find 9446049/9651113
Jan  4 17:24:10 aa kernel: Free swap  = 19355504kB
Jan  4 17:24:10 aa kernel: Total swap = 19533816kB
Jan  4 17:24:10 aa kernel: 327543 pages RAM
Jan  4 17:24:10 aa kernel: 101241 pages HighMem
Jan  4 17:24:10 aa kernel: 6186 pages reserved
Jan  4 17:24:10 aa kernel: 240370 pages shared
Jan  4 17:24:10 aa kernel: 135800 pages non-shared

Comment 6 Matt Piermarini 2010-01-05 15:13:03 UTC
I have stopped seeing these errors/messages on my system.
It was due to one (or both) of the following:

Updating to kernel version 2.6.31.9-174.fc12.x86_64

Adding "echo 30000 > /proc/sys/vm/min_free_kbytes"
  to my /etc/rc.local
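For reference, the same watermark tweak can be sketched via sysctl so it survives reboots without touching rc.local. The value 30000 is the one from this comment, not a tuned recommendation, and the commented-out lines need root:

```shell
# Raise the free-memory watermark the sysctl way.
cat /proc/sys/vm/min_free_kbytes                         # current value, in kB
# echo 'vm.min_free_kbytes = 30000' >> /etc/sysctl.conf  # persist across reboots (as root)
# sysctl -p                                              # apply immediately (as root)
```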

Comment 7 Raman Gupta 2010-01-06 05:58:01 UTC
Indeed, it appears that the 2.6.31.9-174.fc12 kernel has solved the problem on my machine as well.

I did *not* change /proc/sys/vm/min_free_kbytes from the default:

# cat /proc/sys/vm/min_free_kbytes
3789

Comment 8 Orion Poplawski 2010-01-06 16:08:35 UTC
Still present on my 2.6.31.9-174.fc12.i686.PAE system.

Comment 9 James Ettle 2010-01-06 16:15:12 UTC
There seems to be something rather odd in Linux memory management of late. I can get these on any recent Fedora kernel (2.6.31 or .32) if I push the I/O and VM subsystems hard enough. They typically happen during or just after kernel compile jobs or large file copies.

Comment 10 Jesse Brandeburg 2010-01-11 20:27:28 UTC
kernel version 2.6.32 contains commit edbbb3ca107715067b27a71e6ea7f58750912aa2, which converts e1000 so that jumbo frames never use more than a single 4kB page per allocation. This is much more efficient for the memory manager, and adds hardly any overhead for the driver hardware.
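An "order:3" failure like the ones in this report means the kernel could not find a contiguous 8-page (32kB) block, even when plenty of individual free pages existed — classic memory fragmentation, which is why moving e1000 to order:0 (single-page) allocations helps. Fragmentation can be inspected from /proc/buddyinfo; a small sketch:

```shell
# /proc/buddyinfo lists free blocks per allocation order for each zone:
# the first numeric column is order 0 (single pages), the next order 1
# (2-page blocks), and so on.
cat /proc/buddyinfo

# Sum the free blocks of order 3 and above in each zone; a zone whose sum
# is 0 cannot satisfy an order:3 atomic allocation at all:
awk '{ s = 0; for (i = 8; i <= NF; i++) s += $i;
       print $2, $4, "free order>=3 blocks:", s }' /proc/buddyinfo
```

(Field 8 is the order-3 column because the first four awk fields are the "Node N, zone NAME" prefix.)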

Comment 11 Jesse Brandeburg 2010-02-17 18:57:30 UTC
Did I mention that the commit edbbb3ca107715067b27a71e6ea7f58750912aa2 would make it so e1000 only does order:0 allocations when in jumbo mode?

Comment 12 Jesse Brandeburg 2010-04-19 20:14:22 UTC
I think this is solved by the Fedora kernel moving forward to a 2.6.32 base.

Can someone verify this is fixed so it can be closed?

Comment 13 Orion Poplawski 2010-04-19 20:20:02 UTC
I'm afraid I no longer have this machine in service, so I can no longer test.

Comment 14 Pete Zaitcev 2010-04-19 20:33:42 UTC
More likely it was solved by returning to SLAB.
But truth be told, I don't see them with SLUB these days either.

Comment 15 Raman Gupta 2010-04-19 21:00:09 UTC
Doesn't seem like 2.6.32 has fixed it here:

# grep -E "(Linux version|Emask)" /var/log/messages*
messages-20100328:Mar 24 16:53:38 zeus kernel: Linux version 2.6.32.9-70.fc12.x86_64 (mockbuild@x86-04.phx2.fedoraproject.org) (gcc version 4.4.3 20100127 (Red Hat 4.4.3-4) (GCC) ) #1 SMP Wed Mar 3 04:40:41 UTC 2010
messages-20100404:Apr  2 20:00:09 zeus kernel: ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
messages-20100404:Apr  2 20:00:09 zeus kernel:         res 40/00:00:5a:47:39/00:00:25:00:00/e0 Emask 0x4 (timeout)
messages-20100418:Apr 12 20:00:11 zeus kernel: ata7.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
messages-20100418:Apr 12 20:00:11 zeus kernel:         res 40/00:00:07:00:00/00:00:00:00:00/e0 Emask 0x4 (timeout)
messages-20100418:Apr 15 03:16:19 zeus kernel: Linux version 2.6.32.11-99.fc12.x86_64 (mockbuild@x86-02.phx2.fedoraproject.org) (gcc version 4.4.3 20100127 (Red Hat 4.4.3-4) (GCC) ) #1 SMP Mon Apr 5 19:59:38 UTC 2010

From the above, I note that

1) With 2.6.32.9-70.fc12.x86_64 I had the error a couple of times over the course of a few weeks (note that smart and hddtemp are still turned off).

2) On 2.6.32.11-99.fc12.x86_64 (running for the last 4 days) I haven't yet seen the problem. I will report back in another week or so to confirm that 2.6.32.11-99.fc12.x86_64 is still clean. If it is, then I'll bite the bullet and try turning smart and hddtemp back on.

Comment 16 Bug Zapper 2010-04-28 09:57:03 UTC
This message is a reminder that Fedora 11 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 11.  It is Fedora's policy to close all
bug reports from releases that are no longer maintained.  At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '11'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 11's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 11 reaches end of life.  If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora please change the 'version' of this 
bug to the applicable version.  If you are unable to change the version, 
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 17 Raman Gupta 2010-05-07 23:37:05 UTC
(In reply to comment #15)
> Doesn't seem like 2.6.32 has fixed it here:

Please ignore this comment... I posted it on the wrong bug report. Sorry.

Comment 18 Bug Zapper 2010-06-28 14:17:10 UTC
Fedora 11 changed to end-of-life (EOL) status on 2010-06-25. Fedora 11 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.

