Bug 587301 - rpciod/0: page allocation failure. order:1, mode:0x4020

Status: CLOSED WONTFIX
Product: Fedora
Component: kernel
Version: 13
Platform: All Linux
Priority: low, Severity: medium
Assigned To: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
Reported: 2010-04-29 10:52 EDT by Orion Poplawski
Modified: 2011-06-27 11:57 EDT
Last Closed: 2011-06-27 11:57:35 EDT

Attachments:
/var/log/messages (583.20 KB, text/plain), 2010-04-29 10:52 EDT, Orion Poplawski
Description Orion Poplawski 2010-04-29 10:52:21 EDT
Created attachment 410136 [details]
/var/log/messages

Description of problem:

I'm seeing several instances of the following:

Pid: 946, comm: rpciod/0 Not tainted 2.6.33.2-57.fc13.i686.PAE #1
Call Trace:
 [<c07c437d>] ? printk+0x14/0x17
 [<c04bac3c>] __alloc_pages_nodemask+0x4b5/0x52a
 [<c04df018>] alloc_slab_page+0x1a/0x20
 [<c04df706>] __slab_alloc+0x132/0x3ae
 [<c05d3fc4>] ? should_fail+0x76/0xea
 [<c04e0ba9>] __kmalloc_track_caller+0xec/0x152
 [<c0769dd4>] ? sk_stream_alloc_skb+0x2c/0xc1
 [<c0769dd4>] ? sk_stream_alloc_skb+0x2c/0xc1
 [<c07319a6>] __alloc_skb+0x54/0x11a
 [<c0769dd4>] sk_stream_alloc_skb+0x2c/0xc1
 [<c076a3fc>] tcp_sendmsg+0x165/0x70c
 [<c0447edf>] ? local_bh_enable_ip+0xd/0xf
 [<c072aae6>] __sock_sendmsg+0x4a/0x53
 [<c072ada6>] sock_sendmsg+0x98/0xac
 [<c072ada6>] ? sock_sendmsg+0x98/0xac
 [<c072ade7>] kernel_sendmsg+0x2d/0x3c
 [<c072dd99>] sock_no_sendpage+0x4a/0x5d
 [<c0769ea1>] tcp_sendpage+0x38/0x38a
 [<fa168207>] ? xs_send_kvec+0x72/0x7c [sunrpc]
 [<fa168312>] xs_sendpages+0x101/0x178 [sunrpc]
 [<fa168476>] xs_tcp_send_request+0x46/0x11d [sunrpc]
 [<fa1670e9>] xprt_transmit+0x15b/0x223 [sunrpc]
 [<fa164c79>] call_transmit+0x1b7/0x1f2 [sunrpc]
 [<fa16ad8f>] __rpc_execute+0x73/0x1f2 [sunrpc]
 [<c0455e7d>] ? worker_thread+0x15d/0x262
 [<fa16af3d>] rpc_async_schedule+0x10/0x12 [sunrpc]
 [<c0455ebf>] worker_thread+0x19f/0x262
 [<c0455e7d>] ? worker_thread+0x15d/0x262
 [<fa16af2d>] ? rpc_async_schedule+0x0/0x12 [sunrpc]
 [<c0459654>] ? autoremove_wake_function+0x0/0x34
 [<c0455d20>] ? worker_thread+0x0/0x262
 [<c04592d8>] kthread+0x6f/0x74
 [<c0459269>] ? kthread+0x0/0x74
 [<c04091c2>] kernel_thread_helper+0x6/0x10
Mem-Info:
DMA per-cpu:
CPU    0: hi:    0, btch:   1 usd:   0
CPU    1: hi:    0, btch:   1 usd:   0
Normal per-cpu:
CPU    0: hi:  186, btch:  31 usd: 171
CPU    1: hi:  186, btch:  31 usd:  82
HighMem per-cpu:
CPU    0: hi:  186, btch:  31 usd: 157
CPU    1: hi:  186, btch:  31 usd:  30
active_anon:91740 inactive_anon:41418 isolated_anon:0
 active_file:110908 inactive_file:161182 isolated_file:32
 unevictable:0 dirty:10037 writeback:9417 unstable:436
 free:15503 slab_reclaimable:39261 slab_unreclaimable:26738
 mapped:18925 shmem:444 pagetables:3191 bounce:0
DMA free:3476kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:400kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15872kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:1492kB slab_unreclaimable:156kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 863 2014 2014
Normal free:56884kB min:3724kB low:4652kB high:5584kB active_anon:0kB inactive_anon:7900kB active_file:240108kB inactive_file:255992kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:883912kB mlocked:0kB dirty:22244kB writeback:29132kB mapped:3912kB shmem:4kB slab_reclaimable:155552kB slab_unreclaimable:106796kB kernel_stack:3032kB pagetables:340kB unstable:1296kB bounce:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
lowmem_reserve[]: 0 0 9211 9211
HighMem free:1652kB min:512kB low:1752kB high:2996kB active_anon:366960kB inactive_anon:157772kB active_file:203124kB inactive_file:388736kB unevictable:0kB isolated(anon):0kB isolated(file):128kB present:1179104kB mlocked:0kB dirty:17904kB writeback:8536kB mapped:71788kB shmem:1772kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:12424kB unstable:448kB bounce:0kB writeback_tmp:0kB pages_scanned:32 all_unreclaimable? no
lowmem_reserve[]: 0 0 0 0
DMA: 13*4kB 18*8kB 21*16kB 6*32kB 7*64kB 2*128kB 0*256kB 0*512kB 0*1024kB 1*2048kB 0*4096kB = 3476kB
Normal: 14053*4kB 0*8kB 0*16kB 5*32kB 2*64kB 1*128kB 1*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 56884kB
HighMem: 133*4kB 58*8kB 21*16kB 8*32kB 1*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1652kB
285212 total pagecache pages
12652 pages in swap cache
Swap cache stats: add 95422, delete 82770, find 48166/52382
Free swap  = 3924096kB
Total swap = 4094968kB
523912 pages RAM
297098 pages HighMem
13004 pages reserved
265073 pages shared
257219 pages non-shared
SLUB: Unable to allocate memory on node -1 (gfp=0x20)
  cache: kmalloc-4096, object size: 4096, buffer size: 4144, default order: 3, min order: 1
  kmalloc-4096 debugging increased min order, use slub_debug=O to disable.
  node 0: slabs: 35, objs: 155, free: 0

Version-Release number of selected component (if applicable):
2.6.33.2-57.fc13.i686.PAE

I'm not really sure what is triggering it.
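The SLUB line in the log ("debugging increased min order") can be illustrated with a small arithmetic sketch. This is a simplified model of the minimum-order calculation, not the actual kernel code: with debugging enabled, the kmalloc-4096 buffer grows from 4096 to 4144 bytes to hold red-zone and poison padding, which no longer fits in a single 4 KiB page, so every allocation needs a contiguous order-1 (2-page) block.

```python
PAGE_SIZE = 4096

def min_order(buffer_size, page_size=PAGE_SIZE):
    """Smallest order such that a 2**order page block holds one buffer."""
    order = 0
    while (page_size << order) < buffer_size:
        order += 1
    return order

# Without slub_debug padding, the buffer equals the object size:
print(min_order(4096))  # 0 -- a single page suffices

# With debug red zones/poisoning the buffer is 4144 bytes (per the log),
# which spills past one page:
print(min_order(4144))  # 1 -- needs a contiguous 2-page allocation
```

Order-1 allocations require two physically contiguous free pages, which is why they can fail under fragmentation even when plenty of single pages are free (as the buddy lists in the log show for the Normal zone: 14053 free 4 kB pages but almost no 8 kB blocks).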
Comment 1 Dave Hansen 2011-03-23 16:22:44 EDT
SLUB debugging is causing the (very likely to succeed) 1-page
allocations to turn into less reliable 2-page allocations, so that it
has padding for the guard and poison areas.

This looks like a TCP path to me.  Worst case, it'll drop the packet and
the other end will retransmit.  I'd also guess that the system is ~5-6
years old.  Is this really in production?  Part of the problem is
undoubtedly having a 32-bit kernel with highmem.

Is there an actual problem here other than the messages?

I wouldn't be concerned about the messages themselves.
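The log's own hint ("use slub_debug=O to disable") refers to a kernel boot parameter. A hedged sketch of applying it, assuming the GRUB legacy setup Fedora 13 shipped with (paths and the example kernel line are illustrative, adjust for your bootloader):

```shell
# Inspect the allocation order the running kernel uses for this cache
# (the sysfs slab directory exists only with CONFIG_SLUB kernels):
cat /sys/kernel/slab/kmalloc-4096/order 2>/dev/null

# slub_debug=O switches debugging off only for caches where it would
# raise the minimum slab order, keeping debug checks elsewhere.
# Add it to the kernel line in /boot/grub/grub.conf, e.g.:
#   kernel /vmlinuz-2.6.33.2-57.fc13.i686.PAE ro root=... slub_debug=O
# then reboot for the change to take effect.
```

This removes only the debug-induced order-1 requirement; it does not address lowmem fragmentation itself.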
Comment 2 Orion Poplawski 2011-03-30 12:48:31 EDT
For the most part it seems harmless, but I think I've seen automount NFS mounts fail because of it.  I have also seen one such message on a 64-bit machine under high memory load.
Comment 3 Bug Zapper 2011-06-02 10:43:40 EDT
This message is a reminder that Fedora 13 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 13.  It is Fedora's policy to close all
bug reports from releases that are no longer maintained.  At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '13'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 13's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 13 is end of life.  If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora please change the 'version' of this 
bug to the applicable version.  If you are unable to change the version, 
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Comment 4 Bug Zapper 2011-06-27 11:57:35 EDT
Fedora 13 changed to end-of-life (EOL) status on 2011-06-25. Fedora 13 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.
