Bug 1373399

Summary: ksoftirqd page allocation failure 4.8.0-0.rc4.git3.1.fc26.x86_64
Product: [Fedora] Fedora
Reporter: Christopher Meng <i>
Component: kernel
Assignee: Neil Horman <nhorman>
Status: CLOSED INSUFFICIENT_DATA
QA Contact: Fedora Extras Quality Assurance <extras-qa>
Severity: unspecified
Docs Contact:
Priority: unspecified
Version: rawhide
CC: gansalmon, ichavero, i, itamar, jonathan, kernel-maint, madhu.chinakonda, mchehab, nhorman
Target Milestone: ---
Target Release: ---
Hardware: x86_64
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-10-25 17:22:50 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:

Description Christopher Meng 2016-09-06 08:02:09 UTC
ksoftirqd/7: page allocation failure: order:0, mode:0x2204010(GFP_NOWAIT|__GFP_COMP|__GFP_RECLAIMABLE|__GFP_NOTRACK)
CPU: 7 PID: 66 Comm: ksoftirqd/7 Not tainted 4.8.0-0.rc4.git3.1.fc26.x86_64 #1
Hardware name: System manufacturer System Product Name/P8H77-M PRO, BIOS 9002 05/30/2014
 0000000000000086 0000000070646647 ffff8e814c207528 ffffffffb6468ba3
 0000000000000000 0000000000000000 ffff8e814c2075b8 ffffffffb6204631
 022040104f7de320 0000000000000046 0000000000000000 0000000002204010
Call Trace:
 [<ffffffffb6468ba3>] dump_stack+0x86/0xc3
 [<ffffffffb6204631>] warn_alloc_failed+0x101/0x170
 [<ffffffffb6204e42>] __alloc_pages_nodemask+0x722/0x1140
 [<ffffffffb6263eb1>] alloc_pages_current+0xa1/0x1f0
 [<ffffffffb626f69a>] new_slab+0x30a/0x7b0
 [<ffffffffb627124b>] ___slab_alloc+0x3fb/0x5c0
 [<ffffffffb646e6e4>] ? radix_tree_node_alloc+0x34/0xa0
 [<ffffffffb646e6e4>] ? radix_tree_node_alloc+0x34/0xa0
 [<ffffffffb6271461>] __slab_alloc+0x51/0x90
 [<ffffffffb646e6e4>] ? radix_tree_node_alloc+0x34/0xa0
 [<ffffffffb62716e6>] kmem_cache_alloc+0x246/0x2d0
 [<ffffffffb646e6e4>] radix_tree_node_alloc+0x34/0xa0
 [<ffffffffb646ecd2>] __radix_tree_create+0x1a2/0x330
 [<ffffffffb646ee9d>] __radix_tree_insert+0x3d/0xd0
 [<ffffffffb68facd2>] ? _raw_spin_lock_irqsave+0x82/0x90
 [<ffffffffb649bbdc>] ? add_dma_entry+0x8c/0x170
 [<ffffffffb649bbf3>] add_dma_entry+0xa3/0x170
 [<ffffffffb603fd6b>] ? save_stack_trace+0x2b/0x50
 [<ffffffffb649c050>] debug_dma_map_sg+0x140/0x190
 [<ffffffffb6622aa2>] scsi_dma_map+0xe2/0x120
 [<ffffffffc04382c9>] megasas_make_sgl64.isra.5+0x19/0x60 [megaraid_sas]
 [<ffffffffc0439805>] megasas_build_and_issue_cmd+0x3f5/0x550 [megaraid_sas]
 [<ffffffffc0438008>] megasas_queue_command+0xf8/0x100 [megaraid_sas]
 [<ffffffffb661e07a>] scsi_dispatch_cmd+0x15a/0x390
 [<ffffffffb6621642>] scsi_request_fn+0x482/0x620
 [<ffffffffb642e173>] __blk_run_queue+0x33/0x40
 [<ffffffffb642e1a6>] blk_run_queue+0x26/0x40
 [<ffffffffb661db5a>] scsi_run_queue+0x27a/0x310
 [<ffffffffb6617730>] ? scsi_put_command+0x80/0xd0
 [<ffffffffb661ea75>] scsi_end_request+0x145/0x1e0
 [<ffffffffb6621a16>] scsi_io_completion+0x1b6/0x6a0
 [<ffffffffb6617c5f>] scsi_finish_command+0xcf/0x120
 [<ffffffffb6621192>] scsi_softirq_done+0x122/0x150
 [<ffffffffb643c0c6>] blk_done_softirq+0x96/0xc0
 [<ffffffffb68fde06>] __do_softirq+0xd6/0x4c5
 [<ffffffffb60daba4>] ? smpboot_thread_fn+0x34/0x1e0
 [<ffffffffb60dac99>] ? smpboot_thread_fn+0x129/0x1e0
 [<ffffffffb60b5cf5>] run_ksoftirqd+0x25/0x80
 [<ffffffffb60dac94>] smpboot_thread_fn+0x124/0x1e0
 [<ffffffffb60dab70>] ? sort_range+0x30/0x30
 [<ffffffffb60d6581>] kthread+0x101/0x120
 [<ffffffffb68f548f>] ? wait_for_completion+0x10f/0x140
 [<ffffffffb68fb02f>] ret_from_fork+0x1f/0x40
 [<ffffffffb60d6480>] ? kthread_create_on_node+0x250/0x250
SLUB: Unable to allocate memory on node -1, gfp=0x2000000(GFP_NOWAIT)
  cache: radix_tree_node, object size: 576, buffer size: 584, default order: 2, min order: 0
  node 0: slabs: 5013, objs: 140259, free: 0
DMA-API: cacheline tracking ENOMEM, dma-debug disabled
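For reference, the header line of a failure like the one above can be decoded mechanically. The sketch below parses out the task name, allocation order, and GFP mask; the regex is an assumption based on the warn_alloc_failed() printk format in 4.x kernels, not something taken from this report itself:

```python
import re

# Matches a page-allocation-failure header line as printed by 4.x kernels,
# e.g.: ksoftirqd/7: page allocation failure: order:0, mode:0x2204010(GFP_NOWAIT|...)
LINE_RE = re.compile(
    r"(?P<comm>\S+): page allocation failure: "
    r"order:(?P<order>\d+), mode:(?P<mode>0x[0-9a-f]+)\((?P<flags>[^)]*)\)"
)

def parse_alloc_failure(line):
    """Return (comm, order, gfp mask, flag-name list), or None if no match."""
    m = LINE_RE.search(line)
    if m is None:
        return None
    return (
        m.group("comm"),
        int(m.group("order")),
        int(m.group("mode"), 16),
        m.group("flags").split("|"),
    )

line = ("ksoftirqd/7: page allocation failure: order:0, "
        "mode:0x2204010(GFP_NOWAIT|__GFP_COMP|__GFP_RECLAIMABLE|__GFP_NOTRACK)")
comm, order, mode, flags = parse_alloc_failure(line)
print(comm, order, hex(mode), flags)
```

Here order:0 means a single-page request, and GFP_NOWAIT means the allocator was not allowed to sleep or reclaim, so the failure is expected under memory pressure rather than a crash.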

Comment 1 Neil Horman 2016-10-18 13:43:33 UTC
Looks like a standard ENOMEM, i.e. the system ran out of memory, which isn't really a bug. Is this happening consistently?
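The SLUB stats in the report are consistent with this reading: 140259 allocated radix_tree_node objects at 584 bytes each, with zero free objects, means the cache alone was pinning roughly 78 MiB, and a GFP_NOWAIT allocation cannot sleep or reclaim to make room. A back-of-the-envelope check, using only the numbers printed in the report above:

```python
# Figures taken from the SLUB failure report above.
objs = 140259          # allocated radix_tree_node objects ("objs: 140259")
buffer_size = 584      # bytes per object incl. SLUB padding ("buffer size: 584")
free_objs = 0          # no free objects in any slab ("free: 0")

used_bytes = objs * buffer_size
print(f"radix_tree_node cache footprint: {used_bytes / 2**20:.1f} MiB")
```

With no free objects, servicing the request required a new slab page, which is exactly the __alloc_pages_nodemask() call that failed in the trace.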

Comment 2 Neil Horman 2017-10-25 17:22:50 UTC
Closing due to lack of response.