Bug 1278992 - ceph-osd aborts during 'XFS: possible memory allocation deadlock in kmem_alloc (mode:0x8250)' when directory block size of 64k used [NEEDINFO]
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: kernel
Hardware: All
OS: Linux
Priority: high
Severity: high
Target Milestone: rc
Target Release: 7.4
Assigned To: fs-maint
QA Contact: Zorro Lang
Depends On:
Blocks: 1203710 1298243 1313485 1469559 1295577
Reported: 2015-11-06 18:47 EST by Kyle Squizzato
Modified: 2018-03-29 14:40 EDT (History)
18 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2018-03-29 14:40:57 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
jkachuck: needinfo? (fs-maint)

Attachments

External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 1597523 None None None Never

Description Kyle Squizzato 2015-11-06 18:47:21 EST
Description of problem:
ceph-osd daemons begin to suicide (hit their internal suicide timeout) during XFS memory allocation deadlocks. The following message is printed to /var/log/messages:

XFS: possible memory allocation deadlock in kmem_alloc (mode:0x8250)  

This appears to occur when a directory block size of 64k is used:

 -n size=65536
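For reference, a sketch of how a filesystem ends up in this configuration and how to check for it. /dev/sdX and /mountpoint are placeholders, not values from this report:

```shell
# Format with a 64 KiB directory block size (the option implicated here).
mkfs.xfs -n size=65536 /dev/sdX

# Check an existing filesystem: the "naming" line reports the directory
# block size; bsize=65536 indicates the 64 KiB setting.
xfs_info /mountpoint | grep naming
```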

Version-Release number of selected component (if applicable):

How reproducible:
Not sure how the issue can be reproduced; however, it appears to occur when Ceph OSDs are under heavy load in a Ceph (Firefly) cluster.

Actual results:
XFS deadlocks and the ceph-osd daemons suicide.

Expected results:
No XFS deadlocks or ceph-osd suicides.
Comment 2 Brian Foster 2015-11-07 10:03:00 EST
Just as a quick first step experiment, I formatted an '-n size=64k' fs and ran a quick file creation/deletion loop with a debug printk() in xlog_cil_insert_format_items() to dump the size of any >PAGE_SIZE allocation requests. I very quickly see allocs up to around 64k, some even larger:

xlog_cil_insert_format_items(243): buf_size 64984 (nbytes 64880 niovecs 3)
xlog_cil_insert_format_items(243): buf_size 65112 (nbytes 65008 niovecs 3)
xlog_cil_insert_format_items(243): buf_size 65368 (nbytes 65264 niovecs 3)
xlog_cil_insert_format_items(243): buf_size 65496 (nbytes 65392 niovecs 3)
xlog_cil_insert_format_items(243): buf_size 65728 (nbytes 65640 niovecs 2)

From that perspective, it doesn't seem that surprising to see allocation failures from kmem_zalloc() calls here if we assume memory fragmentation is an eventuality. Further, we're in KM_NOFS context, which I assume precludes things like writeback, etc., but even if we weren't, those are still order-4 or larger sized requests.

My first question is, without having yet dug into the core context for these allocation sizes, is there any reason for not using something like kmem_zalloc_large() here (assuming we preserve the KM_SLEEP behavior)?
Comment 3 Dave Chinner 2015-11-09 05:41:18 EST
Why is the filesystem configured to use 64k directory block sizes? Are they putting millions of files in a single directory? If not, then just use the default directory block size and the problem goes away....

Comment 14 Eric Sandeen 2016-06-30 12:27:11 EDT
This is a known issue w/ 64k dirs, and there is no current solution, though workarounds exist (i.e. don't mkfs w/ that option).

For now moving to 7.4, though AFAIK there has been no upstream activity on this, so a 7.4 fix is not necessarily likely either.
Comment 15 Joseph Kachuck 2017-10-24 16:44:28 EDT
Hello HPE,
From comment 14, should this bug be moved to medium or low severity?

As it appears this issue will not be fixed, would HPE like a kbase article stating that this option does not work?

Thank You
Joe Kachuck
Comment 16 Dave Wysochanski 2018-03-29 14:40:57 EDT
As far as I know there are no plans to address this, and there has been no recent activity in the bug. Feel free to reopen if you have new information.
