Bug 755917 - Possible deadlock in zram
Summary: Possible deadlock in zram
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: kernel
Version: 6.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Jerome Marchand
QA Contact: Red Hat Kernel QE team
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2011-11-22 12:09 UTC by Jerome Marchand
Modified: 2014-02-03 13:00 UTC
CC List: 2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-02-03 13:00:44 UTC
Target Upstream Version:



Description Jerome Marchand 2011-11-22 12:09:54 UTC
Description of problem:

While swapping to a zram device on a heavily loaded system, I received the following lockdep warning:

Adding 1023992k swap on /dev/zram0.  Priority:10 extents:1 across:1023992k SS

=================================
[ INFO: inconsistent lock state ]
2.6.32-131.17.1.el6.x86_64.debug #1
---------------------------------
inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-R} usage.
kswapd0/38 [HC0[0]:SC0[0]:HE1:SE1] takes:
 (&zram->init_lock){+++++-}, at: [<ffffffffa043803a>] zram_make_request+0x4a/0x250 [zram]
{RECLAIM_FS-ON-W} state was registered at:
  [<ffffffff810abdd3>] mark_held_locks+0x73/0xa0
  [<ffffffff810abea1>] lockdep_trace_alloc+0xa1/0xe0
  [<ffffffff81174f17>] kmem_cache_alloc_notrace+0x37/0x260
  [<ffffffffa04374b7>] zram_init_device+0x87/0x280 [zram]
  [<ffffffffa043822f>] zram_make_request+0x23f/0x250 [zram]
  [<ffffffff8126b011>] generic_make_request+0x321/0x630
  [<ffffffff8126b3ad>] submit_bio+0x8d/0x120
  [<ffffffff811c02c6>] submit_bh+0xf6/0x150
  [<ffffffff811c2d3b>] block_read_full_page+0x28b/0x3f0
  [<ffffffff811c7bb8>] blkdev_readpage+0x18/0x20
  [<ffffffff81139515>] __do_page_cache_readahead+0x255/0x260
  [<ffffffff811395c9>] force_page_cache_readahead+0x79/0xb0
  [<ffffffff811399d3>] page_cache_sync_readahead+0x43/0x50
  [<ffffffff81124698>] generic_file_aio_read+0x598/0x740
  [<ffffffff8118ee5a>] do_sync_read+0xfa/0x140
  [<ffffffff8118f885>] vfs_read+0xb5/0x1a0
  [<ffffffff8118f9c1>] sys_read+0x51/0x90
  [<ffffffff8100b132>] system_call_fastpath+0x16/0x1b
irq event stamp: 37627129
hardirqs last  enabled at (37627129): [<ffffffff8150e720>] _spin_unlock_irq+0x30/0x40
hardirqs last disabled at (37627128): [<ffffffff8150ea6f>] _spin_lock_irq+0x1f/0x80
softirqs last  enabled at (37626488): [<ffffffff8107403a>] __do_softirq+0x14a/0x200
softirqs last disabled at (37626471): [<ffffffff8100c38c>] call_softirq+0x1c/0x30

other info that might help us debug this:
no locks held by kswapd0/38.

stack backtrace:
Pid: 38, comm: kswapd0 Tainted: G         C ----------------   2.6.32-131.17.1.el6.x86_64.debug #1
Call Trace:
 [<ffffffff810aace7>] ? print_usage_bug+0x177/0x180
 [<ffffffff810abc8d>] ? mark_lock+0x35d/0x430
 [<ffffffff810acc77>] ? __lock_acquire+0x487/0x1590
 [<ffffffff8109b705>] ? sched_clock_local+0x25/0x90
 [<ffffffff81013673>] ? native_sched_clock+0x13/0x60
 [<ffffffff81012b49>] ? sched_clock+0x9/0x10
 [<ffffffff8109b828>] ? sched_clock_cpu+0xb8/0x110
 [<ffffffff810a867d>] ? trace_hardirqs_off+0xd/0x10
 [<ffffffff8109b96f>] ? cpu_clock+0x6f/0x80
 [<ffffffff810ade24>] ? lock_acquire+0xa4/0x120
 [<ffffffffa043803a>] ? zram_make_request+0x4a/0x250 [zram]
 [<ffffffff810ac12d>] ? trace_hardirqs_on+0xd/0x10
 [<ffffffff8150d1e1>] ? down_read+0x51/0xa0
 [<ffffffffa043803a>] ? zram_make_request+0x4a/0x250 [zram]
 [<ffffffff811728b3>] ? cache_alloc_debugcheck_after+0xf3/0x230
 [<ffffffffa043803a>] ? zram_make_request+0x4a/0x250 [zram]
 [<ffffffff8126b011>] ? generic_make_request+0x321/0x630
 [<ffffffff8109b828>] ? sched_clock_cpu+0xb8/0x110
 [<ffffffff811464c8>] ? inc_zone_page_state+0x68/0xa0
 [<ffffffff810ac0dd>] ? trace_hardirqs_on_caller+0x14d/0x190
 [<ffffffff8126b3ad>] ? submit_bio+0x8d/0x120
 [<ffffffff8115e344>] ? swap_writepage+0x94/0xe0
 [<ffffffff8113d386>] ? pageout.clone.1+0x136/0x330
 [<ffffffff8113db6f>] ? shrink_page_list.clone.0+0x40f/0x6a0
 [<ffffffff8100bbd0>] ? restore_args+0x0/0x30
 [<ffffffff8109b828>] ? sched_clock_cpu+0xb8/0x110
 [<ffffffff810a867d>] ? trace_hardirqs_off+0xd/0x10
 [<ffffffff8109b96f>] ? cpu_clock+0x6f/0x80
 [<ffffffff810ab7dd>] ? lock_release_holdtime+0x3d/0x190
 [<ffffffff8150e720>] ? _spin_unlock_irq+0x30/0x40
 [<ffffffff8113e0f9>] ? shrink_inactive_list+0x2f9/0x750
 [<ffffffff8109b828>] ? sched_clock_cpu+0xb8/0x110
 [<ffffffff8109b96f>] ? cpu_clock+0x6f/0x80
 [<ffffffff8113e8df>] ? shrink_zone+0x38f/0x510
 [<ffffffff8113fe69>] ? balance_pgdat+0x709/0x800
 [<ffffffff8113c430>] ? isolate_pages_global+0x0/0x3a0
 [<ffffffff811400a6>] ? kswapd+0x146/0x3a0
 [<ffffffff810ab7dd>] ? lock_release_holdtime+0x3d/0x190
 [<ffffffff8150e770>] ? _spin_unlock_irqrestore+0x40/0x80
 [<ffffffff81094130>] ? autoremove_wake_function+0x0/0x40
 [<ffffffff8113ff60>] ? kswapd+0x0/0x3a0
 [<ffffffff81093de6>] ? kthread+0x96/0xa0
 [<ffffffff8100c28a>] ? child_rip+0xa/0x20
 [<ffffffff8100bbd0>] ? restore_args+0x0/0x30
 [<ffffffff81093d50>] ? kthread+0x0/0xa0
 [<ffffffff8100c280>] ? child_rip+0x0/0x20
Version-Release number of selected component (if applicable):
kernel-2.6.32-131.17.1.el6 (the .debug variant; see the trace above)

How reproducible:
Not always.


Steps to Reproduce:
1. Load the system's memory so that swapout happens frequently.
2. swapon a zram device.
Actual results:
See above.

Expected results:
No warning.

Additional info:
I believe the problem is related to the memory allocations in zram_init_device(): they are made on the I/O path with reclaim allowed, so reclaim can recurse into the block layer while init_lock is held. We probably should use GFP_NOFS or GFP_NOIO there; see the sketch below.
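A minimal sketch of that idea (illustrative only; the identifiers follow the upstream zram driver of that era but are quoted from memory, so treat them as assumptions, not as the actual RHEL6 source):

    #include <linux/slab.h>
    #include <linux/lzo.h>

    static int zram_init_device(struct zram *zram)
    {
        /* ... size checks and table setup elided ... */

        /*
         * GFP_NOIO instead of GFP_KERNEL: this runs on the I/O path
         * (first bio, under zram->init_lock), so the allocation must
         * not recurse into reclaim-driven block I/O.
         */
        zram->compress_workmem = kzalloc(LZO1X_MEM_COMPRESS, GFP_NOIO);
        if (!zram->compress_workmem)
            return -ENOMEM;

        zram->init_done = 1;
        return 0;
    }

GFP_NOIO still lets the allocator sleep and reclaim clean pages; it only forbids starting new I/O to satisfy the allocation, which is exactly the recursion the lockdep report complains about.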

Comment 2 Jerome Marchand 2011-11-30 13:34:46 UTC
I now think this is a false positive: reclaim cannot happen on an uninitialized device. The registered trace above shows the lazy initialization being driven by an ordinary read of the block device (sys_read -> blkdev_readpage -> zram_make_request), which happens before the device can hold any swapped pages, so by the time kswapd writes to zram the init path can no longer run.

I've posted a patch upstream to prevent the warning from occurring; a sketch of the idea follows.
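Schematically, the reasoning looks like this (not the patch as posted; it glosses over the init-time locking the real code must get right, and __zram_make_request is a hypothetical helper):

    static int zram_make_request(struct request_queue *queue, struct bio *bio)
    {
        struct zram *zram = queue->queuedata;

        /*
         * Never reached from reclaim: the first touch of the device
         * (e.g. swapon reading it) happens in ordinary process
         * context, so by the time kswapd writes here, init_done is
         * already set and this branch is dead.
         */
        if (unlikely(!zram->init_done) && zram_init_device(zram))
            goto error;

        down_read(&zram->init_lock);
        __zram_make_request(zram, bio);
        up_read(&zram->init_lock);
        return 0;

    error:
        bio_io_error(bio);
        return 0;
    }

Since the allocating path and the reclaim path can never hold init_lock concurrently, the RECLAIM_FS-ON-W vs. IN-RECLAIM_FS-R inconsistency lockdep reports cannot turn into a real deadlock.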

Comment 6 Suzanne Logcher 2012-05-18 20:51:50 UTC
This request was not resolved in time for the current release.
Red Hat invites you to ask your support representative to
propose this request, if still desired, for consideration in
the next release of Red Hat Enterprise Linux.

