Bug 222323 - INFO: possible recursive locking detected
Status: CLOSED RAWHIDE
Product: Fedora
Classification: Fedora
Component: kernel
rawhide
All Linux
medium Severity medium
Assigned To: Peter Zijlstra
QA Contact: Brian Brock
Blocks: FCMETA_LOCKDEP
Reported: 2007-01-11 12:47 EST by Orion Poplawski
Modified: 2014-08-11 01:40 EDT
CC: 5 users

Doc Type: Bug Fix
Last Closed: 2007-03-14 16:50:06 EDT

Attachments
lockdep annotation (1.38 KB, patch)
2007-02-16 05:42 EST, Peter Zijlstra

Description Orion Poplawski 2007-01-11 12:47:36 EST
Description of problem:

Seen during today's (20070111) rawhide PXE/NFS install.

<4>=============================================
<4>[ INFO: possible recursive locking detected ]
<4>2.6.19-1.2909.fc7 #1
<4>---------------------------------------------
<4>anaconda/587 is trying to acquire lock:
<4> (&bdev->bd_mutex){--..}, at: [<c05fb380>] mutex_lock+0x21/0x24
<4>
<4>but task is already holding lock:
<4> (&bdev->bd_mutex){--..}, at: [<c05fb380>] mutex_lock+0x21/0x24
<4>
<4>other info that might help us debug this:
<4>1 lock held by anaconda/587:
<4> #0:  (&bdev->bd_mutex){--..}, at: [<c05fb380>] mutex_lock+0x21/0x24
<4>
<4>stack backtrace:
<4> [<c0405812>] show_trace_log_lvl+0x1a/0x2f
<4> [<c0405db2>] show_trace+0x12/0x14
<4> [<c0405e36>] dump_stack+0x16/0x18
<4> [<c043bd84>] __lock_acquire+0x116/0xa09
<4> [<c043c960>] lock_acquire+0x56/0x6f
<4> [<c05fb1fa>] __mutex_lock_slowpath+0xe5/0x24a
<4> [<c05fb380>] mutex_lock+0x21/0x24
<4> [<c04d82fb>] blkdev_ioctl+0x600/0x76d
<4> [<c04946b1>] block_ioctl+0x1b/0x1f
<4> [<c047ed5a>] do_ioctl+0x22/0x68
<4> [<c047eff2>] vfs_ioctl+0x252/0x265
<4> [<c047f04e>] sys_ioctl+0x49/0x63
<4> [<c0404070>] syscall_call+0x7/0xb

Is this a kernel issue or a problem with anaconda?
Comment 1 Peter Zijlstra 2007-02-16 03:53:46 EST
Ooh, fun one; this is BLKPG_DEL_PARTITION, right?

Looks like both a missing annotation and a potential ABBA deadlock due to lock-ordering problems.
Comment 2 Peter Zijlstra 2007-02-16 05:42:09 EST
Created attachment 148182 [details]
lockdep annotation

I got myself confused: there is no ABBA deadlock, and the locking order is OK.
It's just a missing annotation.
Comment 3 Orion Poplawski 2007-02-22 11:34:06 EST
Still seen with today's rawhide.

<4>[ INFO: possible recursive locking detected ]
<4>2.6.20-1.2936.fc7 #1
<4>---------------------------------------------
<4>anaconda/546 is trying to acquire lock:
<4> (&bdev->bd_mutex){--..}, at: [<c060cd60>] mutex_lock+0x21/0x24
<4>
<4>but task is already holding lock:
<4> (&bdev->bd_mutex){--..}, at: [<c060cd60>] mutex_lock+0x21/0x24
<4>
<4>other info that might help us debug this:
<4>1 lock held by anaconda/546:
<4> #0:  (&bdev->bd_mutex){--..}, at: [<c060cd60>] mutex_lock+0x21/0x24
<4>
<4>stack backtrace:
<4> [<c04068f2>] show_trace_log_lvl+0x1a/0x2f
<4> [<c0406eb1>] show_trace+0x12/0x14
<4> [<c0406f35>] dump_stack+0x16/0x18
<4> [<c043ff7a>] __lock_acquire+0x11f/0xba9
<4> [<c0440df6>] lock_acquire+0x56/0x6f
<4> [<c060cbc8>] __mutex_lock_slowpath+0xf7/0x26e
<4> [<c060cd60>] mutex_lock+0x21/0x24
<4> [<c04e5187>] blkdev_ioctl+0x608/0x775
<4> [<c049bc51>] block_ioctl+0x1b/0x1f
<4> [<c048626e>] do_ioctl+0x22/0x68
<4> [<c0486506>] vfs_ioctl+0x252/0x265
<4> [<c0486562>] sys_ioctl+0x49/0x63
<4> [<c0405150>] syscall_call+0x7/0xb
Comment 4 matti aarnio 2007-03-02 05:53:21 EST
Another data point: LVM on top of DMRAID.
I see this even when the LVM is not present on the real disk surfaces, but
is scanned for on DMRAID "fake raid" surfaces.

On a brand new Rawhide kernel.

   .....
device-mapper: ioctl: 4.11.0-ioctl (2006-10-12) initialised: dm-devel@redhat.com

=============================================
[ INFO: possible recursive locking detected ]
2.6.20-1.2949.fc7 #1
---------------------------------------------
init/1 is trying to acquire lock:
 (&md->io_lock){----}, at: [<ffffffff880dc95b>] dm_request+0x25/0x130 [dm_mod]

but task is already holding lock:
 (&md->io_lock){----}, at: [<ffffffff880dc95b>] dm_request+0x25/0x130 [dm_mod]

other info that might help us debug this:
1 lock held by init/1:
 #0:  (&md->io_lock){----}, at: [<ffffffff880dc95b>] dm_request+0x25/0x130 [dm_mod]

stack backtrace:

Call Trace:
 [<ffffffff802a30ad>] __lock_acquire+0x151/0xbc4
 [<ffffffff802a3f16>] lock_acquire+0x4c/0x65
 [<ffffffff880dc95b>] :dm_mod:dm_request+0x25/0x130
 [<ffffffff8029eb77>] down_read+0x3e/0x4a
 [<ffffffff880dc95b>] :dm_mod:dm_request+0x25/0x130
 [<ffffffff8021bf40>] generic_make_request+0x259/0x270
 [<ffffffff880db4be>] :dm_mod:__map_bio+0xc0/0x11d
 [<ffffffff880dbf7d>] :dm_mod:__split_bio+0x164/0x372
 [<ffffffff80263f18>] _spin_unlock_irq+0x2b/0x31
 [<ffffffff8026357e>] __down_read+0x3d/0xa1
 [<ffffffff880dca53>] :dm_mod:dm_request+0x11d/0x130
 [<ffffffff8021bf40>] generic_make_request+0x259/0x270
 [<ffffffff802c529c>] mempool_alloc_slab+0x11/0x13
 [<ffffffff80233f3f>] submit_bio+0xcf/0xd8
 [<ffffffff8021a37f>] submit_bh+0xed/0x111
 [<ffffffff802f1273>] block_read_full_page+0x296/0x2b4
 [<ffffffff802f35f2>] blkdev_get_block+0x0/0x4d
 [<ffffffff802f2850>] blkdev_readpage+0x13/0x15
 [<ffffffff8021288a>] __do_page_cache_readahead+0x197/0x212
 [<ffffffff802a2c1d>] debug_check_no_locks_freed+0x120/0x12f
 [<ffffffff802f3496>] bdev_alloc_inode+0x15/0x2a
 [<ffffffff802a2ad9>] trace_hardirqs_on+0x136/0x15a
 [<ffffffff80232fe7>] blockable_page_cache_readahead+0x5f/0xc1
 [<ffffffff80213a11>] page_cache_readahead+0x146/0x1bb
 [<ffffffff8020c42b>] do_generic_mapping_read+0x157/0x48d
 [<ffffffff8020d42f>] file_read_actor+0x0/0x19d
 [<ffffffff802167b7>] generic_file_aio_read+0x15a/0x19b
 [<ffffffff8020d15a>] do_sync_read+0xe2/0x126
 [<ffffffff8032430d>] file_has_perm+0xa7/0xb6
 [<ffffffff8029c78d>] autoremove_wake_function+0x0/0x38
 [<ffffffff8020b4e3>] vfs_read+0xcc/0x175
 [<ffffffff802114a4>] sys_read+0x47/0x6f
 [<ffffffff8025c11e>] system_call+0x7e/0x83
    ....
Comment 5 dex 2007-03-13 21:24:06 EDT
I've been getting these errors since the PATA port on my motherboard was enabled
(Promise 376 fake raid), and now it's bugging me! Latest rawhide kernel.

Linux dexterF7 2.6.20-1.2985.fc7 #1 SMP Mon Mar 12 20:21:25 EDT 2007 i686 athlon
i386 GNU/Linux

kernel: device-mapper: ioctl: 4.11.0-ioctl (2006-10-12) initialised:
dm-devel@redhat.com
 kernel: 
 kernel: =============================================
 kernel: [ INFO: possible recursive locking detected ]
 kernel: 2.6.20-1.2985.fc7 #1
 kernel: ---------------------------------------------
 kernel: init/1 is trying to acquire lock:
 kernel:  (&md->io_lock){----}, at: [<f896a7b1>] dm_request+0x18/0xea [dm_mod]
 kernel: 
 kernel: but task is already holding lock:
 kernel:  (&md->io_lock){----}, at: [<f896a7b1>] dm_request+0x18/0xea [dm_mod]
 kernel: 
 kernel: other info that might help us debug this:
 kernel: 1 lock held by init/1:
 kernel:  #0:  (&md->io_lock){----}, at: [<f896a7b1>] dm_request+0x18/0xea [dm_mod]
 kernel: 
 kernel: stack backtrace:
 kernel:  [<c04061ed>] show_trace_log_lvl+0x1a/0x2f
 kernel:  [<c04067b1>] show_trace+0x12/0x14
 kernel:  [<c0406835>] dump_stack+0x16/0x18
 kernel:  [<c0441f0b>] __lock_acquire+0x11f/0xba4
 kernel:  [<c0442d82>] lock_acquire+0x56/0x6f
 kernel:  [<c043b69c>] down_read+0x3f/0x51
 kernel:  [<f896a7b1>] dm_request+0x18/0xea [dm_mod]
 kernel:  [<c04e3f80>] generic_make_request+0x2d8/0x2eb
 kernel:  [<f8969516>] __map_bio+0xd5/0x128 [dm_mod]
 kernel:  [<f8969e8e>] __split_bio+0x16f/0x3d2 [dm_mod]
 kernel:  [<f896a875>] dm_request+0xdc/0xea [dm_mod]
 kernel:  [<c04e3f80>] generic_make_request+0x2d8/0x2eb
 kernel:  [<c04e5f78>] submit_bio+0xd7/0xdf
 kernel:  [<c0499aef>] submit_bh+0xf0/0x10f
 kernel:  [<c049c2ed>] block_read_full_page+0x2c9/0x2d9
 kernel:  [<c049dd4d>] blkdev_readpage+0xf/0x11
 kernel:  [<c0465d3f>] __do_page_cache_readahead+0x16a/0x1b6
 kernel:  [<c0465dd8>] blockable_page_cache_readahead+0x4d/0xa0
 kernel:  [<c0465ff0>] page_cache_readahead+0x129/0x190
 kernel:  [<c04609e3>] do_generic_mapping_read+0x12b/0x420
 kernel:  [<c046276c>] generic_file_aio_read+0x16a/0x19a
 kernel:  [<c047daff>] do_sync_read+0xc2/0xff
 kernel:  [<c047e3a4>] vfs_read+0xad/0x161
 kernel:  [<c047e830>] sys_read+0x3d/0x61
 kernel:  [<c040507c>] syscall_call+0x7/0xb
 kernel:  =======================
 kernel: kjournald starting.  Commit interval 5 seconds
Comment 6 Peter Zijlstra 2007-03-14 06:39:13 EDT
The md->io_lock issue is another one. Please see
https://bugzilla.redhat.com/bugzilla/show_bug.cgi?id=204311

If the bd_mutex one is solved I'd suggest closing this one.
Comment 7 Orion Poplawski 2007-03-14 16:50:06 EDT
I'm not seeing this anymore.
