Bug 514555 - Possible recursive locking in __blkdev_put
Summary: Possible recursive locking in __blkdev_put
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 12
Hardware: x86_64
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2009-07-29 16:23 UTC by Jerry James
Modified: 2010-12-05 06:40 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-12-05 06:40:03 UTC
Type: ---
Embargoed:



Description Jerry James 2009-07-29 16:23:19 UTC
Description of problem:
I tried to install today's Rawhide in a KVM virtual machine.  The install failed due to a bug in anaconda.  On the way out of the installer, this report was printed on the console:

=============================================
[ INFO: possible recursive locking detected ]
2.6.31-0.103.rc4.git2.fc12.x86_64 #1
---------------------------------------------
sh/225 is trying to acquire lock:
 (&bdev->bd_mutex){+.+.+.}, at: [<ffffffff8116f3f9>] __blkdev_put+0x48/0x161

but task is already holding lock:
 (&bdev->bd_mutex){+.+.+.}, at: [<ffffffff8116f3f9>] __blkdev_put+0x48/0x161

other info that might help us debug this:
1 lock held by sh/225:
 #0:   (&bdev->bd_mutex){+.+.+.}, at: [<ffffffff8116f3f9>] __blkdev_put+0x48/0x161

stack backtrace:
Pid: 225, comm: sh Not tainted 2.6.31-0.103.rc4.git2.fc12.x86_64 #1
Call Trace:
 [<ffffffff81097cc7>] __lock_acquire+0xb84/0xc0e
 [<ffffffff81097e3f>] lock_acquire+0xee/0x12e
 [<ffffffff8116f3f9>] ? __blkdev_put+0x48/0x161
 [<ffffffff814fba5d>] ? trace_hardirqs_on_thunk+0x3a/0x3f
 [<ffffffff8116f3f9>] ? __blkdev_put+0x48/0x161
 [<ffffffff8116f3f9>] ? __blkdev_put+0x48/0x161
 [<ffffffff814fa573>] __mutex_lock_common+0x5b/0x3bf
 [<ffffffff8116f3f9>] ? __blkdev_put+0x48/0x161
 [<ffffffff8116e0da>] ? bd_release+0x79/0x94
 [<ffffffff81037caf>] ? kvm_clock_read+0x34/0x4a
 [<ffffffff814fa9fa>] mutex_lock_nested+0x4f/0x6b
 [<ffffffff8116f3f9>] __blkdev_put+0x48/0x161
 [<ffffffff8116f535>] blkdev_put+0x23/0x39
 [<ffffffff8116f5ea>] close_bdev_exclusive+0x33/0x4e
 [<ffffffff811445d2>] kill_block_super+0x4d/0x68
 [<ffffffff81144eca>] deactivate_super+0x6e/0x9c
 [<ffffffff8115c565>] mntput_no_expire+0xd0/0x125
 [<ffffffff811431ff>] __fput+0x1d5/0x1f8
 [<ffffffff8114324f>] fput+0x2d/0x43
 [<ffffffff81354a96>] loop_clr_fd+0x1bb/0x1e0
 [<ffffffff813554e0>] lo_release+0x5a/0x9c
 [<ffffffff8116f446>] __blkdev_put+0x95/0x161
 [<ffffffff8116f535>] blkdev_put+0x23/0x39
 [<ffffffff8116f5ea>] close_bdev_exclusive+0x33/0x4e
 [<ffffffff811445d2>] kill_block_super+0x4d/0x68
 [<ffffffff81144eca>] deactivate_super+0x6e/0x9c
 [<ffffffff8115c565>] mntput_no_expire+0xd0/0x125
 [<ffffffff811431ff>] __fput+0x1d5/0x1f8
 [<ffffffff8114324f>] fput+0x2d/0x43
 [<ffffffff81119f75>] remove_vma+0x67/0xb5
 [<ffffffff8111a0d2>] exit_mmap+0x10f/0x145
 [<ffffffff81062258>] mmput+0x6b/0xdd
 [<ffffffff810671a6>] exit_mm+0x115/0x136
 [<ffffffff81068f07>] do_exit+0x1e1/0x768
 [<ffffffff81069521>] do_group_exit+0x93/0xc3
 [<ffffffff81077639>] get_signal_to_deliver+0x36f/0x3a1
 [<ffffffff81012324>] do_notify_resume+0x98/0x769
 [<ffffffff8131289b>] ? tty_read+0x9b/0xe8
 [<ffffffff81012fd9>] ? sysret_signal+0x5/0xd9
 [<ffffffff81096522>] ? trace_hardirqs_on_caller+0x139/0x175
 [<ffffffff81013057>] sysret_signal+0x83/0xd9

Version-Release number of selected component (if applicable):
kernel-2.6.31-0.103.rc4.git2.fc12.x86_64

How reproducible:
Always (or at least, twice in a row)

Steps to Reproduce:
1. Install the 29 July 2009 Rawhide image in a KVM guest
  
Actual results:
After an unrelated anaconda failure, the above backtrace is printed.

Expected results:
No kernel backtrace.

Additional info:
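For reference, a minimal userspace model of the pattern lockdep is flagging
here (a sketch with illustrative stand-in names, not the actual kernel code):
every block device has its own bd_mutex, but all bd_mutex instances share a
single lockdep class. Releasing a loop device drops its backing file, which in
turn releases the backing block device, so a second bd_mutex is acquired while
the first is still held. The two mutexes are distinct objects, so the nesting
is safe at runtime, but lockdep only sees "bd_mutex taken inside bd_mutex".

/* Simplified userspace model -- compile with: cc model.c -pthread */
#include <pthread.h>
#include <stdio.h>

struct block_device {
    pthread_mutex_t bd_mutex;     /* one mutex per device, one "class" overall */
    struct block_device *backing; /* non-NULL for a loop device */
};

/* Models __blkdev_put: releasing a loop device re-enters the same
 * function for its backing device while bd_mutex is still held. */
static void blkdev_put_model(struct block_device *bdev)
{
    pthread_mutex_lock(&bdev->bd_mutex);
    if (bdev->backing)                   /* lo_release -> loop_clr_fd -> fput */
        blkdev_put_model(bdev->backing); /* nested acquire, same lock class */
    pthread_mutex_unlock(&bdev->bd_mutex);
}

int main(void)
{
    struct block_device sda   = { PTHREAD_MUTEX_INITIALIZER, NULL };
    struct block_device loop0 = { PTHREAD_MUTEX_INITIALIZER, &sda };

    blkdev_put_model(&loop0);  /* runs fine; under lockdep it would warn */
    puts("released loop0 and its backing device");
    return 0;
}

The kernel's usual remedy for legitimate nesting of this kind is
mutex_lock_nested() with a distinct lockdep subclass per level; judging by the
report above, both acquisitions in the loop path end up in the same subclass.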

Comment 1 Allen Kistler 2009-08-23 10:17:38 UTC
Here is an updated trace, from the boot.iso of 22-Aug-2009.
It's not exactly the same as the one above, but it seems quite similar to me.

In my case, it's i386 in VMware.
There don't appear to be any other errors or faults before this one.
This trace occurs after a successful installation.

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.31-0.167.rc6.git6.fc12.i686 #1
-------------------------------------------------------
sh/188 is trying to acquire lock:
 (&type->s_umount_key#25){++++..}, at: [<c04fa20d>] deactivate_super+0x56/0x81

but task is already holding lock:
 (&bdev->bd_mutex){+.+.+.}, at: [<c051e952>] __blkdev_put+0x36/0x12e

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&bdev->bd_mutex){+.+.+.}:
       [<c0471297>] __lock_acquire+0x9b3/0xb25
       [<c04714c0>] lock_acquire+0xb7/0xeb
       [<c0822be4>] __mutex_lock_common+0x43/0x32b
       [<c0822fbf>] mutex_lock_nested+0x41/0x5a
       [<c051e952>] __blkdev_put+0x36/0x12e
       [<c051ea66>] blkdev_put+0x1c/0x2f
       [<c051eafd>] close_bdev_exclusive+0x2d/0x43
       [<c04fa07c>] get_sb_bdev+0xb5/0x14e
       [<c05a0326>] isofs_get_sb+0x28/0x3e
       [<c04f9ced>] vfs_kern_mount+0x94/0x113
       [<c04f9deb>] do_kern_mount+0x48/0xe4
       [<c050f5e1>] do_mount+0x6b1/0x71c
       [<c050f6cd>] sys_mount+0x81/0xc5
       [<c0403a50>] syscall_call+0x7/0xb
       [<ffffffff>] 0xffffffff

-> #0 (&type->s_umount_key#25){++++..}:
       [<c047119e>] __lock_acquire+0x8ba/0xb25
       [<c04714c0>] lock_acquire+0xb7/0xeb
       [<c0823266>] down_write+0x4b/0x9a
       [<c04fa20d>] deactivate_super+0x56/0x81
       [<c050e054>] mntput_no_expire+0x9f/0xe0
       [<c04f896d>] __fput+0x190/0x1a9
       [<c04f89ad>] fput+0x27/0x3a
       [<c06b7b6d>] loop_clr_fd+0x19f/0x1ba
       [<c06b7bce>] lo_release+0x46/0x7d
       [<c051e996>] __blkdev_put+0x7a/0x12e
       [<c051ea66>] blkdev_put+0x1c/0x2f
       [<c051eafd>] close_bdev_exclusive+0x2d/0x43
       [<c04f9a96>] kill_block_super+0x41/0x57
       [<c04fa212>] deactivate_super+0x5b/0x81
       [<c050e054>] mntput_no_expire+0x9f/0xe0
       [<c04f896d>] __fput+0x190/0x1a9
       [<c04f89ad>] fput+0x27/0x3a
       [<c04dd570>] remove_vma+0x4f/0x80
       [<c04dd688>] exit_mmap+0xe7/0x113
       [<c04427dc>] mmput+0x4d/0xb0
       [<c0446aa2>] exit_mm+0xeb/0x104
       [<c04485e0>] do_exit+0x19e/0x648
       [<c0448afc>] do_group_exit+0x72/0x99
       [<c045489f>] get_signal_to_deliver+0x333/0x35b
       [<c0402c3f>] do_notify_resume+0x87/0x7a7
       [<c0403b58>] work_notifysig+0x13/0x1b
       [<ffffffff>] 0xffffffff

other info that might help us debug this:

1 lock held by sh/188:
 #0:  (&bdev->bd_mutex){+.+.+.}, at: [<c051e952>] __blkdev_put+0x36/0x12e

stack backtrace:
Pid: 188, comm: sh Not tainted 2.6.31-0.167.rc6.git6.fc12.i686 #1
Call Trace:
 [<c082177c>] ? printk+0x22/0x36
 [<c04705cc>] print_circular_bug_tail+0x68/0x84
 [<c047119e>] __lock_acquire+0x8ba/0xb25
 [<c04714c0>] lock_acquire+0xb7/0xeb
 [<c04fa20d>] ? deactivate_super+0x56/0x81
 [<c04fa20d>] ? deactivate_super+0x56/0x81
 [<c0823266>] down_write+0x4b/0x9a
 [<c04fa20d>] ? deactivate_super+0x56/0x81
 [<c04fa20d>] deactivate_super+0x56/0x81
 [<c050e054>] mntput_no_expire+0x9f/0xe0
 [<c04f896d>] __fput+0x190/0x1a9
 [<c04f89ad>] fput+0x27/0x3a
 [<c06b7b6d>] loop_clr_fd+0x19f/0x1ba
 [<c06b7bce>] lo_release+0x46/0x7d
 [<c051e996>] __blkdev_put+0x7a/0x12e
 [<c051ea66>] blkdev_put+0x1c/0x2f
 [<c051eafd>] close_bdev_exclusive+0x2d/0x43
 [<c04f9a96>] kill_block_super+0x41/0x57
 [<c04fa212>] deactivate_super+0x5b/0x81
 [<c050e054>] mntput_no_expire+0x9f/0xe0
 [<c04f896d>] __fput+0x190/0x1a9
 [<c04f89ad>] fput+0x27/0x3a
 [<c04dd570>] remove_vma+0x4f/0x80
 [<c04dd688>] exit_mmap+0xe7/0x113
 [<c04427dc>] mmput+0x4d/0xb0
 [<c0446aa2>] exit_mm+0xeb/0x104
 [<c04485e0>] do_exit+0x19e/0x648
 [<c046fef9>] ? trace_hardirqs_on_caller+0x122/0x155
 [<c0448afc>] do_group_exit+0x72/0x99
 [<c045489f>] get_signal_to_deliver+0x333/0x35b
 [<c0402c3f>] do_notify_resume+0x87/0x7a7
 [<c0681b2f>] ? put_ldisc+0xa5/0xc3
 [<c046ff45>] ? trace_hardirqs_on+0x19/0x2c
 [<c0681b69>] ? tty_ldisc_deref+0x1c/0x2f
 [<c067af74>] ? tty_read+0x82/0xbf
 [<c067eed2>] ? n_tty_read+0x0/0x5e4
 [<c067aef2>] ? tty_read+0x0/0xbf
 [<c04f7868>] ? vfs_read+0x9a/0x10a
 [<c0403b58>] work_notifysig+0x13/0x1b
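
Spelling out the inversion in the two chains (again a simplified model with
stand-in names; s_umount is really a per-superblock rw_semaphore, modeled here
as a plain mutex): the mount path takes s_umount and then bd_mutex
(get_sb_bdev's cleanup calls close_bdev_exclusive), while tearing down a
loop-backed mount takes bd_mutex and then s_umount (__blkdev_put reaches
deactivate_super through lo_release and fput). If two tasks ran those paths
concurrently, each could take its first lock and block forever on the other's,
which is exactly what the "possible circular locking dependency" report warns
about.

/* Simplified userspace model -- compile with: cc model2.c -pthread */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t s_umount = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t bd_mutex = PTHREAD_MUTEX_INITIALIZER;

static void mount_path(void)          /* chain #1: s_umount -> bd_mutex */
{
    pthread_mutex_lock(&s_umount);
    pthread_mutex_lock(&bd_mutex);    /* close_bdev_exclusive in get_sb_bdev */
    pthread_mutex_unlock(&bd_mutex);
    pthread_mutex_unlock(&s_umount);
}

static void loop_release_path(void)   /* chain #0: bd_mutex -> s_umount */
{
    pthread_mutex_lock(&bd_mutex);
    pthread_mutex_lock(&s_umount);    /* deactivate_super via loop_clr_fd */
    pthread_mutex_unlock(&s_umount);
    pthread_mutex_unlock(&bd_mutex);
}

int main(void)
{
    /* Run sequentially here so the demo always terminates; running the two
     * paths in separate threads is what can deadlock, and the mere existence
     * of both acquisition orders is enough to trigger the lockdep report. */
    mount_path();
    loop_release_path();
    puts("both lock orders exercised");
    return 0;
}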

Comment 2 Bug Zapper 2009-11-16 11:10:38 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 12 development cycle.
Changing version to '12'.

More information and the reason for this action are here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 3 Bug Zapper 2010-11-04 10:39:12 UTC
This message is a reminder that Fedora 12 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 12.  It is Fedora's policy to close all
bug reports from releases that are no longer maintained.  At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '12'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 12's end of life.

Bug Reporter: Thank you for reporting this issue; we are sorry that
we may not be able to fix it before Fedora 12 reaches end of life.  If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora please change the 'version' of this 
bug to the applicable version.  If you are unable to change the version, 
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 4 Bug Zapper 2010-12-05 06:40:03 UTC
Fedora 12 changed to end-of-life (EOL) status on 2010-12-02. Fedora 12 is 
no longer maintained, which means that it will not receive any further 
security or bug fix updates. As a result we are closing this bug.

If you can reproduce this bug against a currently maintained version of 
Fedora please feel free to reopen this bug against that version.

Thank you for reporting this bug and we are sorry it could not be fixed.

