Bug 208840 - locking problem in md
Status: CLOSED DUPLICATE of bug 208732
Product: Fedora
Classification: Fedora
Component: kernel
Version: rawhide
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Kernel Maintainer List
QA Contact: Brian Brock
Reported: 2006-10-02 07:58 EDT by David Woodhouse
Modified: 2007-11-30 17:11 EST

Doc Type: Bug Fix
Last Closed: 2006-10-04 19:27:34 EDT

Attachments: None
Description David Woodhouse 2006-10-02 07:58:26 EDT
md: Autodetecting RAID arrays.
md: autorun ...
md: considering hde10 ...
md:  adding hde10 ...
md: hde9 has different UUID to hde10
md: hde8 has different UUID to hde10
md: hde7 has different UUID to hde10
md: hde5 has different UUID to hde10
md: hde3 has different UUID to hde10
md: hde2 has different UUID to hde10
md:  adding hda10 ...
md: hda9 has different UUID to hde10
md: hda8 has different UUID to hde10
md: hda7 has different UUID to hde10
md: hda5 has different UUID to hde10
md: hda3 has different UUID to hde10
md: hda2 has different UUID to hde10
md: created md6
md: bind<hda10>

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.18-1.2708.fc6 #1
-------------------------------------------------------
init/1 is trying to acquire lock:
 (&bdev_part_lock_key){--..}, at: [<c0613762>] mutex_lock+0x21/0x24

but task is already holding lock:
 (&new->reconfig_mutex){--..}, at: [<c0613431>] mutex_lock_interruptible+0x21/0x24

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&new->reconfig_mutex){--..}:
       [<c043bf98>] lock_acquire+0x4b/0x6b
       [<c0613270>] __mutex_lock_interruptible_slowpath+0xbc/0x25c
       [<c0613431>] mutex_lock_interruptible+0x21/0x24
       [<c05a724e>] md_open+0x28/0x5d
       [<c047a159>] do_open+0x8b/0x2f3
       [<c047a55a>] blkdev_open+0x1d/0x46
       [<c0471a4a>] __dentry_open+0xc8/0x1ab
       [<c0471b9b>] nameidata_to_filp+0x1c/0x2e
       [<c0471bdb>] do_filp_open+0x2e/0x35
       [<c0471c22>] do_sys_open+0x40/0xb5
       [<c0471cc3>] sys_open+0x16/0x18
       [<c0403fb7>] syscall_call+0x7/0xb

-> #1 (&bdev->bd_mutex){--..}:
       [<c043bf98>] lock_acquire+0x4b/0x6b
       [<c06135f3>] __mutex_lock_slowpath+0xbc/0x20a
       [<c0613762>] mutex_lock+0x21/0x24
       [<c047a12a>] do_open+0x5c/0x2f3
       [<c047a430>] blkdev_get+0x6f/0x7a
       [<c047a1d7>] do_open+0x109/0x2f3
       [<c047a55a>] blkdev_open+0x1d/0x46
       [<c0471a4a>] __dentry_open+0xc8/0x1ab
       [<c0471b9b>] nameidata_to_filp+0x1c/0x2e
       [<c0471bdb>] do_filp_open+0x2e/0x35
       [<c0471c22>] do_sys_open+0x40/0xb5
       [<c0471cc3>] sys_open+0x16/0x18
       [<c0403fb7>] syscall_call+0x7/0xb

-> #0 (&bdev_part_lock_key){--..}:
       [<c043bf98>] lock_acquire+0x4b/0x6b
       [<c06135f3>] __mutex_lock_slowpath+0xbc/0x20a
       [<c0613762>] mutex_lock+0x21/0x24
       [<c0479c8a>] bd_claim_by_disk+0x5f/0x169
       [<c05a16ca>] bind_rdev_to_array+0x20a/0x228
       [<c05a3571>] autorun_devices+0x1c8/0x29d
       [<c05a5dea>] md_ioctl+0x104/0x1540
       [<c04df7e0>] blkdev_driver_ioctl+0x49/0x5b
       [<c04dff06>] blkdev_ioctl+0x714/0x762
       [<c0479933>] block_ioctl+0x16/0x1b
       [<c048317e>] do_ioctl+0x22/0x67
       [<c048341b>] vfs_ioctl+0x258/0x26b
       [<c0483475>] sys_ioctl+0x47/0x62
       [<c0403fb7>] syscall_call+0x7/0xb

other info that might help us debug this:

1 lock held by init/1:
 #0:  (&new->reconfig_mutex){--..}, at: [<c0613431>] mutex_lock_interruptible+0x21/0x24

stack backtrace:
 [<c04051ed>] show_trace_log_lvl+0x58/0x16a
 [<c04057fa>] show_trace+0xd/0x10
 [<c0405913>] dump_stack+0x19/0x1b
 [<c043b10f>] print_circular_bug_tail+0x59/0x64
 [<c043b898>] __lock_acquire+0x77e/0x90d
 [<c043bf98>] lock_acquire+0x4b/0x6b
 [<c06135f3>] __mutex_lock_slowpath+0xbc/0x20a
 [<c0613762>] mutex_lock+0x21/0x24
 [<c0479c8a>] bd_claim_by_disk+0x5f/0x169
 [<c05a16ca>] bind_rdev_to_array+0x20a/0x228
 [<c05a3571>] autorun_devices+0x1c8/0x29d
 [<c05a5dea>] md_ioctl+0x104/0x1540
 [<c04df7e0>] blkdev_driver_ioctl+0x49/0x5b
 [<c04dff06>] blkdev_ioctl+0x714/0x762
 [<c0479933>] block_ioctl+0x16/0x1b
 [<c048317e>] do_ioctl+0x22/0x67
 [<c048341b>] vfs_ioctl+0x258/0x26b
 [<c0483475>] sys_ioctl+0x47/0x62
 [<c0403fb7>] syscall_call+0x7/0xb
DWARF2 unwinder stuck at syscall_call+0x7/0xb
Leftover inexact backtrace:
 [<c04057fa>] show_trace+0xd/0x10
 [<c0405913>] dump_stack+0x19/0x1b
 [<c043b10f>] print_circular_bug_tail+0x59/0x64
 [<c043b898>] __lock_acquire+0x77e/0x90d
 [<c043bf98>] lock_acquire+0x4b/0x6b
 [<c06135f3>] __mutex_lock_slowpath+0xbc/0x20a
 [<c0613762>] mutex_lock+0x21/0x24
 [<c0479c8a>] bd_claim_by_disk+0x5f/0x169
 [<c05a16ca>] bind_rdev_to_array+0x20a/0x228
 [<c05a3571>] autorun_devices+0x1c8/0x29d
 [<c05a5dea>] md_ioctl+0x104/0x1540
 [<c04df7e0>] blkdev_driver_ioctl+0x49/0x5b
 [<c04dff06>] blkdev_ioctl+0x714/0x762
 [<c0479933>] block_ioctl+0x16/0x1b
 [<c048317e>] do_ioctl+0x22/0x67
 [<c048341b>] vfs_ioctl+0x258/0x26b
 [<c0483475>] sys_ioctl+0x47/0x62
 [<c0403fb7>] syscall_call+0x7/0xb
md: bind<hde10>
md: running: <hde10><hda10>
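
In short, the cycle lockdep reports is an AB-BA inversion: the normal open() path takes bd_mutex in do_open() and then reconfig_mutex in md_open(), while the RAID autorun ioctl path already holds reconfig_mutex when bind_rdev_to_array() calls bd_claim_by_disk(), which goes for the bdev partition lock. Below is a minimal userspace sketch of the same pattern; the pthread mutexes are stand-ins for the kernel locks, not the actual md code.

/*
 * Userspace analogue of the inversion above. Two threads take the
 * same pair of mutexes in opposite orders; if each wins its first
 * lock, neither can take the second and both block forever.
 * Build with: cc -pthread abba.c
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t reconfig_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t bdev_mutex     = PTHREAD_MUTEX_INITIALIZER;

/* Path 1: open() on the md device -- bd_mutex, then reconfig_mutex. */
static void *open_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&bdev_mutex);     /* do_open() */
    pthread_mutex_lock(&reconfig_mutex); /* md_open() */
    puts("open path acquired both locks");
    pthread_mutex_unlock(&reconfig_mutex);
    pthread_mutex_unlock(&bdev_mutex);
    return NULL;
}

/* Path 2: autorun ioctl -- reconfig_mutex, then the bdev lock. */
static void *autorun_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&reconfig_mutex); /* md_ioctl() */
    pthread_mutex_lock(&bdev_mutex);     /* bd_claim_by_disk() */
    puts("autorun path acquired both locks");
    pthread_mutex_unlock(&bdev_mutex);
    pthread_mutex_unlock(&reconfig_mutex);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    /* May complete or may deadlock depending on scheduling -- which
     * is exactly why lockdep flags the dependency even before any
     * actual hang is observed. */
    pthread_create(&a, NULL, open_path, NULL);
    pthread_create(&b, NULL, autorun_path, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}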
Comment 1 Dave Jones 2006-10-04 19:27:34 EDT

*** This bug has been marked as a duplicate of 208732 ***
