Bug 558230 - INFO: possible circular locking dependency detected
Summary: INFO: possible circular locking dependency detected
Keywords:
Status: CLOSED DUPLICATE of bug 576156
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 13
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2010-01-24 13:06 UTC by matti aarnio
Modified: 2010-03-30 23:06 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2010-03-30 23:06:30 UTC
Type: ---
Embargoed:


Links:
Linux Kernel Bugzilla 15142

Description matti aarnio 2010-01-24 13:06:39 UTC
Kernel dmesg report generated during boot:


dracut: Autoassembling MD Raid
md: md0 stopped.
md: bind<sdc1>
md: bind<sdb1>
md: bind<sdd1>
md: bind<sda1>
md: kicking non-fresh sdd1 from array!

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.33-0.18.rc4.git7.fc13.x86_64 #1
-------------------------------------------------------
mdadm/474 is trying to acquire lock:
 (s_active){++++.+}, at: [<ffffffff81175fde>] sysfs_addrm_finish+0x36/0x55

but task is already holding lock:
 (&bdev->bd_mutex){+.+.+.}, at: [<ffffffff811451c4>] bd_release_from_disk+0x3a/0xec

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&bdev->bd_mutex){+.+.+.}:
       [<ffffffff8107e402>] __lock_acquire+0xb71/0xd19
       [<ffffffff8107e686>] lock_acquire+0xdc/0x102
       [<ffffffff814757ee>] __mutex_lock_common+0x4b/0x392
       [<ffffffff81475bf9>] mutex_lock_nested+0x3e/0x43
       [<ffffffff81144570>] __blkdev_get+0x91/0x395
       [<ffffffff81144884>] blkdev_get+0x10/0x12
       [<ffffffff81144dc5>] open_by_devnum+0x2e/0x3f
       [<ffffffff813856eb>] lock_rdev+0x39/0xe4
       [<ffffffff81385881>] md_import_device+0xeb/0x2aa
       [<ffffffff81389fd9>] add_new_disk+0x71/0x441
       [<ffffffff8138c127>] md_ioctl+0xa01/0xf49
       [<ffffffff8121afd5>] __blkdev_driver_ioctl+0x39/0xa3
       [<ffffffff8121b970>] blkdev_ioctl+0x67d/0x6b1
       [<ffffffff81143b90>] block_ioctl+0x37/0x3b
       [<ffffffff8112af18>] vfs_ioctl+0x32/0xa6
       [<ffffffff8112b498>] do_vfs_ioctl+0x490/0x4d6
       [<ffffffff8112b534>] sys_ioctl+0x56/0x79
       [<ffffffff81009c32>] system_call_fastpath+0x16/0x1b

-> #1 (&new->reconfig_mutex){+.+.+.}:
       [<ffffffff8107e402>] __lock_acquire+0xb71/0xd19
       [<ffffffff8107e686>] lock_acquire+0xdc/0x102
       [<ffffffff814757ee>] __mutex_lock_common+0x4b/0x392
       [<ffffffff81475b73>] mutex_lock_interruptible_nested+0x3e/0x43
       [<ffffffff813844ee>] mddev_lock+0x17/0x19
       [<ffffffff813847c5>] md_attr_show+0x32/0x5d
       [<ffffffff81174e64>] sysfs_read_file+0xbd/0x17f
       [<ffffffff8111e4d1>] vfs_read+0xab/0x108
       [<ffffffff8111e5ee>] sys_read+0x4a/0x6e
       [<ffffffff81009c32>] system_call_fastpath+0x16/0x1b

-> #0 (s_active){++++.+}:
       [<ffffffff8107e2ac>] __lock_acquire+0xa1b/0xd19
       [<ffffffff8107e686>] lock_acquire+0xdc/0x102
       [<ffffffff811759eb>] sysfs_deactivate+0x9a/0x103
       [<ffffffff81175fde>] sysfs_addrm_finish+0x36/0x55
       [<ffffffff8117434c>] sysfs_hash_and_remove+0x53/0x6a
       [<ffffffff81176567>] sysfs_remove_link+0x21/0x23
       [<ffffffff81143f28>] del_symlink+0x1b/0x1d
       [<ffffffff8114520b>] bd_release_from_disk+0x81/0xec
       [<ffffffff813840c8>] unbind_rdev_from_array+0x67/0x154
       [<ffffffff81386327>] kick_rdev_from_array+0x16/0x23
       [<ffffffff8138946c>] do_md_run+0x1a1/0x873
       [<ffffffff8138c408>] md_ioctl+0xce2/0xf49
       [<ffffffff8121afd5>] __blkdev_driver_ioctl+0x39/0xa3
       [<ffffffff8121b970>] blkdev_ioctl+0x67d/0x6b1
       [<ffffffff81143b90>] block_ioctl+0x37/0x3b
       [<ffffffff8112af18>] vfs_ioctl+0x32/0xa6
       [<ffffffff8112b498>] do_vfs_ioctl+0x490/0x4d6
       [<ffffffff8112b534>] sys_ioctl+0x56/0x79
       [<ffffffff81009c32>] system_call_fastpath+0x16/0x1b

other info that might help us debug this:

2 locks held by mdadm/474:
 #0:  (&new->reconfig_mutex){+.+.+.}, at: [<ffffffff813844ee>] mddev_lock+0x17/0x19
 #1:  (&bdev->bd_mutex){+.+.+.}, at: [<ffffffff811451c4>] bd_release_from_disk+0x3a/0xec

stack backtrace:
Pid: 474, comm: mdadm Not tainted 2.6.33-0.18.rc4.git7.fc13.x86_64 #1
Call Trace:
 [<ffffffff8107d46f>] print_circular_bug+0xa8/0xb6
 [<ffffffff8107e2ac>] __lock_acquire+0xa1b/0xd19
 [<ffffffff8107e686>] lock_acquire+0xdc/0x102
 [<ffffffff81175fde>] ? sysfs_addrm_finish+0x36/0x55
 [<ffffffff8107c15e>] ? lockdep_init_map+0x9e/0x113
 [<ffffffff811759eb>] sysfs_deactivate+0x9a/0x103
 [<ffffffff81175fde>] ? sysfs_addrm_finish+0x36/0x55
 [<ffffffff8147560d>] ? __mutex_unlock_slowpath+0x120/0x132
 [<ffffffff81175fde>] sysfs_addrm_finish+0x36/0x55
 [<ffffffff8117434c>] sysfs_hash_and_remove+0x53/0x6a
 [<ffffffff81176567>] sysfs_remove_link+0x21/0x23
 [<ffffffff81143f28>] del_symlink+0x1b/0x1d
 [<ffffffff8114520b>] bd_release_from_disk+0x81/0xec
 [<ffffffff813840c8>] unbind_rdev_from_array+0x67/0x154
 [<ffffffff814743e1>] ? printk+0x41/0x48
 [<ffffffff81386327>] kick_rdev_from_array+0x16/0x23
 [<ffffffff8138946c>] do_md_run+0x1a1/0x873
 [<ffffffff8138c408>] md_ioctl+0xce2/0xf49
 [<ffffffff81010385>] ? native_sched_clock+0x2d/0x5f
 [<ffffffff81070ffc>] ? cpu_clock+0x43/0x5e
 [<ffffffff8121afd5>] __blkdev_driver_ioctl+0x39/0xa3
 [<ffffffff8107b913>] ? lock_release_holdtime+0x2c/0xdb
 [<ffffffff8121b970>] blkdev_ioctl+0x67d/0x6b1
 [<ffffffff81143b90>] block_ioctl+0x37/0x3b
 [<ffffffff8112af18>] vfs_ioctl+0x32/0xa6
 [<ffffffff8112b498>] do_vfs_ioctl+0x490/0x4d6
 [<ffffffff8112b534>] sys_ioctl+0x56/0x79
 [<ffffffff81009c32>] system_call_fastpath+0x16/0x1b
md: unbind<sdd1>
md: export_rdev(sdd1)
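
Decoded, the dependency chain in the report says three code paths take these locks in mutually inconsistent orders: the add-disk ioctl path holds reconfig_mutex and then takes bd_mutex (#2), a sysfs attribute read holds the s_active reference and then takes reconfig_mutex (#1), and the kick path here holds bd_mutex and then waits for s_active (#0), which closes the cycle. What follows is only a minimal userspace analogue of that shape, not the md/sysfs code; the lock names are reused purely for readability, and a lock-order checker such as Helgrind or ThreadSanitizer flags it the same way lockdep does:

/* Illustration only: three paths, three locks, mutually inconsistent order. */
#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t bd_mutex       = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t reconfig_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t s_active       = PTHREAD_MUTEX_INITIALIZER;

/* mirrors chain entry #2: reconfig_mutex held, bd_mutex acquired */
static void *add_disk_path(void *arg)
{
        pthread_mutex_lock(&reconfig_mutex);
        pthread_mutex_lock(&bd_mutex);
        pthread_mutex_unlock(&bd_mutex);
        pthread_mutex_unlock(&reconfig_mutex);
        return arg;
}

/* mirrors chain entry #1: s_active held, reconfig_mutex acquired */
static void *sysfs_read_path(void *arg)
{
        pthread_mutex_lock(&s_active);
        pthread_mutex_lock(&reconfig_mutex);
        pthread_mutex_unlock(&reconfig_mutex);
        pthread_mutex_unlock(&s_active);
        return arg;
}

/* mirrors the acquisition being attempted (#0): bd_mutex held, s_active wanted */
static void *kick_rdev_path(void *arg)
{
        pthread_mutex_lock(&bd_mutex);
        pthread_mutex_lock(&s_active);   /* closes the cycle */
        pthread_mutex_unlock(&s_active);
        pthread_mutex_unlock(&bd_mutex);
        return arg;
}

int main(void)
{
        pthread_t t[3];
        pthread_create(&t[0], NULL, add_disk_path, NULL);
        pthread_create(&t[1], NULL, sysfs_read_path, NULL);
        pthread_create(&t[2], NULL, kick_rdev_path, NULL);
        for (int i = 0; i < 3; i++)
                pthread_join(t[i], NULL);
        return 0;
}

Built with gcc -pthread, this usually runs to completion because each path is short, but the ordering is unsound: if all three paths ever block at their second lock at the same time, they deadlock, which is the potential hazard the kernel report is warning about between mdadm's ioctl, a concurrent sysfs read, and the symlink removal.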

Comment 1 matti aarnio 2010-01-24 13:14:40 UTC
The troubled component device in my RAID5 array is correctly kicked out of the array and the system boots up, so this is not a _fatal_ problem in itself.

The array gets repaired over the next 12 hours or so, and no further lock troubles appear.

The previous (untroubled) boot ran a 2.6.32-series kernel.

Comment 2 matti aarnio 2010-02-01 00:32:55 UTC
bugzilla.kernel.org:

> This trace seems to suggest that there is a lock called 's_active'
> in sysfs.  However the only 's_active' in sysfs is an atomic_t.
> It did used to be a rwsem, but that was back in 2.6.22.
> 
> Can you check what patches that kernel has which are not in mainline?
> And find out what s_active might be?
> 
> NeilBrown

Anybody familiar with the Rawhide kernel source tree?

Comment 3 matti aarnio 2010-02-01 00:37:24 UTC
The bugzilla.kernel.org bug 15201 happens on a slightly newer kernel version,
when starting the radeon driver for X.

(II) LoadModule: "radeon"
(II) Loading /usr/lib64/xorg/modules/drivers/radeon_drv.so
(II) Module radeon: vendor="X.Org Foundation"
        compiled for 1.7.99.3, module version = 6.12.99
        Module class: X.Org Video Driver
        ABI class: X.Org Video Driver, version 7.0

Comment 4 Bug Zapper 2010-03-15 14:12:10 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 13 development cycle.
Changing version to '13'.

More information and the reason for this action are here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 5 Chuck Ebbert 2010-03-30 23:06:30 UTC
(In reply to comment #2)
> bugzilla.kernel.org:
> 
> > This trace seems to suggest that there is a lock called 's_active'
> > in sysfs.  However the only 's_active' in sysfs is an atomic_t.
> > It did used to be a rwsem, but that was back in 2.6.22.
> > 
> > Can you check what patches that kernel has which are not in mainline?
> > And find out what s_active might be?
> > 
> > NeilBrown
> 
> Anybody familiar with Rawhide kernel source tree?    

That part is just 2.6.33 with CONFIG_LOCKDEP enabled...
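
For context, the validator that prints these "possible circular locking dependency" reports is the one enabled by CONFIG_PROVE_LOCKING; a lockdep-enabled debug kernel carries options roughly along these lines (an illustrative fragment, not the exact Fedora 13 debug config):

# illustrative .config fragment; the exact Fedora debug config varies by release
CONFIG_DEBUG_KERNEL=y
CONFIG_LOCKDEP=y
CONFIG_DEBUG_LOCK_ALLOC=y
# PROVE_LOCKING is the check that produces the circular-dependency reports
CONFIG_PROVE_LOCKING=y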

*** This bug has been marked as a duplicate of bug 576156 ***

