abrt version: 2.0.1
architecture: x86_64
cmdline: ro root=UUID=48708e44-0d78-4227-a17a-75170ac0cb4b rd_NO_LUKS rd_NO_LVM rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us rhgb quiet
comment: Home directories are automounted. Ran rsync backup over nfs. Otherwise, not sure what may have triggered it.
component: kernel
kernel: undefined
os_release: Fedora release 15 (Lovelock)
package: kernel
reason: [ INFO: possible recursive locking detected ]
reported_to: kerneloops: URL=http://submit.kerneloops.org/submitoops.php
time: Mon Jun 20 16:08:06 2011

backtrace:
[ INFO: possible recursive locking detected ]
3.0-0.rc3.git5.1.fc16.x86_64 #1
---------------------------------------------
automount/3606 is trying to acquire lock:
 (&(&dentry->d_lock)->rlock/1){+.+...}, at: [<ffffffff811fab69>] autofs4_expire_indirect+0x2bf/0x3ef
but task is already holding lock:
 (&(&dentry->d_lock)->rlock/1){+.+...}, at: [<ffffffff811fab69>] autofs4_expire_indirect+0x2bf/0x3ef

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&(&dentry->d_lock)->rlock);
  lock(&(&dentry->d_lock)->rlock);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

2 locks held by automount/3606:
 #0:  (&(&sbi->lookup_lock)->rlock){+.+...}, at: [<ffffffff811faae9>] autofs4_expire_indirect+0x23f/0x3ef
 #1:  (&(&dentry->d_lock)->rlock/1){+.+...}, at: [<ffffffff811fab69>] autofs4_expire_indirect+0x2bf/0x3ef

stack backtrace:
Pid: 3606, comm: automount Not tainted 3.0-0.rc3.git5.1.fc16.x86_64 #1
Call Trace:
 [<ffffffff81088a15>] __lock_acquire+0x917/0xcf7
 [<ffffffff81085bd6>] ? trace_hardirqs_off+0xd/0xf
 [<ffffffff81088fc2>] ? lock_release_non_nested+0x1cd/0x232
 [<ffffffff811fab69>] ? autofs4_expire_indirect+0x2bf/0x3ef
 [<ffffffff81089282>] lock_acquire+0xbf/0x103
 [<ffffffff811fab69>] ? autofs4_expire_indirect+0x2bf/0x3ef
 [<ffffffff814f4595>] _raw_spin_lock_nested+0x34/0x69
 [<ffffffff811fab69>] ? autofs4_expire_indirect+0x2bf/0x3ef
 [<ffffffff814f4e75>] ? _raw_spin_unlock+0x28/0x2c
 [<ffffffff811fab69>] autofs4_expire_indirect+0x2bf/0x3ef
 [<ffffffff811fb248>] ? autofs_dev_ioctl_askumount+0x2f/0x2f
 [<ffffffff811fae6f>] autofs4_do_expire_multi+0x40/0xf9
 [<ffffffff81071596>] ? rcu_read_unlock+0x21/0x23
 [<ffffffff811fb248>] ? autofs_dev_ioctl_askumount+0x2f/0x2f
 [<ffffffff811fb267>] autofs_dev_ioctl_expire+0x1f/0x21
 [<ffffffff811fb8ae>] _autofs_dev_ioctl+0x2aa/0x347
 [<ffffffff811fb95e>] autofs_dev_ioctl+0x13/0x17
 [<ffffffff811469ed>] do_vfs_ioctl+0x47b/0x4bc
 [<ffffffff81146a84>] sys_ioctl+0x56/0x7a
 [<ffffffff814fba02>] system_call_fastpath+0x16/0x1b

event_log:
2011-06-20-16:35:52> Submitting oops report to http://submit.kerneloops.org/submitoops.php
2011-06-20-16:35:53  Kernel oops report was uploaded
Also, this is an F15 system running a rawhide kernel.
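
For context on the "May be due to missing lock nesting notation" hint in the report: lockdep tracks locks by class rather than by instance, so holding two dentry->d_lock spinlocks at the same time looks like recursive locking unless the extra acquisition is annotated with a distinct subclass via spin_lock_nested(). The snippet below is only a minimal, hypothetical sketch of that annotation pattern (the struct and function names are made up); it is not the autofs4 patch discussed later in this bug. Note that in this trace both held d_lock entries already carry the /1 subclass suffix, which is why a change in the autofs4 expire code itself was still needed.

/*
 * Illustration only -- NOT the autofs4 fix. A minimal example of
 * lockdep's nesting annotation: all "struct node" spinlocks share one
 * lock class, so taking two of them at once is reported as possible
 * recursive locking unless the second acquisition is annotated.
 */
#include <linux/lockdep.h>
#include <linux/spinlock.h>

struct node {
	spinlock_t lock;	/* every node's lock is in the same class */
	struct node *child;
};

static void lock_parent_and_child(struct node *parent)
{
	struct node *child = parent->child;

	spin_lock(&parent->lock);
	/*
	 * A plain spin_lock(&child->lock) here would trigger the
	 * "possible recursive locking detected" splat. The nesting
	 * annotation tells lockdep this is a deliberate, ordered
	 * acquisition of a second lock in the same class (the dcache
	 * and autofs code use the DENTRY_D_LOCK_NESTED subclass for
	 * d_lock in the same way).
	 */
	spin_lock_nested(&child->lock, SINGLE_DEPTH_NESTING);

	/* ... operate on the locked parent/child pair ... */

	spin_unlock(&child->lock);
	spin_unlock(&parent->lock);
}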
See https://bugzilla.kernel.org/show_bug.cgi?id=33242. I'll have a look at this again. I couldn't see the problem the last time I looked, but there was no proposed patch to check against then. If it makes sense to me, I'll build a test kernel.
Poke. Is this still happening on a 3.0.1 F16 kernel?
(In reply to comment #3)
> Poke. Is this still happening on a 3.0.1 F16 kernel?

Looks like it is still a problem in the upstream kernel, because the suggested patch hasn't been included yet. I will grab a copy of the patch, send it to the VFS maintainer, and see how it goes.
Created attachment 518167 [details]
autofs4 - fix lockdep splat in autofs

Are my changes to the description of this patch OK with you, Steven?
*** Bug 784089 has been marked as a duplicate of this bug. ***
I spoke with Ian and applied:
http://article.gmane.org/gmane.linux.kernel/1182197/raw

The patch should land upstream very soon.
Should be in tomorrow's rawhide.