Description of problem:
I seem to be getting this each time I start 'qemu-kvm':

Aug 4 06:58:47 localhost kernel:
Aug 4 06:58:47 localhost kernel: =============================================
Aug 4 06:58:47 localhost kernel: [ INFO: possible recursive locking detected ]
Aug 4 06:58:47 localhost kernel: 2.6.27-0.215.rc1.git4.fc10.i686 #1
Aug 4 06:58:47 localhost kernel: ---------------------------------------------
Aug 4 06:58:47 localhost kernel: qemu-kvm/4065 is trying to acquire lock:
Aug 4 06:58:47 localhost kernel:  (&inode->i_data.i_mmap_lock){--..}, at: [<c0488387>] mm_take_all_locks+0xb6/0xd8
Aug 4 06:58:47 localhost kernel:
Aug 4 06:58:47 localhost kernel: but task is already holding lock:
Aug 4 06:58:47 localhost kernel:  (&inode->i_data.i_mmap_lock){--..}, at: [<c0488387>] mm_take_all_locks+0xb6/0xd8
Aug 4 06:58:47 localhost kernel:
Aug 4 06:58:47 localhost kernel: other info that might help us debug this:
Aug 4 06:58:47 localhost kernel: 4 locks held by qemu-kvm/4065:
Aug 4 06:58:47 localhost kernel:  #0:  (&mm->mmap_sem){----}, at: [<c0495dc0>] do_mmu_notifier_register+0x4d/0xea
Aug 4 06:58:47 localhost kernel:  #1:  (mm_all_locks_mutex){--..}, at: [<c04882fe>] mm_take_all_locks+0x2d/0xd8
Aug 4 06:58:47 localhost kernel:  #2:  (&inode->i_data.i_mmap_lock){--..}, at: [<c0488387>] mm_take_all_locks+0xb6/0xd8
Aug 4 06:58:47 localhost kernel:  #3:  (&anon_vma->lock){--..}, at: [<c048833e>] mm_take_all_locks+0x6d/0xd8
Aug 4 06:58:47 localhost kernel:
Aug 4 06:58:47 localhost kernel: stack backtrace:
Aug 4 06:58:47 localhost kernel: Pid: 4065, comm: qemu-kvm Not tainted 2.6.27-0.215.rc1.git4.fc10.i686 #1
Aug 4 06:58:47 localhost kernel:  [<c06808dd>] ? printk+0x14/0x17
Aug 4 06:58:47 localhost kernel:  [<c044b7e8>] __lock_acquire+0x6be/0x97d
Aug 4 06:58:47 localhost kernel:  [<c044bb11>] lock_acquire+0x6a/0x90
Aug 4 06:58:47 localhost kernel:  [<c0488387>] ? mm_take_all_locks+0xb6/0xd8
Aug 4 06:58:47 localhost kernel:  [<c0682ca7>] _spin_lock+0x21/0x4e
Aug 4 06:58:47 localhost kernel:  [<c0488387>] ? mm_take_all_locks+0xb6/0xd8
Aug 4 06:58:47 localhost kernel:  [<c0488387>] mm_take_all_locks+0xb6/0xd8
Aug 4 06:58:47 localhost kernel:  [<c0495dc7>] do_mmu_notifier_register+0x54/0xea
Aug 4 06:58:47 localhost kernel:  [<c0495e80>] mmu_notifier_register+0x12/0x16
Aug 4 06:58:47 localhost kernel:  [<f93d27cd>] kvm_dev_ioctl+0xe4/0x20e [kvm]
Aug 4 06:58:47 localhost kernel:  [<f93d26e9>] ? kvm_dev_ioctl+0x0/0x20e [kvm]
Aug 4 06:58:47 localhost kernel:  [<c04a626f>] vfs_ioctl+0x27/0x6e
Aug 4 06:58:47 localhost kernel:  [<c04a6505>] do_vfs_ioctl+0x24f/0x262
Aug 4 06:58:47 localhost kernel:  [<c04f10dd>] ? selinux_file_ioctl+0x3a/0x3d
Aug 4 06:58:47 localhost kernel:  [<c04a655d>] sys_ioctl+0x45/0x60
Aug 4 06:58:47 localhost kernel:  [<c0403cbe>] syscall_call+0x7/0xb
Aug 4 06:58:47 localhost kernel:  [<c068007b>] ? acpi_processor_start+0x2cd/0x6b6
Aug 4 06:58:47 localhost kernel: =======================

Version-Release number of selected component (if applicable):
kernel-2.6.27-0.215.rc1.git4.fc10.i686

How reproducible:
Seems repeatable

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Created attachment 313389 [details] please test this patch
I applied the attached patch to 0.215 and built it locally. Running the patched kernel, I can no longer reproduce the "possible recursive locking" spew above.
Haven't been able to reproduce in kernel-2.6.27-0.254.rc3.fc10.i686. Fixed in rc3?
This bug appears to have been reported against 'rawhide' during the Fedora 10 development cycle. Changing version to '10'. More information and reason for this action is here: http://fedoraproject.org/wiki/BugZappers/HouseKeeping
*** Bug 459669 has been marked as a duplicate of this bug. ***
Based on Comment #3, I'm going to close this out as CURRENTRELEASE. If the problem happens again, please feel free to re-open.

Chris Lalancette