Bug 459669 - kernel lock warning when launching a VM with virsh
Status: CLOSED DUPLICATE of bug 457779
Product: Fedora
Classification: Fedora
Component: kvm
Version: 10
Hardware: x86_64 Linux
Priority: medium
Severity: medium
Assigned To: Glauber Costa
QA Contact: Fedora Extras Quality Assurance
Reported: 2008-08-20 19:30 EDT by James Morris
Modified: 2009-10-29 06:55 EDT
CC: 4 users
Doc Type: Bug Fix
Last Closed: 2009-10-29 06:55:35 EDT

Attachments: None
Description James Morris 2008-08-20 19:30:12 EDT
Description of problem:

When launching a VM with virsh (this is an AMD SVM system), I get the following warning:


=============================================
[ INFO: possible recursive locking detected ]
2.6.27-0.244.rc2.git1.fc10.x86_64 #1
---------------------------------------------
qemu-kvm/3161 is trying to acquire lock:
 (&inode->i_data.i_mmap_lock){--..}, at: [<ffffffff810b119e>] mm_take_all_locks+0xd5/0xfc

but task is already holding lock:
 (&inode->i_data.i_mmap_lock){--..}, at: [<ffffffff810b119e>] mm_take_all_locks+0xd5/0xfc

other info that might help us debug this:
4 locks held by qemu-kvm/3161:
 #0:  (&mm->mmap_sem){----}, at: [<ffffffff810c5bc7>] do_mmu_notifier_register+0x5a/0x116
 #1:  (mm_all_locks_mutex){--..}, at: [<ffffffff810b10fb>] mm_take_all_locks+0x32/0xfc
 #2:  (&inode->i_data.i_mmap_lock){--..}, at: [<ffffffff810b119e>] mm_take_all_locks+0xd5/0xfc
 #3:  (&anon_vma->lock){--..}, at: [<ffffffff810b1146>] mm_take_all_locks+0x7d/0xfc

stack backtrace:
Pid: 3161, comm: qemu-kvm Not tainted 2.6.27-0.244.rc2.git1.fc10.x86_64 #1

Call Trace:
 [<ffffffff810668f5>] __lock_acquire+0x790/0xaa7
 [<ffffffff8130b961>] ? __mutex_lock_common+0x30a/0x35b
 [<ffffffff810b10fb>] ? mm_take_all_locks+0x32/0xfc
 [<ffffffff810b119e>] ? mm_take_all_locks+0xd5/0xfc
 [<ffffffff81066ca2>] lock_acquire+0x96/0xc3
 [<ffffffff810b119e>] ? mm_take_all_locks+0xd5/0xfc
 [<ffffffff8130d068>] _spin_lock+0x2b/0x58
 [<ffffffff810b119e>] mm_take_all_locks+0xd5/0xfc
 [<ffffffff810c5bcf>] do_mmu_notifier_register+0x62/0x116
 [<ffffffff810c5ca8>] mmu_notifier_register+0x13/0x15
 [<ffffffffa01a7c10>] kvm_dev_ioctl+0x11c/0x27f [kvm]
 [<ffffffff8113a88e>] ? file_has_perm+0x88/0x93
 [<ffffffff810db4eb>] vfs_ioctl+0x2f/0x7d
 [<ffffffff810db795>] do_vfs_ioctl+0x25c/0x279
 [<ffffffff810db80c>] sys_ioctl+0x5a/0x7e
 [<ffffffff8101034a>] system_call_fastpath+0x16/0x1b

Version-Release number of selected component (if applicable):


How reproducible:

Always

Steps to Reproduce:
1. virsh start sys1
Additional info:

The VM seems to be running ok anyway.
Comment 1 Bug Zapper 2008-11-25 21:50:01 EST
This bug appears to have been reported against 'rawhide' during the Fedora 10 development cycle.
Changing version to '10'.

More information and reason for this action is here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Comment 2 Chris Lalancette 2009-10-29 06:55:35 EDT
This looks like a dup of 457779 (which incidentally looks like it doesn't happen anymore).  Closing it.

Chris Lalancette

*** This bug has been marked as a duplicate of bug 457779 ***
