Bug 798061

Summary: lockdep: (&sb->s_type->i_mutex_key#12){+.+.+.}, at: [<ffffffffa01a649b>] btrfs_page_mkwrite+0x5b/0x310
Product: Fedora
Component: kernel
Version: 16
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Reporter: Dave Jones <davej>
Assignee: Josef Bacik <jbacik>
QA Contact: Fedora Extras Quality Assurance <extras-qa>
CC: gansalmon, itamar, jonathan, kernel-maint, madhu.chinakonda, pfrields
Severity: unspecified
Priority: unspecified
Doc Type: Bug Fix
Last Closed: 2012-05-14 21:46:29 UTC

Description Dave Jones 2012-02-27 23:43:00 UTC
======================================================
[ INFO: possible circular locking dependency detected ]
3.2.7-1.fc16.x86_64.debug #1
-------------------------------------------------------
yumBackend.py/1705 is trying to acquire lock:
 (&sb->s_type->i_mutex_key#12){+.+.+.}, at: [<ffffffffa01a649b>] btrfs_page_mkwrite+0x5b/0x310 [btrfs]

but task is already holding lock:
 (&mm->mmap_sem){++++++}, at: [<ffffffff8167839a>] do_page_fault+0xea/0x590

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&mm->mmap_sem){++++++}:
       [<ffffffff810be5dd>] lock_acquire+0x9d/0x1f0
       [<ffffffff81167ad9>] might_fault+0x89/0xb0
       [<ffffffff811c11e7>] filldir+0x77/0xe0
       [<ffffffffa019cd0f>] btrfs_real_readdir+0xbf/0x740 [btrfs]
       [<ffffffff811c14b8>] vfs_readdir+0xb8/0xf0
       [<ffffffff811c15e9>] sys_getdents+0x89/0x100
       [<ffffffff8167d082>] system_call_fastpath+0x16/0x1b

-> #0 (&sb->s_type->i_mutex_key#12){+.+.+.}:
       [<ffffffff810bd95e>] __lock_acquire+0x16ee/0x1c60
       [<ffffffff810be5dd>] lock_acquire+0x9d/0x1f0
       [<ffffffff816718c4>] mutex_lock_nested+0x74/0x3a0
       [<ffffffffa01a649b>] btrfs_page_mkwrite+0x5b/0x310 [btrfs]
       [<ffffffff8116806e>] __do_fault+0xee/0x4f0
       [<ffffffff8116ab80>] handle_pte_fault+0x90/0xa10
       [<ffffffff8116b8a8>] handle_mm_fault+0x1e8/0x2f0
       [<ffffffff81678420>] do_page_fault+0x170/0x590
       [<ffffffff81674d75>] page_fault+0x25/0x30

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&mm->mmap_sem);
                               lock(&sb->s_type->i_mutex_key);
                               lock(&mm->mmap_sem);
  lock(&sb->s_type->i_mutex_key);

 *** DEADLOCK ***

1 lock held by yumBackend.py/1705:
 #0:  (&mm->mmap_sem){++++++}, at: [<ffffffff8167839a>] do_page_fault+0xea/0x590

stack backtrace:
Pid: 1705, comm: yumBackend.py Not tainted 3.2.7-1.fc16.x86_64.debug #1
Call Trace:
 [<ffffffff81667854>] print_circular_bug+0x202/0x213
 [<ffffffff810bd95e>] __lock_acquire+0x16ee/0x1c60
 [<ffffffff810aac68>] ? sched_clock_cpu+0xa8/0x110
 [<ffffffff810bc6ae>] ? __lock_acquire+0x43e/0x1c60
 [<ffffffff810be5dd>] lock_acquire+0x9d/0x1f0
 [<ffffffffa01a649b>] ? btrfs_page_mkwrite+0x5b/0x310 [btrfs]
 [<ffffffffa01a649b>] ? btrfs_page_mkwrite+0x5b/0x310 [btrfs]
 [<ffffffff816718c4>] mutex_lock_nested+0x74/0x3a0
 [<ffffffffa01a649b>] ? btrfs_page_mkwrite+0x5b/0x310 [btrfs]
 [<ffffffffa01a649b>] btrfs_page_mkwrite+0x5b/0x310 [btrfs]
 [<ffffffff811456d4>] ? filemap_fault+0x104/0x4d0
 [<ffffffff8116806e>] __do_fault+0xee/0x4f0
 [<ffffffff8116ab80>] handle_pte_fault+0x90/0xa10
 [<ffffffff810b92f5>] ? lock_release_holdtime.part.9+0x15/0x1a0
 [<ffffffff811a0c95>] ? mem_cgroup_count_vm_event+0x95/0x140
 [<ffffffff8116b8a8>] handle_mm_fault+0x1e8/0x2f0
 [<ffffffff81678420>] do_page_fault+0x170/0x590
 [<ffffffff810b8a5d>] ? trace_hardirqs_off+0xd/0x10
 [<ffffffff810aad3f>] ? local_clock+0x6f/0x80
 [<ffffffff810b92f5>] ? lock_release_holdtime.part.9+0x15/0x1a0
 [<ffffffff811aaa35>] ? sys_close+0xb5/0x1a0
 [<ffffffff81314dad>] ? trace_hardirqs_off_thunk+0x3a/0x3c
 [<ffffffff81674d75>] page_fault+0x25/0x30
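
The scenario table above is the whole story: the getdents path takes i_mutex and may then need mmap_sem (filldir faulting while copying dirents to the user buffer), while the page-fault path takes mmap_sem and then i_mutex (btrfs_page_mkwrite), a classic ABBA inversion. The userspace sketch below reproduces it with plain pthread mutexes standing in for the two kernel locks; the file name, the sleep() calls that widen the race window, and the variable names are illustrative only, and running it is expected to hang exactly as the table predicts.

/* abba_demo.c - minimal userspace illustration of the ABBA inversion
 * reported above.  mmap_sem and i_mutex here are plain pthread mutexes
 * standing in for the kernel locks, not the real thing.
 * Build: gcc -pthread abba_demo.c.  The program deadlocks by design. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t mmap_sem = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t i_mutex  = PTHREAD_MUTEX_INITIALIZER;

/* CPU0 in the scenario: the page-fault path.
 * Takes mmap_sem (do_page_fault), then i_mutex (btrfs_page_mkwrite). */
static void *fault_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&mmap_sem);
    puts("fault path: holds mmap_sem, wants i_mutex");
    sleep(1);                      /* widen the race window */
    pthread_mutex_lock(&i_mutex);  /* blocks forever once readdir_path runs */
    puts("fault path: never printed");
    pthread_mutex_unlock(&i_mutex);
    pthread_mutex_unlock(&mmap_sem);
    return NULL;
}

/* CPU1 in the scenario: the getdents path.
 * Takes i_mutex (vfs_readdir), then may fault and need mmap_sem
 * (filldir -> might_fault while filling the user buffer). */
static void *readdir_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&i_mutex);
    puts("readdir path: holds i_mutex, wants mmap_sem");
    sleep(1);
    pthread_mutex_lock(&mmap_sem); /* blocks forever: ABBA deadlock */
    puts("readdir path: never printed");
    pthread_mutex_unlock(&mmap_sem);
    pthread_mutex_unlock(&i_mutex);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, fault_path, NULL);
    pthread_create(&b, NULL, readdir_path, NULL);
    pthread_join(a, NULL);  /* never returns */
    pthread_join(b, NULL);
    return 0;
}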

Comment 1 Josef Bacik 2012-02-28 16:31:08 UTC
This is fixed upstream; we need commit f248679e86fead40cc78e724c7181d6bec1a2046. I thought it had been sent back to -stable; I'll double-check on that.
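
Illustrative only: the sketch below is not the code of the commit referenced above. It shows the general shape of such a fix under one assumption, namely that the fault path stops taking i_mutex and instead uses a dedicated lock (called delalloc_mutex here, a hypothetical name) that is never held across a user-space access and so can safely nest inside mmap_sem. With that change the lock graph has no cycle, and the same two threads from the previous sketch run to completion.

/* fix_sketch.c - illustrative only, NOT the code from the commit above.
 * Fault path after the "fix": mmap_sem -> delalloc_mutex, i_mutex untouched.
 * Readdir path unchanged: i_mutex -> (may fault) -> mmap_sem.
 * Lock graph: mmap_sem -> delalloc_mutex and i_mutex -> mmap_sem: no cycle.
 * Build: gcc -pthread fix_sketch.c.  This version runs to completion. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t mmap_sem       = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t i_mutex        = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t delalloc_mutex = PTHREAD_MUTEX_INITIALIZER; /* hypothetical */

static void *fault_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&mmap_sem);
    sleep(1);
    pthread_mutex_lock(&delalloc_mutex); /* no longer contends with readdir */
    puts("fault path: done");
    pthread_mutex_unlock(&delalloc_mutex);
    pthread_mutex_unlock(&mmap_sem);
    return NULL;
}

static void *readdir_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&i_mutex);
    sleep(1);
    pthread_mutex_lock(&mmap_sem); /* waits, but can always make progress */
    puts("readdir path: done");
    pthread_mutex_unlock(&mmap_sem);
    pthread_mutex_unlock(&i_mutex);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, fault_path, NULL);
    pthread_create(&b, NULL, readdir_path, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    puts("no deadlock: the two paths no longer share an inverted lock pair");
    return 0;
}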

Comment 2 Dave Jones 2012-03-22 17:14:56 UTC
[mass update]
kernel-3.3.0-4.fc16 has been pushed to the Fedora 16 stable repository.
Please retest with this update.
