Bug 481687 - drm: lockdep: intel: 2.6.29-0.53.rc2.git1 INFO: possible circular locking; i915_gem_execbuffer/might_fault
Summary: drm: lockdep: intel: 2.6.29-0.53.rc2.git1 INFO: possible circular locking; i915_gem_execbuffer/might_fault
Keywords:
Status: CLOSED RAWHIDE
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: rawhide
Hardware: All
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Assignee: Dave Airlie
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Duplicates: 478583 488633 492684 493462
Depends On:
Blocks:
 
Reported: 2009-01-27 02:06 UTC by Tom London
Modified: 2009-10-26 14:43 UTC
CC: 15 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2009-10-26 14:43:52 UTC
Type: ---
Embargoed:


Attachments
traces of circular locking from i686 2.6.29 series kernel (39.11 KB, text/plain)
2009-02-05 06:40 UTC, Michal Jaegermann
kernel configuration with PAE that runs fine in a Dell Inspiron 1420 with 4 GiB RAM (66.27 KB, text/plain)
2009-04-16 14:06 UTC, P. A. López-Valencia

Description Tom London 2009-01-27 02:06:51 UTC
Description of problem:
Got this on a fresh boot of 0.53 on a Thinkpad X200 (Intel graphics).

[drm] Initialized i915 1.6.0 20080730 on minor 0
eth1: no IPv6 routers present
wlan1: deauthenticated (Reason: 6)
SELinux: initialized (dev fuse, type fuse), uses genfs_contexts

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.29-0.53.rc2.git1.fc11.x86_64 #1
-------------------------------------------------------
Xorg/2845 is trying to acquire lock:
 (&mm->mmap_sem){----}, at: [<ffffffff810ba071>] might_fault+0x5d/0xb1

but task is already holding lock:
 (&dev->struct_mutex){--..}, at: [<ffffffffa0360da1>] i915_gem_execbuffer+0x139/0xb2a [i915]

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&dev->struct_mutex){--..}:
       [<ffffffff8106e95d>] __lock_acquire+0xaab/0xc41
       [<ffffffff8106eb80>] lock_acquire+0x8d/0xba
       [<ffffffff813818aa>] __mutex_lock_common+0x107/0x39c
       [<ffffffff81381be8>] mutex_lock_nested+0x35/0x3a
       [<ffffffffa0330bf0>] drm_vm_open+0x31/0x46 [drm]
       [<ffffffff8104860c>] dup_mm+0x2e6/0x3cc
       [<ffffffff810492b0>] copy_process+0xb82/0x136d
       [<ffffffff81049bfb>] do_fork+0x160/0x31f
       [<ffffffff8100f62d>] sys_clone+0x23/0x25
       [<ffffffff810117c3>] stub_clone+0x13/0x20
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #1 (&mm->mmap_sem/1){--..}:
       [<ffffffff8106e95d>] __lock_acquire+0xaab/0xc41
       [<ffffffff8106eb80>] lock_acquire+0x8d/0xba
       [<ffffffff8106234a>] down_write_nested+0x4b/0x7f
       [<ffffffff810483f5>] dup_mm+0xcf/0x3cc
       [<ffffffff810492b0>] copy_process+0xb82/0x136d
       [<ffffffff81049bfb>] do_fork+0x160/0x31f
       [<ffffffff8100f62d>] sys_clone+0x23/0x25
       [<ffffffff810117c3>] stub_clone+0x13/0x20
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #0 (&mm->mmap_sem){----}:
       [<ffffffff8106e7fe>] __lock_acquire+0x94c/0xc41
       [<ffffffff8106eb80>] lock_acquire+0x8d/0xba
       [<ffffffff810ba09e>] might_fault+0x8a/0xb1
       [<ffffffffa035b7f8>] i915_emit_box+0x2f/0x259 [i915]
       [<ffffffffa03613db>] i915_gem_execbuffer+0x773/0xb2a [i915]
       [<ffffffffa032ae59>] drm_ioctl+0x1e6/0x271 [drm]
       [<ffffffff810eab55>] vfs_ioctl+0x5f/0x78
       [<ffffffff810eafd9>] do_vfs_ioctl+0x46b/0x4ab
       [<ffffffff810eb06e>] sys_ioctl+0x55/0x77
       [<ffffffff810112ba>] system_call_fastpath+0x16/0x1b
       [<ffffffffffffffff>] 0xffffffffffffffff

other info that might help us debug this:

1 lock held by Xorg/2845:
 #0:  (&dev->struct_mutex){--..}, at: [<ffffffffa0360da1>] i915_gem_execbuffer+0x139/0xb2a [i915]

stack backtrace:
Pid: 2845, comm: Xorg Not tainted 2.6.29-0.53.rc2.git1.fc11.x86_64 #1
Call Trace:
 [<ffffffff8106dc01>] print_circular_bug_tail+0x71/0x7c
 [<ffffffff8106e7fe>] __lock_acquire+0x94c/0xc41
 [<ffffffff8106eb80>] lock_acquire+0x8d/0xba
 [<ffffffff810ba071>] ? might_fault+0x5d/0xb1
 [<ffffffff810ba09e>] might_fault+0x8a/0xb1
 [<ffffffff810ba071>] ? might_fault+0x5d/0xb1
 [<ffffffffa035b7f8>] i915_emit_box+0x2f/0x259 [i915]
 [<ffffffffa03613db>] i915_gem_execbuffer+0x773/0xb2a [i915]
 [<ffffffffa032ae1f>] ? drm_ioctl+0x1ac/0x271 [drm]
 [<ffffffffa032ae59>] drm_ioctl+0x1e6/0x271 [drm]
 [<ffffffff8119a5cc>] ? _raw_spin_lock+0x68/0x116
 [<ffffffffa0360c68>] ? i915_gem_execbuffer+0x0/0xb2a [i915]
 [<ffffffff810eab55>] vfs_ioctl+0x5f/0x78
 [<ffffffff810eafd9>] do_vfs_ioctl+0x46b/0x4ab
 [<ffffffff810eb06e>] sys_ioctl+0x55/0x77
 [<ffffffff810112ba>] system_call_fastpath+0x16/0x1b
wlan1: authenticate with AP 00:19:77:00:4f:f1
wlan1: authenticate with AP 00:19:77:00:4f:f1
wlan1: authenticated
wlan1: associate with AP 00:19:77:00:4f:f1
wlan1: RX AssocResp from 00:19:77:00:4f:f1 (capab=0x431 status=0 aid=3)
wlan1: associated
ADDRCONF(NETDEV_CHANGE): wlan1: link becomes ready
cfg80211: Calling CRDA for country: US



Version-Release number of selected component (if applicable):
kernel-2.6.29-0.53.rc2.git1.fc11.x86_64

How reproducible:
Not sure

Steps to Reproduce:
1.
2.
3.
  
Actual results:


Expected results:


Additional info:
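The chain above is an ABBA inversion between dev->struct_mutex and mm->mmap_sem: the fork() path (dup_mm -> drm_vm_open) takes mmap_sem before struct_mutex, while the execbuffer ioctl holds struct_mutex when copy_from_user() may fault and take mmap_sem. A minimal C sketch of the two orderings lockdep is comparing (function names follow the trace; the bodies are illustrative, not the actual driver code):

#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/mutex.h>
#include <linux/uaccess.h>

/* fork() path (dup_mm -> drm_vm_open): mmap_sem first, then struct_mutex */
static void fork_path(struct mm_struct *mm, struct mutex *struct_mutex)
{
	down_write(&mm->mmap_sem);	/* dup_mm() */
	mutex_lock(struct_mutex);	/* drm_vm_open() */
	/* ... duplicate the DRM vma ... */
	mutex_unlock(struct_mutex);
	up_write(&mm->mmap_sem);
}

/* ioctl path (i915_gem_execbuffer -> i915_emit_box): struct_mutex first,
 * then a user-space access that may fault and take mmap_sem -- the
 * opposite order, hence the possible circular dependency being reported */
static int ioctl_path(struct mutex *struct_mutex, void __user *uboxes)
{
	struct { int x1, y1, x2, y2; } box;	/* stand-in for a clip rect */
	int ret = 0;

	mutex_lock(struct_mutex);
	if (copy_from_user(&box, uboxes, sizeof(box)))	/* might_fault() */
		ret = -EFAULT;
	mutex_unlock(struct_mutex);
	return ret;
}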

Comment 1 Tom London 2009-01-27 14:18:25 UTC
Appears to happen every boot on Thinkpad X200 (looks similar to #1):

[drm] Initialized i915 1.6.0 20080730 on minor 0
eth1: no IPv6 routers present
SELinux: initialized (dev fuse, type fuse), uses genfs_contexts

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.29-0.53.rc2.git1.fc11.x86_64 #1
-------------------------------------------------------
Xorg/2802 is trying to acquire lock:
 (&mm->mmap_sem){----}, at: [<ffffffff810ba071>] might_fault+0x5d/0xb1

but task is already holding lock:
 (&dev->struct_mutex){--..}, at: [<ffffffffa0362da1>] i915_gem_execbuffer+0x139/0xb2a [i915]

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&dev->struct_mutex){--..}:
       [<ffffffff8106e95d>] __lock_acquire+0xaab/0xc41
       [<ffffffff8106eb80>] lock_acquire+0x8d/0xba
       [<ffffffff813818aa>] __mutex_lock_common+0x107/0x39c
       [<ffffffff81381be8>] mutex_lock_nested+0x35/0x3a
       [<ffffffffa0332bf0>] drm_vm_open+0x31/0x46 [drm]
       [<ffffffff8104860c>] dup_mm+0x2e6/0x3cc
       [<ffffffff810492b0>] copy_process+0xb82/0x136d
       [<ffffffff81049bfb>] do_fork+0x160/0x31f
       [<ffffffff8100f62d>] sys_clone+0x23/0x25
       [<ffffffff810117c3>] stub_clone+0x13/0x20
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #1 (&mm->mmap_sem/1){--..}:
       [<ffffffff8106e95d>] __lock_acquire+0xaab/0xc41
       [<ffffffff8106eb80>] lock_acquire+0x8d/0xba
       [<ffffffff8106234a>] down_write_nested+0x4b/0x7f
       [<ffffffff810483f5>] dup_mm+0xcf/0x3cc
       [<ffffffff810492b0>] copy_process+0xb82/0x136d
       [<ffffffff81049bfb>] do_fork+0x160/0x31f
       [<ffffffff8100f62d>] sys_clone+0x23/0x25
       [<ffffffff810117c3>] stub_clone+0x13/0x20
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #0 (&mm->mmap_sem){----}:
       [<ffffffff8106e7fe>] __lock_acquire+0x94c/0xc41
       [<ffffffff8106eb80>] lock_acquire+0x8d/0xba
       [<ffffffff810ba09e>] might_fault+0x8a/0xb1
       [<ffffffffa035d7f8>] i915_emit_box+0x2f/0x259 [i915]
       [<ffffffffa03633db>] i915_gem_execbuffer+0x773/0xb2a [i915]
       [<ffffffffa032ce59>] drm_ioctl+0x1e6/0x271 [drm]
       [<ffffffff810eab55>] vfs_ioctl+0x5f/0x78
       [<ffffffff810eafd9>] do_vfs_ioctl+0x46b/0x4ab
       [<ffffffff810eb06e>] sys_ioctl+0x55/0x77
       [<ffffffff810112ba>] system_call_fastpath+0x16/0x1b
       [<ffffffffffffffff>] 0xffffffffffffffff

other info that might help us debug this:

1 lock held by Xorg/2802:
 #0:  (&dev->struct_mutex){--..}, at: [<ffffffffa0362da1>] i915_gem_execbuffer+0x139/0xb2a [i915]

stack backtrace:
Pid: 2802, comm: Xorg Not tainted 2.6.29-0.53.rc2.git1.fc11.x86_64 #1
Call Trace:
 [<ffffffff8106dc01>] print_circular_bug_tail+0x71/0x7c
 [<ffffffff8106e7fe>] __lock_acquire+0x94c/0xc41
 [<ffffffff8106eb80>] lock_acquire+0x8d/0xba
 [<ffffffff810ba071>] ? might_fault+0x5d/0xb1
 [<ffffffff810ba09e>] might_fault+0x8a/0xb1
 [<ffffffff810ba071>] ? might_fault+0x5d/0xb1
 [<ffffffffa035d7f8>] i915_emit_box+0x2f/0x259 [i915]
 [<ffffffffa03633db>] i915_gem_execbuffer+0x773/0xb2a [i915]
 [<ffffffffa032ce1f>] ? drm_ioctl+0x1ac/0x271 [drm]
 [<ffffffffa032ce59>] drm_ioctl+0x1e6/0x271 [drm]
 [<ffffffff8119a5cc>] ? _raw_spin_lock+0x68/0x116
 [<ffffffffa0362c68>] ? i915_gem_execbuffer+0x0/0xb2a [i915]
 [<ffffffff810eab55>] vfs_ioctl+0x5f/0x78
 [<ffffffff810eafd9>] do_vfs_ioctl+0x46b/0x4ab
 [<ffffffff810eb06e>] sys_ioctl+0x55/0x77
 [<ffffffff810112ba>] system_call_fastpath+0x16/0x1b
wlan1: authenticate with AP 00:12:17:46:42:51
wlan1: authenticated
wlan1: associate with AP 00:12:17:46:42:51
wlan1: RX AssocResp from 00:12:17:46:42:51 (capab=0x431 status=0 aid=2)

Comment 2 Michal Jaegermann 2009-02-05 06:40:29 UTC
Created attachment 330963 [details]
traces of circular locking from i686 2.6.29 series kernel

I got essentially the same traces from a netbook running kernel 2.6.29-0.78.rc3.git5.fc11.i686 with an "Intel Corporation Mobile 945GME Express Integrated Graphics Controller".

I do not really plan to run this kernel, as this is an F10 installation and I was only asked to test it due to bug 481259. The dmesg output is attached just in case.

Comment 3 Joachim Frieben 2009-02-19 10:00:42 UTC
Happens also on an "Intel Corporation 82845G/GL[Brookdale-G]/GE Chipset Integrated Graphics Device rev 3" running kernel 2.6.29-0.131.rc5.git2.fc11.i586.
The system starts up and GDM appears, but after entering user name and password, the system just sits there displaying the default background.
Fortunately, it is still possible to switch to a VT and shut down the X server. Setting the driver to "vesa" in xorg.conf restores basic operation.
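(If you need the same stopgap, a minimal "Device" section in /etc/X11/xorg.conf would look roughly like the sketch below; the Identifier string here is illustrative and should match whatever your existing configuration uses.)

Section "Device"
	Identifier "Videocard0"
	Driver     "vesa"
EndSection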

Comment 4 Kyle McMartin 2009-02-27 13:41:48 UTC
Tom, I believe the circular locking issue should have been fixed around a week ago. Can you confirm?

cheers, Kyle

Comment 5 Kyle McMartin 2009-02-27 13:43:17 UTC
*** Bug 478583 has been marked as a duplicate of this bug. ***

Comment 6 Tom London 2009-02-27 18:17:14 UTC
My logs show the last occurrence of this on 5 February:

Feb  5 09:06:12 tlondon kernel: =======================================================
Feb  5 09:06:12 tlondon kernel: [ INFO: possible circular locking dependency detected ]
Feb  5 09:06:12 tlondon kernel: 2.6.29-0.82.rc3.git7.fc11.x86_64 #1


Haven't seen it since....

Comment 7 Zdenek Kabelac 2009-03-03 13:14:33 UTC
I'm still getting this INFO warning message with the vanilla upstream kernel 2.6.29-rc6 (commit 2450cf51a1bdba7037e91b1bcc494b01c58aaf66).

Is this fix local to the Fedora kernels?

If there is no 'extra' Fedora patch, then I'm getting this trace with the following configuration:

T61, 4GB, xorg-x11-server-Xorg-1.6.0-3.fc11.x86_64

Comment 8 Kyle McMartin 2009-03-03 16:29:32 UTC
This is not an upstream bug tracker.

Comment 10 Jurgen Kramer 2009-03-22 16:38:07 UTC
With the current kernel (2.6.29-0.258.rc8.git2.fc11.i586) from rawhide I also get the "possible circular locking dependency detected" messages. The system itself (Samsung NC10) appears to work fine.

ADDRCONF(NETDEV_CHANGE): wlan0: link becomes ready
wlan0: disassociating by local choice (reason=3)

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.29-0.258.rc8.git2.fc11.i586 #1
-------------------------------------------------------
Xorg/2569 is trying to acquire lock:
 (&mm->mmap_sem){----}, at: [<c04909ef>] might_fault+0x48/0x85

but task is already holding lock:
 (&dev->struct_mutex){--..}, at: [<f7ce2abb>] i915_gem_execbuffer+0xd6/0xa12 [i915]

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&dev->struct_mutex){--..}:
       [<c045062c>] __lock_acquire+0x970/0xace
       [<c04507e5>] lock_acquire+0x5b/0x81
       [<c06edf07>] __mutex_lock_common+0xdd/0x338
       [<c06ee209>] mutex_lock_nested+0x33/0x3b
       [<f7dbb575>] drm_gem_mmap+0x36/0xf7 [drm]
       [<c0496ffb>] mmap_region+0x243/0x3cb
       [<c04973d5>] do_mmap_pgoff+0x252/0x2a2
       [<c0407124>] sys_mmap2+0x5f/0x80
       [<c0403f92>] syscall_call+0x7/0xb
       [<ffffffff>] 0xffffffff

-> #0 (&mm->mmap_sem){----}:
       [<c04504f9>] __lock_acquire+0x83d/0xace
       [<c04507e5>] lock_acquire+0x5b/0x81
       [<c0490a0c>] might_fault+0x65/0x85
       [<c0542983>] copy_from_user+0x32/0x119
       [<f7ce2c4e>] i915_gem_execbuffer+0x269/0xa12 [i915]
       [<f7dba6c7>] drm_ioctl+0x1b7/0x236 [drm]
       [<c04b3d24>] vfs_ioctl+0x5a/0x74
       [<c04b42b8>] do_vfs_ioctl+0x483/0x4bd
       [<c04b4338>] sys_ioctl+0x46/0x66
       [<c0403f92>] syscall_call+0x7/0xb
       [<ffffffff>] 0xffffffff

other info that might help us debug this:

1 lock held by Xorg/2569:
 #0:  (&dev->struct_mutex){--..}, at: [<f7ce2abb>] i915_gem_execbuffer+0xd6/0xa12 [i915]

stack backtrace:
Pid: 2569, comm: Xorg Not tainted 2.6.29-0.258.rc8.git2.fc11.i586 #1
Call Trace:
 [<c06ecd8e>] ? printk+0x14/0x16
 [<c044faa7>] print_circular_bug_tail+0x5d/0x68
 [<c04504f9>] __lock_acquire+0x83d/0xace
 [<c06ef227>] ? _spin_unlock+0x22/0x25
 [<c04909ef>] ? might_fault+0x48/0x85
 [<c04507e5>] lock_acquire+0x5b/0x81
 [<c04909ef>] ? might_fault+0x48/0x85
 [<c0490a0c>] might_fault+0x65/0x85
 [<c04909ef>] ? might_fault+0x48/0x85
 [<c0542983>] copy_from_user+0x32/0x119
 [<f7ce2c4e>] i915_gem_execbuffer+0x269/0xa12 [i915]
 [<c044e012>] ? lock_release_holdtime+0x2b/0x123
 [<c0490a2a>] ? might_fault+0x83/0x85
 [<c0542983>] ? copy_from_user+0x32/0x119
 [<f7dba6c7>] drm_ioctl+0x1b7/0x236 [drm]
 [<f7ce29e5>] ? i915_gem_execbuffer+0x0/0xa12 [i915]
 [<c04b3d24>] vfs_ioctl+0x5a/0x74
 [<c04b42b8>] do_vfs_ioctl+0x483/0x4bd
 [<c05166b9>] ? selinux_file_ioctl+0x3f/0x42
 [<c04b4338>] sys_ioctl+0x46/0x66
 [<c0403f92>] syscall_call+0x7/0xb
wlan0: no IPv6 routers present

Comment 11 Mace Moneta 2009-03-24 16:54:11 UTC
I just received this running:

kernel-2.6.29-0.279.rc8.git6.fc11.x86_64
mesa-dri-drivers-7.5-0.2.fc11.x86_64
mesa-libGL-7.5-0.2.fc11.x86_64
mesa-libGL-devel-7.5-0.2.fc11.x86_64
mesa-libGLU-7.5-0.2.fc11.x86_64
mesa-libGLU-devel-7.5-0.2.fc11.x86_64
xorg-x11-drv-intel-2.6.99.902-1.fc11.x86_64

Mar 24 12:40:17 slayer kernel:=======================================================
Mar 24 12:40:17 slayer kernel:[ INFO: possible circular locking dependency detected ]
Mar 24 12:40:17 slayer kernel:2.6.29-0.279.rc8.git6.fc11.x86_64 #1
Mar 24 12:40:17 slayer kernel:-------------------------------------------------------
Mar 24 12:40:17 slayer kernel:Xorg/3368 is trying to acquire lock:
Mar 24 12:40:17 slayer kernel: (&mm->mmap_sem){----}, at: [<ffffffff810c042a>] might_fault+0x62/0xb6
Mar 24 12:40:17 slayer kernel:
Mar 24 12:40:17 slayer kernel:but task is already holding lock:
Mar 24 12:40:17 slayer kernel: (&dev->struct_mutex){--..}, at: [<ffffffffa0060805>] i915_gem_execbuffer+0x101/0xb9e [i915]
Mar 24 12:40:17 slayer kernel:
Mar 24 12:40:17 slayer kernel:which lock already depends on the new lock.
Mar 24 12:40:17 slayer kernel:
Mar 24 12:40:17 slayer kernel:
Mar 24 12:40:17 slayer kernel:the existing dependency chain (in reverse order) is:
Mar 24 12:40:17 slayer kernel:
Mar 24 12:40:17 slayer kernel:-> #2 (&dev->struct_mutex){--..}:
Mar 24 12:40:17 slayer kernel:       [<ffffffff81071ca6>] __lock_acquire+0xa8a/0xc06
Mar 24 12:40:17 slayer kernel:       [<ffffffff81071eb4>] lock_acquire+0x92/0xc0
Mar 24 12:40:17 slayer kernel:       [<ffffffff81396208>] __mutex_lock_common+0xff/0x399
Mar 24 12:40:17 slayer kernel:       [<ffffffff81396560>] mutex_lock_nested+0x3c/0x41
Mar 24 12:40:17 slayer kernel:       [<ffffffffa0028ef7>] drm_vm_open+0x36/0x4b [drm]
Mar 24 12:40:17 slayer kernel:       [<ffffffff8104b03b>] dup_mm+0x2e8/0x3c3
Mar 24 12:40:17 slayer kernel:       [<ffffffff8104bce0>] copy_process+0xb88/0x13a9
Mar 24 12:40:17 slayer kernel:       [<ffffffff8104c667>] do_fork+0x166/0x354
Mar 24 12:40:17 slayer kernel:       [<ffffffff8100f5e5>] sys_clone+0x28/0x2a
Mar 24 12:40:17 slayer kernel:       [<ffffffff81011843>] stub_clone+0x13/0x20
Mar 24 12:40:17 slayer kernel:       [<ffffffffffffffff>] 0xffffffffffffffff
Mar 24 12:40:17 slayer kernel:
Mar 24 12:40:17 slayer kernel:-> #1 (&mm->mmap_sem/1){--..}:
Mar 24 12:40:17 slayer kernel:       [<ffffffff81071ca6>] __lock_acquire+0xa8a/0xc06
Mar 24 12:40:17 slayer kernel:       [<ffffffff81071eb4>] lock_acquire+0x92/0xc0
Mar 24 12:40:17 slayer kernel:       [<ffffffff81065321>] down_write_nested+0x52/0x89
Mar 24 12:40:17 slayer kernel:       [<ffffffff8104ae21>] dup_mm+0xce/0x3c3
Mar 24 12:40:17 slayer kernel:       [<ffffffff8104bce0>] copy_process+0xb88/0x13a9
Mar 24 12:40:17 slayer kernel:       [<ffffffff8104c667>] do_fork+0x166/0x354
Mar 24 12:40:17 slayer kernel:       [<ffffffff8100f5e5>] sys_clone+0x28/0x2a
Mar 24 12:40:17 slayer kernel:       [<ffffffff81011843>] stub_clone+0x13/0x20
Mar 24 12:40:17 slayer kernel:       [<ffffffffffffffff>] 0xffffffffffffffff
Mar 24 12:40:17 slayer kernel:
Mar 24 12:40:17 slayer kernel:-> #0 (&mm->mmap_sem){----}:
Mar 24 12:40:17 slayer kernel:       [<ffffffff81071b31>] __lock_acquire+0x915/0xc06
Mar 24 12:40:17 slayer kernel:       [<ffffffff81071eb4>] lock_acquire+0x92/0xc0
Mar 24 12:40:17 slayer kernel:       [<ffffffff810c0457>] might_fault+0x8f/0xb6
Mar 24 12:40:17 slayer kernel:       [<ffffffffa005aa8d>] i915_emit_box+0x34/0x25e [i915]
Mar 24 12:40:17 slayer kernel:       [<ffffffffa0060eec>] i915_gem_execbuffer+0x7e8/0xb9e [i915]
Mar 24 12:40:17 slayer kernel:       [<ffffffffa0023d82>] drm_ioctl+0x1fe/0x297 [drm]
Mar 24 12:40:17 slayer kernel:       [<ffffffff810f0e9c>] vfs_ioctl+0x6f/0x87
Mar 24 12:40:17 slayer kernel:       [<ffffffff810f131f>] do_vfs_ioctl+0x46b/0x4ac
Mar 24 12:40:17 slayer kernel:       [<ffffffff810f13b6>] sys_ioctl+0x56/0x79
Mar 24 12:40:17 slayer kernel:       [<ffffffff8101133a>] system_call_fastpath+0x16/0x1b
Mar 24 12:40:17 slayer kernel:       [<ffffffffffffffff>] 0xffffffffffffffff
Mar 24 12:40:17 slayer kernel:
Mar 24 12:40:17 slayer kernel:other info that might help us debug this:
Mar 24 12:40:17 slayer kernel:
Mar 24 12:40:17 slayer kernel:1 lock held by Xorg/3368:
Mar 24 12:40:17 slayer kernel: #0:  (&dev->struct_mutex){--..}, at: [<ffffffffa0060805>] i915_gem_execbuffer+0x101/0xb9e [i915]
Mar 24 12:40:17 slayer kernel:
Mar 24 12:40:17 slayer kernel:stack backtrace:
Mar 24 12:40:17 slayer kernel:Pid: 3368, comm: Xorg Not tainted 2.6.29-0.279.rc8.git6.fc11.x86_64 #1
Mar 24 12:40:17 slayer kernel:Call Trace:
Mar 24 12:40:17 slayer kernel: [<ffffffff81070f75>] print_circular_bug_tail+0x71/0x7c
Mar 24 12:40:17 slayer kernel: [<ffffffff81071b31>] __lock_acquire+0x915/0xc06
Mar 24 12:40:17 slayer kernel: [<ffffffff81070401>] ? print_irq_inversion_bug+0x11c/0x131
Mar 24 12:40:17 slayer kernel: [<ffffffff81071eb4>] lock_acquire+0x92/0xc0
Mar 24 12:40:17 slayer kernel: [<ffffffff810c042a>] ? might_fault+0x62/0xb6
Mar 24 12:40:17 slayer kernel: [<ffffffff810c0457>] might_fault+0x8f/0xb6
Mar 24 12:40:17 slayer kernel: [<ffffffff810c042a>] ? might_fault+0x62/0xb6
Mar 24 12:40:17 slayer kernel: [<ffffffff8105ea87>] ? queue_delayed_work+0x26/0x28
Mar 24 12:40:17 slayer kernel: [<ffffffffa005aa8d>] i915_emit_box+0x34/0x25e [i915]
Mar 24 12:40:17 slayer kernel: [<ffffffffa0060eec>] i915_gem_execbuffer+0x7e8/0xb9e [i915]
Mar 24 12:40:17 slayer kernel: [<ffffffffa0060704>] ? i915_gem_execbuffer+0x0/0xb9e [i915]
Mar 24 12:40:17 slayer kernel: [<ffffffffa0023d82>] drm_ioctl+0x1fe/0x297 [drm]
Mar 24 12:40:17 slayer kernel: [<ffffffff810f0e9c>] vfs_ioctl+0x6f/0x87
Mar 24 12:40:17 slayer kernel: [<ffffffff810f131f>] do_vfs_ioctl+0x46b/0x4ac
Mar 24 12:40:17 slayer kernel: [<ffffffff810f13b6>] sys_ioctl+0x56/0x79
Mar 24 12:40:17 slayer kernel: [<ffffffff8101133a>] system_call_fastpath+0x16/0x1b

Comment 12 Tom London 2009-03-24 17:15:39 UTC
Yeah, me too:

Looks the same as above.

Got this on a Thinkpad X200, booting with "nomodeset nopat".

Mar 24 07:26:33 tlondon kernel: =======================================================
Mar 24 07:26:33 tlondon kernel: [ INFO: possible circular locking dependency detected ]
Mar 24 07:26:33 tlondon kernel: 2.6.29-0.279.rc8.git6.fc11.x86_64 #1
Mar 24 07:26:33 tlondon kernel: -------------------------------------------------------
Mar 24 07:26:33 tlondon kernel: Xorg/2793 is trying to acquire lock:
Mar 24 07:26:33 tlondon kernel: (&mm->mmap_sem){----}, at: [<ffffffff810c042a>] might_fault+0x62/0xb6
Mar 24 07:26:33 tlondon kernel:
Mar 24 07:26:33 tlondon kernel: but task is already holding lock:
Mar 24 07:26:33 tlondon kernel: (&dev->struct_mutex){--..}, at: [<ffffffffa0060805>] i915_gem_execbuffer+0x101/0xb9e [i915]
Mar 24 07:26:33 tlondon kernel:
Mar 24 07:26:33 tlondon kernel: which lock already depends on the new lock.
Mar 24 07:26:33 tlondon kernel:
Mar 24 07:26:33 tlondon kernel:
Mar 24 07:26:33 tlondon kernel: the existing dependency chain (in reverse order) is:
Mar 24 07:26:33 tlondon kernel:
Mar 24 07:26:33 tlondon kernel: -> #2 (&dev->struct_mutex){--..}:
Mar 24 07:26:33 tlondon kernel:       [<ffffffff81071ca6>] __lock_acquire+0xa8a/0xc06
Mar 24 07:26:33 tlondon kernel:       [<ffffffff81071eb4>] lock_acquire+0x92/0xc0
Mar 24 07:26:33 tlondon kernel:       [<ffffffff81396208>] __mutex_lock_common+0xff/0x399
Mar 24 07:26:33 tlondon kernel:       [<ffffffff81396560>] mutex_lock_nested+0x3c/0x41
Mar 24 07:26:33 tlondon kernel:       [<ffffffffa0028ef7>] drm_vm_open+0x36/0x4b [drm]
Mar 24 07:26:33 tlondon kernel:       [<ffffffff8104b03b>] dup_mm+0x2e8/0x3c3
Mar 24 07:26:33 tlondon kernel:       [<ffffffff8104bce0>] copy_process+0xb88/0x13a9
Mar 24 07:26:33 tlondon kernel:       [<ffffffff8104c667>] do_fork+0x166/0x354
Mar 24 07:26:33 tlondon kernel:       [<ffffffff8100f5e5>] sys_clone+0x28/0x2a
Mar 24 07:26:33 tlondon kernel:       [<ffffffff81011843>] stub_clone+0x13/0x20
Mar 24 07:26:33 tlondon kernel:       [<ffffffffffffffff>] 0xffffffffffffffff
Mar 24 07:26:33 tlondon kernel:
Mar 24 07:26:33 tlondon kernel: -> #1 (&mm->mmap_sem/1){--..}:
Mar 24 07:26:33 tlondon kernel:       [<ffffffff81071ca6>] __lock_acquire+0xa8a/0xc06
Mar 24 07:26:33 tlondon kernel:       [<ffffffff81071eb4>] lock_acquire+0x92/0xc0
Mar 24 07:26:33 tlondon kernel:       [<ffffffff81065321>] down_write_nested+0x52/0x89
Mar 24 07:26:33 tlondon kernel:       [<ffffffff8104ae21>] dup_mm+0xce/0x3c3
Mar 24 07:26:33 tlondon kernel:       [<ffffffff8104bce0>] copy_process+0xb88/0x13a9
Mar 24 07:26:33 tlondon kernel:       [<ffffffff8104c667>] do_fork+0x166/0x354
Mar 24 07:26:33 tlondon kernel:       [<ffffffff8100f5e5>] sys_clone+0x28/0x2a
Mar 24 07:26:33 tlondon kernel:       [<ffffffff81011843>] stub_clone+0x13/0x20
Mar 24 07:26:33 tlondon kernel:       [<ffffffffffffffff>] 0xffffffffffffffff
Mar 24 07:26:33 tlondon kernel:
Mar 24 07:26:33 tlondon kernel: -> #0 (&mm->mmap_sem){----}:
Mar 24 07:26:33 tlondon kernel:       [<ffffffff81071b31>] __lock_acquire+0x915/0xc06
Mar 24 07:26:33 tlondon kernel:       [<ffffffff81071eb4>] lock_acquire+0x92/0xc0
Mar 24 07:26:33 tlondon kernel:       [<ffffffff810c0457>] might_fault+0x8f/0xb6
Mar 24 07:26:33 tlondon kernel:       [<ffffffffa005aa8d>] i915_emit_box+0x34/0x25e [i915]
Mar 24 07:26:33 tlondon kernel:       [<ffffffffa0060eec>] i915_gem_execbuffer+0x7e8/0xb9e [i915]
Mar 24 07:26:33 tlondon kernel:       [<ffffffffa0023d82>] drm_ioctl+0x1fe/0x297 [drm]
Mar 24 07:26:33 tlondon kernel:       [<ffffffff810f0e9c>] vfs_ioctl+0x6f/0x87
Mar 24 07:26:33 tlondon kernel:       [<ffffffff810f131f>] do_vfs_ioctl+0x46b/0x4ac
Mar 24 07:26:33 tlondon kernel:       [<ffffffff810f13b6>] sys_ioctl+0x56/0x79
Mar 24 07:26:33 tlondon kernel:       [<ffffffff8101133a>] system_call_fastpath+0x16/0x1b
Mar 24 07:26:33 tlondon kernel:       [<ffffffffffffffff>] 0xffffffffffffffff
Mar 24 07:26:33 tlondon kernel:
Mar 24 07:26:33 tlondon kernel: other info that might help us debug this:
Mar 24 07:26:33 tlondon kernel:
Mar 24 07:26:33 tlondon kernel: 1 lock held by Xorg/2793:
Mar 24 07:26:33 tlondon kernel: #0:  (&dev->struct_mutex){--..}, at: [<ffffffffa0060805>] i915_gem_execbuffer+0x101/0xb9e [i915]
Mar 24 07:26:33 tlondon kernel:
Mar 24 07:26:33 tlondon kernel: stack backtrace:
Mar 24 07:26:33 tlondon kernel: Pid: 2793, comm: Xorg Not tainted 2.6.29-0.279.rc8.git6.fc11.x86_64 #1
Mar 24 07:26:33 tlondon kernel: Call Trace:
Mar 24 07:26:33 tlondon kernel: [<ffffffff81070f75>] print_circular_bug_tail+0x71/0x7c
Mar 24 07:26:33 tlondon kernel: [<ffffffff81071b31>] __lock_acquire+0x915/0xc06
Mar 24 07:26:33 tlondon kernel: [<ffffffff81070401>] ? print_irq_inversion_bug+0x11c/0x131
Mar 24 07:26:33 tlondon kernel: [<ffffffff81071eb4>] lock_acquire+0x92/0xc0
Mar 24 07:26:33 tlondon kernel: [<ffffffff810c042a>] ? might_fault+0x62/0xb6
Mar 24 07:26:33 tlondon kernel: [<ffffffff810c0457>] might_fault+0x8f/0xb6
Mar 24 07:26:33 tlondon kernel: [<ffffffff810c042a>] ? might_fault+0x62/0xb6
Mar 24 07:26:33 tlondon kernel: [<ffffffff8105ea87>] ? queue_delayed_work+0x26/0x28
Mar 24 07:26:33 tlondon kernel: [<ffffffffa005aa8d>] i915_emit_box+0x34/0x25e [i915]
Mar 24 07:26:33 tlondon kernel: [<ffffffffa0060eec>] i915_gem_execbuffer+0x7e8/0xb9e [i915]
Mar 24 07:26:33 tlondon kernel: [<ffffffffa0060704>] ? i915_gem_execbuffer+0x0/0xb9e [i915]
Mar 24 07:26:33 tlondon kernel: [<ffffffffa0023d82>] drm_ioctl+0x1fe/0x297 [drm]
Mar 24 07:26:33 tlondon kernel: [<ffffffff810f0e9c>] vfs_ioctl+0x6f/0x87
Mar 24 07:26:33 tlondon kernel: [<ffffffff810f131f>] do_vfs_ioctl+0x46b/0x4ac
Mar 24 07:26:33 tlondon kernel: [<ffffffff810f13b6>] sys_ioctl+0x56/0x79
Mar 24 07:26:33 tlondon kernel: [<ffffffff8101133a>] system_call_fastpath+0x16/0x1b

Comment 13 Tom London 2009-03-24 17:30:39 UTC
BTW, a search of my logs shows this starting again this morning (24 March, kernel-2.6.29-0.279.rc8.git6.fc11.x86_64) after not seeing it since 5 February (kernel-2.6.29-0.82.rc3.git7.fc11.x86_64).

Comment 14 Tomislav Vujec 2009-04-01 14:29:00 UTC
The same lockdep warning still happens with:
kernel-PAE-2.6.29-21.fc11.i686
xorg-x11-server-Xorg-1.6.0-16.fc11.i586
xorg-x11-drv-intel-2.6.99.902-1.fc11.i586

Comment 15 P. A. López-Valencia 2009-04-04 15:12:27 UTC
I have the same problem, which I reported in bug 493462.

Comment 16 Tomislav Vujec 2009-04-04 16:03:02 UTC
It seems fixed with kernel-PAE-2.6.29.1-46.fc11.i686 and the same xorg packages as before.

Comment 17 P. A. López-Valencia 2009-04-04 18:25:15 UTC
For i915 video, perhaps. For a mobile Intel 965G with integrated video (X3100), no.

Comment 18 Josh Stone 2009-04-04 19:23:25 UTC
The lockdep warning is gone on my Intel 945GME with kernel-PAE-2.6.29.1-46.fc11.i686 too.

Comment 19 Dave Jones 2009-04-06 16:26:09 UTC
*** Bug 493462 has been marked as a duplicate of this bug. ***

Comment 20 P. A. López-Valencia 2009-04-16 14:04:24 UTC
I've been doing some experimentation: using a vanilla 2.6.29.1 kernel with a configuration file based on an Ubuntu server kernel (which works fine with PAE on this very same laptop), tuned similarly to the kernels in Rawhide, including the PAE extensions, there is no lockup on my laptop and X loads just fine.

Intriguingly, since version 52 there is no trace of the crash in the system logs: the kernel finishes loading OK, the wireless card loads OK, and X starts OK but dies without leaving a trace behind. The 4 GiB-limited (a.k.a. i586) kernel works fine, but obviously I can only access 3.5 GiB instead of the 4 GiB of RAM installed in the laptop. The only real change, besides dropping lots of drivers and debugging flags I don't need, was reducing the number of allowed SMP CPUs to 4. Soooo... the problem, I think, is in the secret sauce patched into Fedora's kernel at this point.

I'm attaching a copy of the configuration used with the PAE kernel I'm running right now with Rawhide.
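For reference, the options described above correspond roughly to these config fragments (a sketch only; the attachment in the next comment has the authoritative values):

# PAE extensions, as in the Rawhide kernels
CONFIG_HIGHMEM64G=y
CONFIG_X86_PAE=y
# the one substantive change: cap the number of SMP CPUs at 4
CONFIG_NR_CPUS=4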

Comment 21 P. A. López-Valencia 2009-04-16 14:06:03 UTC
Created attachment 339846 [details]
kernel configuration with PAE that runs fine in a Dell Inspiron 1420 with 4 GiB RAM

Comment 22 Mary Ellen Foster 2009-04-17 19:53:55 UTC
I'm told that my bug (bug 496283) is a duplicate of this one. I have an Intel X4500HD graphics card and 4G of memory, for what it's worth ...

Comment 23 P. A. López-Valencia 2009-04-20 14:15:21 UTC
I'm still having no luck with the 2.6.29.1-100.fc11.PAE kernel. Same lockup, no trace of it in the logs.

Comment 24 P. A. López-Valencia 2009-05-21 21:18:15 UTC
The bug has apparently been fixed in 2.6.29.3-153, according to bug 493526.

Time to set this bug to CLOSED in Rawhide? I can't confirm it right now, as my laptop is running a different GNU/Linux distribution that works fine with a PAE kernel at the moment...

Comment 25 Bug Zapper 2009-06-09 10:53:59 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 11 development cycle.
Changing version to '11'.

More information and reason for this action is here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping

Comment 26 James 2009-06-15 20:35:51 UTC
It's back in Rawhide.

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.30-6.fc12.x86_64 #1
-------------------------------------------------------
Xorg/1790 is trying to acquire lock:
 (&mm->mmap_sem){++++++}, at: [<ffffffff810f8cd8>] might_fault+0x71/0xd9

but task is already holding lock:
 (&dev->mode_config.mutex){+.+.+.}, at: [<ffffffffa0033ae6>] drm_mode_getresources+0x46/0x5a5 [drm]

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (&dev->mode_config.mutex){+.+.+.}:
       [<ffffffff81089f5d>] __lock_acquire+0xa75/0xc0a
       [<ffffffff8108a1e0>] lock_acquire+0xee/0x12e
       [<ffffffff814b7ec3>] __mutex_lock_common+0x5b/0x3bf
       [<ffffffff814b834a>] mutex_lock_nested+0x4f/0x6b
       [<ffffffffa006a447>] intelfb_pan_display+0x98/0x11d [i915]
       [<ffffffff8127675d>] fb_pan_display+0xe2/0x13f
       [<ffffffff81285ab2>] bit_update_start+0x33/0x72
       [<ffffffff81282bd3>] fbcon_switch+0x411/0x42d
       [<ffffffff812ed5b7>] redraw_screen+0xe1/0x192
       [<ffffffff812ed938>] bind_con_driver+0x2d0/0x31a
       [<ffffffff812ed9cd>] take_over_console+0x4b/0x6e
       [<ffffffff81284b90>] fbcon_takeover+0x6f/0xb1
       [<ffffffff8128518e>] fbcon_event_notify+0x21a/0x581
       [<ffffffff814bd115>] notifier_call_chain+0x72/0xba
       [<ffffffff8107b068>] __blocking_notifier_call_chain+0x63/0x8e
       [<ffffffff8107b0ba>] blocking_notifier_call_chain+0x27/0x3d
       [<ffffffff8127629e>] fb_notifier_call_chain+0x2e/0x44
       [<ffffffff812789dd>] register_framebuffer+0x23e/0x267
       [<ffffffffa006b2f9>] intelfb_probe+0x501/0x599 [i915]
       [<ffffffffa003760c>] drm_helper_initial_config+0x187/0x1ac [drm]
       [<ffffffffa0057576>] i915_driver_load+0x8d5/0x942 [i915]
       [<ffffffffa002da35>] drm_get_dev+0x394/0x4ab [drm]
       [<ffffffffa0070c1e>] i915_pci_probe+0x28/0xfc [i915]
       [<ffffffff81259c5f>] local_pci_probe+0x2a/0x42
       [<ffffffff810706a5>] do_work_for_cpu+0x27/0x50
       [<ffffffff81075380>] kthread+0x6d/0xae
       [<ffffffff8101418a>] child_rip+0xa/0x20
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #2 ((fb_notifier_list).rwsem){.+.+.+}:
       [<ffffffff81089f5d>] __lock_acquire+0xa75/0xc0a
       [<ffffffff8108a1e0>] lock_acquire+0xee/0x12e
       [<ffffffff814b8771>] down_read+0x5e/0xa7
       [<ffffffff8107b051>] __blocking_notifier_call_chain+0x4c/0x8e
       [<ffffffff8107b0ba>] blocking_notifier_call_chain+0x27/0x3d
       [<ffffffff8127629e>] fb_notifier_call_chain+0x2e/0x44
       [<ffffffff812789dd>] register_framebuffer+0x23e/0x267
       [<ffffffffa006b2f9>] intelfb_probe+0x501/0x599 [i915]
       [<ffffffffa003760c>] drm_helper_initial_config+0x187/0x1ac [drm]
       [<ffffffffa0057576>] i915_driver_load+0x8d5/0x942 [i915]
       [<ffffffffa002da35>] drm_get_dev+0x394/0x4ab [drm]
       [<ffffffffa0070c1e>] i915_pci_probe+0x28/0xfc [i915]
       [<ffffffff81259c5f>] local_pci_probe+0x2a/0x42
       [<ffffffff810706a5>] do_work_for_cpu+0x27/0x50
       [<ffffffff81075380>] kthread+0x6d/0xae
       [<ffffffff8101418a>] child_rip+0xa/0x20
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #1 (&fb_info->lock){+.+.+.}:
       [<ffffffff81089f5d>] __lock_acquire+0xa75/0xc0a
       [<ffffffff8108a1e0>] lock_acquire+0xee/0x12e
       [<ffffffff814b7ec3>] __mutex_lock_common+0x5b/0x3bf
       [<ffffffff814b834a>] mutex_lock_nested+0x4f/0x6b
       [<ffffffff81276ad3>] fb_mmap+0xb5/0x1b8
       [<ffffffff811023a4>] mmap_region+0x2cc/0x4c1
       [<ffffffff811028ab>] do_mmap_pgoff+0x312/0x38b
       [<ffffffff81017ced>] sys_mmap+0xab/0x100
       [<ffffffff81013002>] system_call_fastpath+0x16/0x1b
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #0 (&mm->mmap_sem){++++++}:
       [<ffffffff81089e3a>] __lock_acquire+0x952/0xc0a
       [<ffffffff8108a1e0>] lock_acquire+0xee/0x12e
       [<ffffffff810f8d05>] might_fault+0x9e/0xd9
       [<ffffffffa0033d53>] drm_mode_getresources+0x2b3/0x5a5 [drm]
       [<ffffffffa0028b54>] drm_ioctl+0x223/0x2ef [drm]
       [<ffffffff8113484c>] vfs_ioctl+0x7e/0xaa
       [<ffffffff81134cf5>] do_vfs_ioctl+0x47d/0x4d4
       [<ffffffff81134db1>] sys_ioctl+0x65/0x9c
       [<ffffffff81013002>] system_call_fastpath+0x16/0x1b
       [<ffffffffffffffff>] 0xffffffffffffffff

other info that might help us debug this:

1 lock held by Xorg/1790:
 #0:  (&dev->mode_config.mutex){+.+.+.}, at: [<ffffffffa0033ae6>] drm_mode_getresources+0x46/0x5a5 [drm]

stack backtrace:
Pid: 1790, comm: Xorg Not tainted 2.6.30-6.fc12.x86_64 #1
Call Trace:
 [<ffffffff81089123>] print_circular_bug_tail+0x80/0x9f
 [<ffffffff8108906b>] ? check_noncircular+0xb0/0xe8
 [<ffffffff81089e3a>] __lock_acquire+0x952/0xc0a
 [<ffffffff8108a1e0>] lock_acquire+0xee/0x12e
 [<ffffffff810f8cd8>] ? might_fault+0x71/0xd9
 [<ffffffff810f8cd8>] ? might_fault+0x71/0xd9
 [<ffffffff810f8d05>] might_fault+0x9e/0xd9
 [<ffffffff810f8cd8>] ? might_fault+0x71/0xd9
 [<ffffffffa0033d53>] drm_mode_getresources+0x2b3/0x5a5 [drm]
 [<ffffffffa0033aa0>] ? drm_mode_getresources+0x0/0x5a5 [drm]
 [<ffffffffa0028b54>] drm_ioctl+0x223/0x2ef [drm]
 [<ffffffff8113484c>] vfs_ioctl+0x7e/0xaa
 [<ffffffff81134cf5>] do_vfs_ioctl+0x47d/0x4d4
 [<ffffffff81134db1>] sys_ioctl+0x65/0x9c
 [<ffffffff810b34b7>] ? audit_filter_syscall+0x44/0x13d
 [<ffffffff81013002>] system_call_fastpath+0x16/0x1b

Comment 27 Adam Williamson 2009-06-15 21:25:51 UTC
Kevin Fenzi confirms this; his log is http://scrye.com/pastebin/636. Kristian thinks it's because patches from the F11 kernel haven't yet been forward-ported to F12's. Setting back to NEW; I hope that's right.

-- 
Fedora Bugzappers volunteer triage team
https://fedoraproject.org/wiki/BugZappers

Comment 28 Adam Williamson 2009-06-15 21:27:03 UTC
*** Bug 492684 has been marked as a duplicate of this bug. ***

Comment 29 Adam Williamson 2009-06-15 21:27:15 UTC
*** Bug 488633 has been marked as a duplicate of this bug. ***

Comment 30 Vedran Miletić 2009-10-26 11:43:14 UTC
Is this still an issue with Fedora 12 Beta?

Comment 31 Tom London 2009-10-26 13:29:30 UTC
Haven't seen this with F12 (or with Rawhide) in ages (4-5 months?)....

Comment 32 Vedran Miletić 2009-10-26 14:43:52 UTC
Thanks for reporting back.

