Bug 1008585 - possible circular locking dependency detected
Summary: possible circular locking dependency detected
Keywords:
Status: CLOSED UPSTREAM
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 22
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL: https://bugzilla.redhat.com/attachmen...
Whiteboard:
Depends On:
Blocks:
 
Reported: 2013-09-16 16:12 UTC by Nicolas Mailhot
Modified: 2015-03-25 12:29 UTC
CC List: 9 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-03-25 12:29:51 UTC
Type: Bug
Embargoed:



Description Nicolas Mailhot 2013-09-16 16:12:18 UTC
sept. 16 16:58:04 arekh.okg kernel: ======================================================
sept. 16 16:58:04 arekh.okg kernel: [ INFO: possible circular locking dependency detected ]
sept. 16 16:58:04 arekh.okg kernel: 3.12.0-0.rc0.git26.1.fc21.x86_64 #1 Not tainted
sept. 16 16:58:04 arekh.okg kernel: -------------------------------------------------------
sept. 16 16:58:04 arekh.okg kernel: kworker/0:1/26 is trying to acquire lock:
sept. 16 16:58:04 arekh.okg kernel:  (&fb_info->lock){+.+.+.}, at: [<ffffffff813cd45d>] lock_fb_info+0x1d/0x50
sept. 16 16:58:04 arekh.okg kernel: 
                                    but task is already holding lock:
sept. 16 16:58:04 arekh.okg kernel:  (console_lock){+.+.+.}, at: [<ffffffff8146c173>] console_callback+0x13/0x180
sept. 16 16:58:04 arekh.okg kernel: 
                                    which lock already depends on the new lock.
sept. 16 16:58:04 arekh.okg kernel: 
                                    the existing dependency chain (in reverse order) is:
sept. 16 16:58:04 arekh.okg kernel: 
                                    -> #1 (console_lock){+.+.+.}:
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff810f1272>] lock_acquire+0xa2/0x1f0
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff810d55c7>] console_lock+0x77/0x80
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff813cf0ad>] register_framebuffer+0x1dd/0x320
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff813e708d>] vesafb_probe+0x5bd/0xa70
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff8149a57c>] platform_drv_probe+0x3c/0x70
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff814982e7>] driver_probe_device+0x87/0x390
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff814986c3>] __driver_attach+0x93/0xa0
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff814960ab>] bus_for_each_dev+0x6b/0xb0
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff81497d0e>] driver_attach+0x1e/0x20
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff81497868>] bus_add_driver+0x208/0x2e0
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff81498d34>] driver_register+0x64/0xf0
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff8149a36a>] __platform_driver_register+0x4a/0x50
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff81f6bb21>] vesafb_driver_init+0x12/0x14
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff810020fa>] do_one_initcall+0xfa/0x1b0
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff81f2b0c3>] kernel_init_freeable+0x17c/0x202
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff81735afe>] kernel_init+0xe/0x190
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff8175442c>] ret_from_fork+0x7c/0xb0
sept. 16 16:58:04 arekh.okg kernel: 
                                    -> #0 (&fb_info->lock){+.+.+.}:
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff810f08d2>] __lock_acquire+0x18f2/0x1aa0
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff810f1272>] lock_acquire+0xa2/0x1f0
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff81744b58>] mutex_lock_nested+0x88/0x400
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff813cd45d>] lock_fb_info+0x1d/0x50
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff813d925b>] fbcon_blank+0x28b/0x2d0
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff81469cf8>] do_blank_screen+0x1d8/0x280
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff8146c1c8>] console_callback+0x68/0x180
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff810981e1>] process_one_work+0x211/0x6e0
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff810987cd>] worker_thread+0x11d/0x3a0
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff810a15dd>] kthread+0xed/0x100
sept. 16 16:58:04 arekh.okg kernel:        [<ffffffff8175442c>] ret_from_fork+0x7c/0xb0
sept. 16 16:58:04 arekh.okg kernel: 
                                    other info that might help us debug this:
sept. 16 16:58:04 arekh.okg kernel:  Possible unsafe locking scenario:
sept. 16 16:58:04 arekh.okg kernel:        CPU0                    CPU1
sept. 16 16:58:04 arekh.okg kernel:        ----                    ----
sept. 16 16:58:04 arekh.okg kernel:   lock(console_lock);
sept. 16 16:58:04 arekh.okg kernel:                                lock(&fb_info->lock);
sept. 16 16:58:04 arekh.okg kernel:                                lock(console_lock);
sept. 16 16:58:04 arekh.okg kernel:   lock(&fb_info->lock);
sept. 16 16:58:04 arekh.okg kernel: 
                                     *** DEADLOCK ***
sept. 16 16:58:04 arekh.okg kernel: 3 locks held by kworker/0:1/26:
sept. 16 16:58:04 arekh.okg kernel:  #0:  (events){.+.+.+}, at: [<ffffffff81098175>] process_one_work+0x1a5/0x6e0
sept. 16 16:58:04 arekh.okg kernel:  #1:  (console_work){+.+...}, at: [<ffffffff81098175>] process_one_work+0x1a5/0x6e0
sept. 16 16:58:04 arekh.okg kernel:  #2:  (console_lock){+.+.+.}, at: [<ffffffff8146c173>] console_callback+0x13/0x180
sept. 16 16:58:04 arekh.okg kernel: 
                                    stack backtrace:
sept. 16 16:58:04 arekh.okg kernel: CPU: 0 PID: 26 Comm: kworker/0:1 Not tainted 3.12.0-0.rc0.git26.1.fc21.x86_64 #1
sept. 16 16:58:04 arekh.okg kernel: Hardware name: Gigabyte Technology Co., Ltd. EP45-DS5/EP45-DS5, BIOS F12 09/30/2008
sept. 16 16:58:04 arekh.okg kernel: Workqueue: events console_callback
sept. 16 16:58:04 arekh.okg kernel:  ffffffff8259f9a0 ffff880128a07a20 ffffffff81741573 ffffffff8259f9a0
sept. 16 16:58:04 arekh.okg kernel:  ffff880128a07a60 ffffffff8173d0a9 ffff880128a07ab0 ffff8801289cce70
sept. 16 16:58:04 arekh.okg kernel:  ffff8801289cc590 0000000000000003 0000000000000003 ffff8801289cce70
sept. 16 16:58:04 arekh.okg kernel: Call Trace:
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff81741573>] dump_stack+0x54/0x74
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff8173d0a9>] print_circular_bug+0x201/0x210
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff810f08d2>] __lock_acquire+0x18f2/0x1aa0
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff8174b273>] ? retint_restore_args+0x13/0x13
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff810f1272>] lock_acquire+0xa2/0x1f0
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff813cd45d>] ? lock_fb_info+0x1d/0x50
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff81744b58>] mutex_lock_nested+0x88/0x400
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff813cd45d>] ? lock_fb_info+0x1d/0x50
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff813cd45d>] ? lock_fb_info+0x1d/0x50
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff813dc5ee>] ? bit_clear+0xce/0x100
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff813cd45d>] lock_fb_info+0x1d/0x50
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff813d925b>] fbcon_blank+0x28b/0x2d0
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff8174a3b6>] ? _raw_spin_unlock_irqrestore+0x36/0x70
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff810eebad>] ? trace_hardirqs_on_caller+0xfd/0x1c0
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff810eec7d>] ? trace_hardirqs_on+0xd/0x10
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff81084feb>] ? try_to_del_timer_sync+0x5b/0x80
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff81469cf8>] do_blank_screen+0x1d8/0x280
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff8146c1c8>] console_callback+0x68/0x180
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff810981e1>] process_one_work+0x211/0x6e0
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff81098175>] ? process_one_work+0x1a5/0x6e0
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff810987cd>] worker_thread+0x11d/0x3a0
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff810986b0>] ? process_one_work+0x6e0/0x6e0
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff810a15dd>] kthread+0xed/0x100
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff810a14f0>] ? insert_kthread_work+0x80/0x80
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff8175442c>] ret_from_fork+0x7c/0xb0
sept. 16 16:58:04 arekh.okg kernel:  [<ffffffff810a14f0>] ? insert_kthread_work+0x80/0x80
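
For readers less used to lockdep output: the "possible unsafe locking scenario" above is the classic AB-BA ordering problem. The report says the framebuffer registration path ends up taking console_lock while fb_info->lock is already held (vesafb_probe -> register_framebuffer), whereas the console blanking path (console_callback -> do_blank_screen -> fbcon_blank -> lock_fb_info) takes console_lock first and then fb_info->lock. A minimal userspace sketch of the same shape, with plain pthread mutexes standing in for the two kernel locks; the function names and structure here are purely illustrative, not the kernel code:

#include <pthread.h>
#include <stdio.h>

/* Stand-ins for the two kernel locks named in the report. */
static pthread_mutex_t console_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t fb_info_lock = PTHREAD_MUTEX_INITIALIZER;

/* Mirrors the dependency recorded on the probe/registration side:
 * fb_info->lock is held, then console_lock is taken. */
static void probe_path(void)
{
    pthread_mutex_lock(&fb_info_lock);
    pthread_mutex_lock(&console_lock);   /* order: fb_info -> console */
    printf("probe path done\n");
    pthread_mutex_unlock(&console_lock);
    pthread_mutex_unlock(&fb_info_lock);
}

/* Mirrors the console blanking side: console_lock is held, then
 * fb_info->lock is taken. */
static void blank_path(void)
{
    pthread_mutex_lock(&console_lock);
    pthread_mutex_lock(&fb_info_lock);   /* order: console -> fb_info */
    printf("blank path done\n");
    pthread_mutex_unlock(&fb_info_lock);
    pthread_mutex_unlock(&console_lock);
}

int main(void)
{
    /* Run sequentially so the example never actually deadlocks. */
    probe_path();
    blank_path();
    return 0;
}

Even run sequentially, a lock-order checker such as Valgrind's helgrind should flag the inconsistent acquisition order in this sketch, which is essentially what lockdep is doing in-kernel: the deadlock only needs both paths to run concurrently once.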

Comment 1 Josh Boyer 2013-12-16 15:12:12 UTC
Are you still seeing this with the 3.13 kernels?

Comment 2 Nicolas Mailhot 2013-12-16 20:20:09 UTC
[   99.488010] [ INFO: possible recursive locking detected ]
[   99.488010] 3.13.0-0.rc3.git5.1.fc21.x86_64 #1 Not tainted
[   99.488010] ---------------------------------------------
[   99.488010] pulseaudio/1178 is trying to acquire lock:
[   99.488010]  (&(&substream->self_group.lock)->rlock/1){......}, at: [<ffffffffa014f3c3>] snd_pcm_action_group+0xa3/0x240 [snd_pcm]
[   99.488010] 
but task is already holding lock:
[   99.488010]  (&(&substream->self_group.lock)->rlock/1){......}, at: [<ffffffffa014f3c3>] snd_pcm_action_group+0xa3/0x240 [snd_pcm]
[   99.488010] 
other info that might help us debug this:
[   99.488010]  Possible unsafe locking scenario:

[   99.488010]        CPU0
[   99.488010]        ----
[   99.488010]   lock(&(&substream->self_group.lock)->rlock/1);
[   99.488010]   lock(&(&substream->self_group.lock)->rlock/1);
[   99.488010] 
 *** DEADLOCK ***

[   99.488010]  May be due to missing lock nesting notation

[   99.488010] 4 locks held by pulseaudio/1178:
[   99.488010]  #0:  (snd_pcm_link_rwlock){......}, at: [<ffffffffa0150e67>] snd_pcm_drop+0x57/0xe0 [snd_pcm]
[   99.488010]  #1:  (&(&substream->self_group.lock)->rlock){......}, at: [<ffffffffa0150e6f>] snd_pcm_drop+0x5f/0xe0 [snd_pcm]
[   99.488010]  #2:  (&(&substream->group->lock)->rlock){......}, at: [<ffffffffa0150c91>] snd_pcm_action+0x31/0xa0 [snd_pcm]
[   99.488010]  #3:  (&(&substream->self_group.lock)->rlock/1){......}, at: [<ffffffffa014f3c3>] snd_pcm_action_group+0xa3/0x240 [snd_pcm]
[   99.488010] 
stack backtrace:
[   99.488010] CPU: 0 PID: 1178 Comm: pulseaudio Not tainted 3.13.0-0.rc3.git5.1.fc21.x86_64 #1
[   99.488010] Hardware name: Gigabyte Technology Co., Ltd. EP45-DS5/EP45-DS5, BIOS F12 09/30/2008
[   99.488010]  ffffffff825fb750 ffff880115535bd8 ffffffff8175559d ffffffff825fb750
[   99.488010]  ffff880115535ca0 ffffffff810d093a ffff88011557b3a0 ffff880100000001
[   99.488010]  0000000000000002 ffff88011557b3a0 ffffffff00000000 4f527b1400a02501
[   99.488010] Call Trace:
[   99.488010]  [<ffffffff8175559d>] dump_stack+0x4d/0x66
[   99.488010]  [<ffffffff810d093a>] __lock_acquire+0x1b7a/0x1c10
[   99.488010]  [<ffffffff810cf0a5>] ? __lock_acquire+0x2e5/0x1c10
[   99.488010]  [<ffffffff810d11c2>] lock_acquire+0xa2/0x1d0
[   99.488010]  [<ffffffffa014f3c3>] ? snd_pcm_action_group+0xa3/0x240 [snd_pcm]
[   99.488010]  [<ffffffff8175e0bc>] _raw_spin_lock_nested+0x3c/0x80
[   99.488010]  [<ffffffffa014f3c3>] ? snd_pcm_action_group+0xa3/0x240 [snd_pcm]
[   99.488010]  [<ffffffffa014f3c3>] snd_pcm_action_group+0xa3/0x240 [snd_pcm]
[   99.488010]  [<ffffffffa0150cc4>] snd_pcm_action+0x64/0xa0 [snd_pcm]
[   99.488010]  [<ffffffffa0150e90>] snd_pcm_drop+0x80/0xe0 [snd_pcm]
[   99.488010]  [<ffffffffa0152e30>] snd_pcm_common_ioctl1+0x910/0xc00 [snd_pcm]
[   99.488010]  [<ffffffff813137bd>] ? avc_has_perm+0x15d/0x340
[   99.488010]  [<ffffffff81313685>] ? avc_has_perm+0x25/0x340
[   99.488010]  [<ffffffffa01532ab>] snd_pcm_playback_ioctl1+0x18b/0x290 [snd_pcm]
[   99.488010]  [<ffffffffa01533e0>] snd_pcm_playback_ioctl+0x30/0x40 [snd_pcm]
[   99.488010]  [<ffffffff8120f0c0>] do_vfs_ioctl+0x300/0x520
[   99.488010]  [<ffffffff8131625b>] ? selinux_file_ioctl+0x5b/0x110
[   99.488010]  [<ffffffff8120f361>] SyS_ioctl+0x81/0xa0
[   99.488010]  [<ffffffff81768269>] system_call_fastpath+0x16/0x1b
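
A note on the warning text, for anyone reading along: lockdep tracks locks by class rather than by instance, so taking one substream's self_group.lock while already holding another substream's self_group.lock looks like "recursive locking" unless the inner acquisition is annotated with a nesting subclass (spin_lock_nested()/mutex_lock_nested() in the kernel; the "/1" suffix in the trace is such a subclass). Here both the held lock and the lock being acquired show the same class and the same "/1" subclass, hence the "missing lock nesting notation" hint. Below is a small userspace sketch of the code shape involved, with made-up names (struct group, act_on_linked_groups) and plain pthread mutexes, which carry no class annotation at all; it only illustrates the pattern and is not the snd_pcm code:

#include <pthread.h>
#include <stdio.h>

/* Two instances of the same structure, each with its own lock.
 * Lockdep groups these locks into one class, so nesting one
 * inside the other looks like recursion to it unless the inner
 * acquisition carries a nesting subclass annotation. */
struct group {
    pthread_mutex_t lock;
};

static struct group a = { .lock = PTHREAD_MUTEX_INITIALIZER };
static struct group b = { .lock = PTHREAD_MUTEX_INITIALIZER };

/* Linked-stream style operation: lock one group, then, while still
 * holding it, lock a second group of the same type. */
static void act_on_linked_groups(struct group *first, struct group *second)
{
    pthread_mutex_lock(&first->lock);
    pthread_mutex_lock(&second->lock);   /* same lock class, nested */
    printf("acting on both groups\n");
    pthread_mutex_unlock(&second->lock);
    pthread_mutex_unlock(&first->lock);
}

int main(void)
{
    act_on_linked_groups(&a, &b);
    return 0;
}

The two mutexes are distinct objects, so this sketch cannot deadlock by itself; the open question in the report is whether the kernel's nesting annotation in this path is sufficient or whether lockdep is flagging a real ordering problem.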

Comment 3 Josh Boyer 2013-12-16 21:20:26 UTC
That's an entirely different backtrace from the original submission.  I take it you don't see the fb_info lock issue anymore, then?

Comment 4 Nicolas Mailhot 2013-12-18 10:21:58 UTC
Not on this boot. I haven't investigated other boots (the PSU has problems on reboot: four times out of five it lacks the juice to start spinning the fans, so I avoid rebooting too often).

Comment 5 Jaroslav Reznik 2015-03-03 15:03:49 UTC
This bug appears to have been reported against 'rawhide' during the Fedora 22 development cycle.
Changing version to '22'.

More information and reason for this action is here:
https://fedoraproject.org/wiki/Fedora_Program_Management/HouseKeeping/Fedora22

