Bug 728316
| Field | Value |
|---|---|
| Summary | recursive locking (hdl->lock taken in v4l2_ctrl_add_handler & find_ref_lock) |
| Product | Fedora |
| Component | kernel |
| Version | rawhide |
| Hardware | Unspecified |
| OS | Unspecified |
| Status | CLOSED NOTABUG |
| Severity | unspecified |
| Priority | unspecified |
| Target Milestone | --- |
| Target Release | --- |
| Reporter | Nicolas Mailhot <nicolas.mailhot> |
| Assignee | Mauro Carvalho Chehab <mchehab> |
| QA Contact | Fedora Extras Quality Assurance <extras-qa> |
| CC | gansalmon, gtmkramer, hdegoede, itamar, jonathan, jones.peter.busi, kernel-maint, lwang, madhu.chinakonda, selinux |
| Doc Type | Bug Fix |
| Last Closed | 2012-02-02 16:57:24 UTC |
| Attachments | attachment 523055: dmesg output of 3.1.0-0.rc6.git0.0.fc17.x86_64 kernel |
Description
Nicolas Mailhot
2011-08-04 16:58:05 UTC
Did this happen with the 3.0 kernels?

Jul 25 20:02:57 arekh kernel: [ 36.332117] =============================================
Jul 25 20:02:57 arekh kernel: [ 36.332121] [ INFO: possible recursive locking detected ]
Jul 25 20:02:57 arekh kernel: [ 36.332124] 3.0.0-1.fc16.x86_64 #1
Jul 25 20:02:57 arekh kernel: [ 36.332126] ---------------------------------------------
Jul 25 20:02:57 arekh kernel: [ 36.332129] modprobe/680 is trying to acquire lock:
Jul 25 20:02:57 arekh kernel: [ 36.332131] (&hdl->lock){+.+.+.}, at: [<ffffffffa02f4623>] handler_new_ref+0xe9/0x18a [videodev]
Jul 25 20:02:57 arekh kernel: [ 36.332143]
Jul 25 20:02:57 arekh kernel: [ 36.332144] but task is already holding lock:
Jul 25 20:02:57 arekh kernel: [ 36.332146] (&hdl->lock){+.+.+.}, at: [<ffffffffa02f470d>] v4l2_ctrl_add_handler+0x49/0x8e [videodev]
Jul 25 20:02:57 arekh kernel: [ 36.332156]
Jul 25 20:02:57 arekh kernel: [ 36.332156] other info that might help us debug this:
Jul 25 20:02:57 arekh kernel: [ 36.332159] Possible unsafe locking scenario:
Jul 25 20:02:57 arekh kernel: [ 36.332160]
Jul 25 20:02:57 arekh kernel: [ 36.332162] CPU0
Jul 25 20:02:57 arekh kernel: [ 36.332163] ----
Jul 25 20:02:57 arekh kernel: [ 36.332165] lock(&hdl->lock);
Jul 25 20:02:57 arekh kernel: [ 36.332168] lock(&hdl->lock);
Jul 25 20:02:57 arekh kernel: [ 36.332172]
Jul 25 20:02:57 arekh kernel: [ 36.332172] *** DEADLOCK ***
Jul 25 20:02:57 arekh kernel: [ 36.332173]
Jul 25 20:02:57 arekh kernel: [ 36.332175] May be due to missing lock nesting notation
Jul 25 20:02:57 arekh kernel: [ 36.332176]
Jul 25 20:02:57 arekh kernel: [ 36.332179] 3 locks held by modprobe/680:
Jul 25 20:02:57 arekh kernel: [ 36.332181] #0: (&__lockdep_no_validate__){+.+.+.}, at: [<ffffffff8130b500>] __driver_attach+0x3b/0x82
Jul 25 20:02:57 arekh kernel: [ 36.332191] #1: (&__lockdep_no_validate__){+.+.+.}, at: [<ffffffff8130b50e>] __driver_attach+0x49/0x82
Jul 25 20:02:57 arekh kernel: [ 36.332198] #2: (&hdl->lock){+.+.+.}, at: [<ffffffffa02f470d>] v4l2_ctrl_add_handler+0x49/0x8e [videodev]
Jul 25 20:02:57 arekh kernel: [ 36.332208]
Jul 25 20:02:57 arekh kernel: [ 36.332209] stack backtrace:
Jul 25 20:02:57 arekh kernel: [ 36.332212] Pid: 680, comm: modprobe Not tainted 3.0.0-1.fc16.x86_64 #1
Jul 25 20:02:57 arekh kernel: [ 36.332215] Call Trace:
Jul 25 20:02:57 arekh kernel: [ 36.332221] [<ffffffff8108a375>] __lock_acquire+0x917/0xcf7
Jul 25 20:02:57 arekh kernel: [ 36.332225] [<ffffffff81087536>] ? trace_hardirqs_off+0xd/0xf
Jul 25 20:02:57 arekh kernel: [ 36.332230] [<ffffffff81087a7a>] ? lock_release_holdtime.part.8+0x6b/0x72
Jul 25 20:02:57 arekh kernel: [ 36.332237] [<ffffffffa02f4623>] ? handler_new_ref+0xe9/0x18a [videodev]
Jul 25 20:02:57 arekh kernel: [ 36.332241] [<ffffffff8108abe2>] lock_acquire+0xbf/0x103
Jul 25 20:02:57 arekh kernel: [ 36.332248] [<ffffffffa02f4623>] ? handler_new_ref+0xe9/0x18a [videodev]
Jul 25 20:02:57 arekh kernel: [ 36.332252] [<ffffffff8108aeb8>] ? mark_held_locks+0x4b/0x6d
Jul 25 20:02:57 arekh kernel: [ 36.332258] [<ffffffffa02f4623>] ? handler_new_ref+0xe9/0x18a [videodev]
Jul 25 20:02:57 arekh kernel: [ 36.332264] [<ffffffff814dc8f2>] __mutex_lock_common+0x4c/0x361
Jul 25 20:02:57 arekh kernel: [ 36.332270] [<ffffffffa02f4623>] ? handler_new_ref+0xe9/0x18a [videodev]
Jul 25 20:02:57 arekh kernel: [ 36.332277] [<ffffffffa02f44ae>] ? kzalloc.constprop.15+0x13/0x15 [videodev]
Jul 25 20:02:57 arekh kernel: [ 36.332282] [<ffffffff81129e7a>] ? __kmalloc+0xfa/0x10c
Jul 25 20:02:57 arekh kernel: [ 36.332286] [<ffffffff814dcd16>] mutex_lock_nested+0x40/0x45
Jul 25 20:02:57 arekh kernel: [ 36.332293] [<ffffffffa02f4623>] handler_new_ref+0xe9/0x18a [videodev]
Jul 25 20:02:57 arekh kernel: [ 36.332299] [<ffffffffa02f4728>] v4l2_ctrl_add_handler+0x64/0x8e [videodev]
Jul 25 20:02:57 arekh kernel: [ 36.332306] [<ffffffffa02f188b>] v4l2_device_register_subdev+0xde/0x19e [videodev]
Jul 25 20:02:57 arekh kernel: [ 36.332317] [<ffffffffa030af3a>] ivtv_gpio_init+0x14e/0x15c [ivtv]
Jul 25 20:02:57 arekh kernel: [ 36.332327] [<ffffffffa0318b9b>] ivtv_probe+0xdc4/0x1662 [ivtv]
Jul 25 20:02:57 arekh kernel: [ 36.332331] [<ffffffff8108aeb8>] ? mark_held_locks+0x4b/0x6d
Jul 25 20:02:57 arekh kernel: [ 36.332336] [<ffffffff814de410>] ? _raw_spin_unlock_irqrestore+0x4d/0x52
Jul 25 20:02:57 arekh kernel: [ 36.332342] [<ffffffff8125d94e>] local_pci_probe+0x44/0x75
Jul 25 20:02:57 arekh kernel: [ 36.332346] [<ffffffff8125e43e>] pci_device_probe+0xd0/0xff
Jul 25 20:02:57 arekh kernel: [ 36.332351] [<ffffffff8130b3e3>] driver_probe_device+0x131/0x213
Jul 25 20:02:57 arekh kernel: [ 36.332355] [<ffffffff8130b523>] __driver_attach+0x5e/0x82
Jul 25 20:02:57 arekh kernel: [ 36.332360] [<ffffffff8130b4c5>] ? driver_probe_device+0x213/0x213
Jul 25 20:02:57 arekh kernel: [ 36.332364] [<ffffffff8130a424>] bus_for_each_dev+0x59/0x8f
Jul 25 20:02:57 arekh kernel: [ 36.332369] [<ffffffff8130afb7>] driver_attach+0x1e/0x20
Jul 25 20:02:57 arekh kernel: [ 36.332373] [<ffffffff8130abcf>] bus_add_driver+0xd4/0x22a
Jul 25 20:02:57 arekh kernel: [ 36.332382] [<ffffffffa032c000>] ? 0xffffffffa032bfff
Jul 25 20:02:57 arekh kernel: [ 36.332386] [<ffffffff8130b9e6>] driver_register+0x98/0x105
Jul 25 20:02:57 arekh kernel: [ 36.332390] [<ffffffffa032c000>] ? 0xffffffffa032bfff
Jul 25 20:02:57 arekh kernel: [ 36.332395] [<ffffffff8125ed11>] __pci_register_driver+0x66/0xd1
Jul 25 20:02:57 arekh kernel: [ 36.332399] [<ffffffffa032c000>] ? 0xffffffffa032bfff
Jul 25 20:02:57 arekh kernel: [ 36.332408] [<ffffffffa032c078>] module_start+0x78/0x1000 [ivtv]
Jul 25 20:02:57 arekh kernel: [ 36.332413] [<ffffffff81002099>] do_one_initcall+0x7f/0x13a
Jul 25 20:02:57 arekh kernel: [ 36.332418] [<ffffffffa032c000>] ? 0xffffffffa032bfff
Jul 25 20:02:57 arekh kernel: [ 36.332423] [<ffffffff81095c20>] sys_init_module+0x114/0x268
Jul 25 20:02:57 arekh kernel: [ 36.332427] [<ffffffff814e4fc2>] system_call_fastpath+0x16/0x1b

I don't keep older logs.

Seeing this with kernel-3.1.0-0.rc3.git0.0.fc17.x86_64:

[ 124.667846] =======================================================
[ 124.667850] [ INFO: possible circular locking dependency detected ]
[ 124.667853] 3.1.0-0.rc3.git0.0.fc17.x86_64 #1
[ 124.667854] -------------------------------------------------------
[ 124.667857] rhythmbox/1899 is trying to acquire lock:
[ 124.667859] (&sb->s_type->i_mutex_key#13){+.+.+.}, at: [<ffffffff811ae5c2>] ext4_evict_inode+0x76/0x33c
[ 124.667869]
[ 124.667870] but task is already holding lock:
[ 124.667872] (&mm->mmap_sem){++++++}, at: [<ffffffff81116ffa>] sys_munmap+0x3b/0x60
[ 124.667879]
[ 124.667879] which lock already depends on the new lock.
[ 124.667880]
[ 124.667882]
[ 124.667883] the existing dependency chain (in reverse order) is:
[ 124.667885]
[ 124.667885] -> #1 (&mm->mmap_sem){++++++}:
[ 124.667889] [<ffffffff8108eff1>] lock_acquire+0xf3/0x13e
[ 124.667894] [<ffffffff8110fc37>] might_fault+0x89/0xac
[ 124.667898] [<ffffffff81151c5b>] filldir+0x6f/0xc7
[ 124.667903] [<ffffffff811a53a7>] call_filldir+0x96/0xc0
[ 124.667908] [<ffffffff811a56e0>] ext4_readdir+0x1bd/0x548
[ 124.667912] [<ffffffff81151eb0>] vfs_readdir+0x7b/0xb4
[ 124.667915] [<ffffffff81151fcf>] sys_getdents+0x7e/0xd1
[ 124.667919] [<ffffffff8150b102>] system_call_fastpath+0x16/0x1b
[ 124.667924]
[ 124.667924] -> #0 (&sb->s_type->i_mutex_key#13){+.+.+.}:
[ 124.667929] [<ffffffff8108e81e>] __lock_acquire+0xa1a/0xcf7
[ 124.667933] [<ffffffff8108eff1>] lock_acquire+0xf3/0x13e
[ 124.667936] [<ffffffff8150263b>] __mutex_lock_common+0x5d/0x39a
[ 124.667940] [<ffffffff81502a87>] mutex_lock_nested+0x40/0x45
[ 124.667944] [<ffffffff811ae5c2>] ext4_evict_inode+0x76/0x33c
[ 124.667947] [<ffffffff81157d52>] evict+0x99/0x153
[ 124.667951] [<ffffffff81157f9d>] iput+0x191/0x19a
[ 124.667954] [<ffffffff81154c55>] dentry_kill+0x123/0x145
[ 124.667957] [<ffffffff81155064>] dput+0xf7/0x107
[ 124.667961] [<ffffffff8114413b>] fput+0x1dd/0x1f5
[ 124.667965] [<ffffffff8111594e>] remove_vma+0x56/0x87
[ 124.667968] [<ffffffff81116b5d>] do_munmap+0x2f2/0x30b
[ 124.667971] [<ffffffff81117008>] sys_munmap+0x49/0x60
[ 124.667974] [<ffffffff8150b102>] system_call_fastpath+0x16/0x1b
[ 124.667978]
[ 124.667979] other info that might help us debug this:
[ 124.667979]
[ 124.667981] Possible unsafe locking scenario:
[ 124.667982]
[ 124.667984] CPU0                    CPU1
[ 124.667985] ----                    ----
[ 124.667987] lock(&mm->mmap_sem);
[ 124.667990]                         lock(&sb->s_type->i_mutex_key);
[ 124.667993]                         lock(&mm->mmap_sem);
[ 124.667997] lock(&sb->s_type->i_mutex_key);
[ 124.667999]
[ 124.668000] *** DEADLOCK ***
[ 124.668001]
[ 124.668003] 1 lock held by rhythmbox/1899:
[ 124.668005] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff81116ffa>] sys_munmap+0x3b/0x60
[ 124.668011]
[ 124.668011] stack backtrace:
[ 124.668014] Pid: 1899, comm: rhythmbox Not tainted 3.1.0-0.rc3.git0.0.fc17.x86_64 #1
[ 124.668016] Call Trace:
[ 124.668021] [<ffffffff814f9bd4>] print_circular_bug+0x1f8/0x209
[ 124.668024] [<ffffffff8108e81e>] __lock_acquire+0xa1a/0xcf7
[ 124.668024] [<ffffffff810152af>] ? native_sched_clock+0x34/0x36
[ 124.668024] [<ffffffff811ae5c2>] ? ext4_evict_inode+0x76/0x33c
[ 124.668024] [<ffffffff8108eff1>] lock_acquire+0xf3/0x13e
[ 124.668024] [<ffffffff811ae5c2>] ? ext4_evict_inode+0x76/0x33c
[ 124.668024] [<ffffffff811ae5c2>] ? ext4_evict_inode+0x76/0x33c
[ 124.668024] [<ffffffff8150263b>] __mutex_lock_common+0x5d/0x39a
[ 124.668024] [<ffffffff811ae5c2>] ? ext4_evict_inode+0x76/0x33c
[ 124.668024] [<ffffffff810801e9>] ? sched_clock_local+0x12/0x75
[ 124.668024] [<ffffffff8108b885>] ? trace_hardirqs_off+0xd/0xf
[ 124.668024] [<ffffffff8108bdf0>] ? lock_release_holdtime.part.9+0x59/0x62
[ 124.668024] [<ffffffff81502a87>] mutex_lock_nested+0x40/0x45
[ 124.668024] [<ffffffff811ae5c2>] ext4_evict_inode+0x76/0x33c
[ 124.668024] [<ffffffff81157d52>] evict+0x99/0x153
[ 124.668024] [<ffffffff81157f9d>] iput+0x191/0x19a
[ 124.668024] [<ffffffff81154c55>] dentry_kill+0x123/0x145
[ 124.668024] [<ffffffff81155064>] dput+0xf7/0x107
[ 124.668024] [<ffffffff8114413b>] fput+0x1dd/0x1f5
[ 124.668024] [<ffffffff8111594e>] remove_vma+0x56/0x87
[ 124.668024] [<ffffffff81116b5d>] do_munmap+0x2f2/0x30b
[ 124.668024] [<ffffffff81117008>] sys_munmap+0x49/0x60
[ 124.668024] [<ffffffff8150b102>] system_call_fastpath+0x16/0x1b
[ 130.554792] rhythmbox-metad[1983]: segfault at 9 ip 0000000000408275 sp 00007fff9b49d920 error 4 in rhythmbox-metadata[400000+12000]

Also seeing this on F16 (same kernel 3.1.0-0.rc3.git0.0.fc16.x86_64). No further side effects noted so far. This happens only once every bootup.

[ 72.355148] =======================================================
[ 72.355152] [ INFO: possible circular locking dependency detected ]
[ 72.355154] 3.1.0-0.rc3.git0.0.fc16.x86_64 #1
[ 72.355155] -------------------------------------------------------
[ 72.355157] dconf-service/1625 is trying to acquire lock:
[ 72.355159] (&sb->s_type->i_mutex_key#13){+.+.+.}, at: [<ffffffff811ae562>] ext4_evict_inode+0x76/0x33c
[ 72.355168]
[ 72.355168] but task is already holding lock:
[ 72.355170] (&mm->mmap_sem){++++++}, at: [<ffffffff81116f9a>] sys_munmap+0x3b/0x60
[ 72.355175]
[ 72.355175] which lock already depends on the new lock.
[ 72.355176]
[ 72.355177]
[ 72.355177] the existing dependency chain (in reverse order) is:
[ 72.355179]
[ 72.355179] -> #1 (&mm->mmap_sem){++++++}:
[ 72.355182] [<ffffffff8108eff1>] lock_acquire+0xf3/0x13e
[ 72.355186] [<ffffffff8110fbd7>] might_fault+0x89/0xac
[ 72.355190] [<ffffffff81151bfb>] filldir+0x6f/0xc7
[ 72.355194] [<ffffffff811a5347>] call_filldir+0x96/0xc0
[ 72.355198] [<ffffffff811a5680>] ext4_readdir+0x1bd/0x548
[ 72.355201] [<ffffffff81151e50>] vfs_readdir+0x7b/0xb4
[ 72.355204] [<ffffffff81151f6f>] sys_getdents+0x7e/0xd1
[ 72.355206] [<ffffffff8150b082>] system_call_fastpath+0x16/0x1b
[ 72.355210]
[ 72.355211] -> #0 (&sb->s_type->i_mutex_key#13){+.+.+.}:
[ 72.355214] [<ffffffff8108e81e>] __lock_acquire+0xa1a/0xcf7
[ 72.355216] [<ffffffff8108eff1>] lock_acquire+0xf3/0x13e
[ 72.355219] [<ffffffff815025db>] __mutex_lock_common+0x5d/0x39a
[ 72.355222] [<ffffffff81502a27>] mutex_lock_nested+0x40/0x45
[ 72.355225] [<ffffffff811ae562>] ext4_evict_inode+0x76/0x33c
[ 72.355228] [<ffffffff81157cf2>] evict+0x99/0x153
[ 72.355230] [<ffffffff81157f3d>] iput+0x191/0x19a
[ 72.355232] [<ffffffff81154bf5>] dentry_kill+0x123/0x145
[ 72.355236] [<ffffffff81155004>] dput+0xf7/0x107
[ 72.355238] [<ffffffff811440db>] fput+0x1dd/0x1f5
[ 72.355242] [<ffffffff811158ee>] remove_vma+0x56/0x87
[ 72.355245] [<ffffffff81116afd>] do_munmap+0x2f2/0x30b
[ 72.355248] [<ffffffff81116fa8>] sys_munmap+0x49/0x60
[ 72.355250] [<ffffffff8150b082>] system_call_fastpath+0x16/0x1b
[ 72.355253]
[ 72.355253] other info that might help us debug this:
[ 72.355254]
[ 72.355255] Possible unsafe locking scenario:
[ 72.355256]
[ 72.355257] CPU0                    CPU1
[ 72.355258] ----                    ----
[ 72.355259] lock(&mm->mmap_sem);
[ 72.355261]                         lock(&sb->s_type->i_mutex_key);
[ 72.355264]                         lock(&mm->mmap_sem);
[ 72.355266] lock(&sb->s_type->i_mutex_key);
[ 72.355268]
[ 72.355269] *** DEADLOCK ***
[ 72.355269]
[ 72.355271] 1 lock held by dconf-service/1625:
[ 72.355272] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff81116f9a>] sys_munmap+0x3b/0x60
[ 72.355277]
[ 72.355277] stack backtrace:
[ 72.355279] Pid: 1625, comm: dconf-service Not tainted 3.1.0-0.rc3.git0.0.fc16.x86_64 #1
[ 72.355281] Call Trace:
[ 72.355286] [<ffffffff814f9b74>] print_circular_bug+0x1f8/0x209
[ 72.355289] [<ffffffff8108e81e>] __lock_acquire+0xa1a/0xcf7
[ 72.355292] [<ffffffff811ae562>] ? ext4_evict_inode+0x76/0x33c
[ 72.355295] [<ffffffff8108eff1>] lock_acquire+0xf3/0x13e
[ 72.355297] [<ffffffff811ae562>] ? ext4_evict_inode+0x76/0x33c
[ 72.355300] [<ffffffff811ae562>] ? ext4_evict_inode+0x76/0x33c
[ 72.355304] [<ffffffff815025db>] __mutex_lock_common+0x5d/0x39a
[ 72.355308] [<ffffffff811ae562>] ? ext4_evict_inode+0x76/0x33c
[ 72.355313] [<ffffffff810152af>] ? native_sched_clock+0x34/0x36
[ 72.355317] [<ffffffff810152ba>] ? sched_clock+0x9/0xd
[ 72.355320] [<ffffffff8108b885>] ? trace_hardirqs_off+0xd/0xf
[ 72.355324] [<ffffffff8108bdf0>] ? lock_release_holdtime.part.9+0x59/0x62
[ 72.355327] [<ffffffff81502a27>] mutex_lock_nested+0x40/0x45
[ 72.355330] [<ffffffff811ae562>] ext4_evict_inode+0x76/0x33c
[ 72.355332] [<ffffffff81157cf2>] evict+0x99/0x153
[ 72.355334] [<ffffffff81157f3d>] iput+0x191/0x19a
[ 72.355337] [<ffffffff81154bf5>] dentry_kill+0x123/0x145
[ 72.355340] [<ffffffff81155004>] dput+0xf7/0x107
[ 72.355342] [<ffffffff811440db>] fput+0x1dd/0x1f5
[ 72.355344] [<ffffffff811158ee>] remove_vma+0x56/0x87
[ 72.355346] [<ffffffff81116afd>] do_munmap+0x2f2/0x30b
[ 72.355349] [<ffffffff81116fa8>] sys_munmap+0x49/0x60
[ 72.355352] [<ffffffff8150b082>] system_call_fastpath+0x16/0x1b
[ 74.661594] lp: driver loaded but no devices found
[ 74.716604] ppdev: user-space parallel port driver
[ 118.943458] DMA-API: debugging out of memory - disabling

Tom and Jurgen, you are seeing bug 732572, not this one.

Created attachment 523055 [details]
dmesg output of 3.1.0-0.rc6.git0.0.fc17.x86_64 kernel
Seeing this in kernel 3.1.0-0.rc6.git0.0.fc17.x86_64
[ 235.149579] =======================================================
[ 235.149594] [ INFO: possible circular locking dependency detected ]
[ 235.149606] 3.1.0-0.rc6.git0.0.fc17.x86_64 #1
[ 235.149614] -------------------------------------------------------
[ 235.149624] pulseaudio/1700 is trying to acquire lock:
[ 235.149633] (&dev->lock){+.+.+.}, at: [<ffffffffa017d3a9>] snd_cx231xx_capture_open+0x7a/0x127 [cx231xx_alsa]
[ 235.149664]
[ 235.149667] but task is already holding lock:
[ 235.149675] (&pcm->open_mutex){+.+.+.}, at: [<ffffffffa0247374>] snd_pcm_open+0x15f/0x2d2 [snd_pcm]
[ 235.149713]
[ 235.149715] which lock already depends on the new lock.
[ 235.149719]
Complete dmesg output is attached.
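As a side note for readers of these lockdep reports: a "possible circular locking dependency" means the kernel has seen two locks taken in opposite orders on different code paths. The snippet below is only a standalone userspace illustration of that AB-BA pattern; the thread bodies and lock names are invented for this example and are not taken from the cx231xx or snd_pcm code referenced in the trace above.

```c
/* Illustration only: the AB-BA ordering that lockdep's
 * "circular locking dependency" report describes. */
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *path_one(void *arg)
{
	pthread_mutex_lock(&lock_a);   /* order: A then B */
	pthread_mutex_lock(&lock_b);
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

static void *path_two(void *arg)
{
	pthread_mutex_lock(&lock_b);   /* opposite order: B then A */
	pthread_mutex_lock(&lock_a);   /* can wait forever if path_one holds A */
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, path_one, NULL);
	pthread_create(&t2, NULL, path_two, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}
```

Lockdep flags the inverted ordering even when the deadlock never actually triggers, which is why these reports appear on otherwise working systems.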
(In reply to comment #6)
> Created attachment 523055 [details]
> dmesg output of 3.1.0-0.rc6.git0.0.fc17.x86_64 kernel
>
> Seeing this in kernel 3.1.0-0.rc6.git0.0.fc17.x86_64
> [ 235.149579] =======================================================
> [ 235.149594] [ INFO: possible circular locking dependency detected ]
> [ 235.149606] 3.1.0-0.rc6.git0.0.fc17.x86_64 #1
> [ 235.149614] -------------------------------------------------------
> [ 235.149624] pulseaudio/1700 is trying to acquire lock:
> [ 235.149633] (&dev->lock){+.+.+.}, at: [<ffffffffa017d3a9>] snd_cx231xx_capture_open+0x7a/0x127 [cx231xx_alsa]
> [ 235.149664]
> [ 235.149667] but task is already holding lock:
> [ 235.149675] (&pcm->open_mutex){+.+.+.}, at: [<ffffffffa0247374>] snd_pcm_open+0x15f/0x2d2 [snd_pcm]
> [ 235.149713]
> [ 235.149715] which lock already depends on the new lock.
> [ 235.149719]
>
> Complete dmesg output is attached.

That is a completely different bug.

*** Bug 754258 has been marked as a duplicate of this bug. ***

All: Thanks for the bug reports. I've sent an email to the upstream maintainer of both ivtv and the new v4l2 control framework, which are at the heart of this, pointing him to this bug report. Hopefully he will respond soon.

Comment from Hans Verkuil (the upstream vivi and v4l2 ctrls framework maintainer):

###

Andy Walls has looked at this before. It is a false report (yes, recursive locking does take place, but never on the same object), so if someone knows how to annotate that, then that's the fix. I don't have time to figure that out (my Christmas vacation starts very soon).

Note that this applies to the original bug report/stack trace. I also see other traces (the ones from Tom London, Jurgen Kramer and Peter H. Jones) that are completely unrelated.

###

Closing this out per the last comment.
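For reference, the annotation Hans refers to usually means telling lockdep that taking two locks of the same lock class in a nested fashion is intentional. The sketch below is only an illustration of that mechanism under assumed names (struct ctrl_handler and add_handler_refs are made up, not the real videodev code); mutex_lock_nested() and SINGLE_DEPTH_NESTING are the actual kernel primitives used for this kind of annotation.

```c
/*
 * Minimal sketch, not the actual v4l2 fix: annotate intentional
 * same-class nesting so lockdep does not report it as recursive
 * locking on a single lock.
 */
#include <linux/mutex.h>
#include <linux/lockdep.h>

struct ctrl_handler {
	struct mutex lock;	/* every handler's mutex shares one lock class */
};

static void add_handler_refs(struct ctrl_handler *dst, struct ctrl_handler *src)
{
	mutex_lock(&dst->lock);
	/*
	 * dst and src are different objects, but lockdep only tracks the
	 * lock class; SINGLE_DEPTH_NESTING marks this inner acquisition
	 * as deliberate one-level nesting rather than a self-deadlock.
	 */
	mutex_lock_nested(&src->lock, SINGLE_DEPTH_NESTING);

	/* ... copy control references from src into dst ... */

	mutex_unlock(&src->lock);
	mutex_unlock(&dst->lock);
}
```

Whether the real fix should use such a nesting annotation or give each handler its own lock class is exactly the open question left in the comment above.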