Description of problem:
It happened at the beginning of the first session.

Additional info:
reporter:       libreport-2.3.0

[ INFO: possible recursive locking detected ]
3.20.0-0.rc0.git9.1.fc22.x86_64 #1 Not tainted
---------------------------------------------
Xorg/1484 is trying to acquire lock:
 (&dev->struct_mutex){+.+.+.}, at: [<ffffffffa013fac9>] i915_gem_unmap_dma_buf+0x39/0x110 [i915]

but task is already holding lock:
 (&dev->struct_mutex){+.+.+.}, at: [<ffffffffa002eb12>] drm_gem_object_handle_unreference_unlocked+0xc2/0x130 [drm]

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&dev->struct_mutex);
  lock(&dev->struct_mutex);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

1 lock held by Xorg/1484:
 #0:  (&dev->struct_mutex){+.+.+.}, at: [<ffffffffa002eb12>] drm_gem_object_handle_unreference_unlocked+0xc2/0x130 [drm]

stack backtrace:
CPU: 0 PID: 1484 Comm: Xorg Not tainted 3.20.0-0.rc0.git9.1.fc22.x86_64 #1
Hardware name: ASUSTeK COMPUTER INC. U32VJ/U32VJ, BIOS U32VJ.201 08/29/2012
 0000000000000000 000000002dfdbeed ffff8800b67779d8 ffffffff818773cd
 0000000000000000 ffffffff82bdbaf0 ffff8800b6777ad8 ffffffff811091c9
 00000000b6777ae8 ffff88012a003fc0 ffff880128f90000 0000000000000000
Call Trace:
 [<ffffffff818773cd>] dump_stack+0x4c/0x65
 [<ffffffff811091c9>] __lock_acquire+0x1bb9/0x1e20
 [<ffffffff8102f6ef>] ? save_stack_trace+0x2f/0x50
 [<ffffffff81109df7>] lock_acquire+0xc7/0x2a0
 [<ffffffffa013fac9>] ? i915_gem_unmap_dma_buf+0x39/0x110 [i915]
 [<ffffffff8187c02d>] mutex_lock_nested+0x7d/0x450
 [<ffffffffa013fac9>] ? i915_gem_unmap_dma_buf+0x39/0x110 [i915]
 [<ffffffffa013fac9>] ? i915_gem_unmap_dma_buf+0x39/0x110 [i915]
 [<ffffffffa013fac9>] i915_gem_unmap_dma_buf+0x39/0x110 [i915]
 [<ffffffff815ab2e5>] dma_buf_unmap_attachment+0x55/0x80
 [<ffffffffa0049592>] drm_prime_gem_destroy+0x22/0x40 [drm]
 [<ffffffffa0306461>] nouveau_gem_object_del+0x81/0xf0 [nouveau]
 [<ffffffffa002e5b7>] drm_gem_object_free+0x27/0x40 [drm]
 [<ffffffffa002eb30>] drm_gem_object_handle_unreference_unlocked+0xe0/0x130 [drm]
 [<ffffffffa002ec51>] drm_gem_handle_delete+0xd1/0x150 [drm]
 [<ffffffffa002f3c0>] drm_gem_close_ioctl+0x20/0x30 [drm]
 [<ffffffffa002fdab>] drm_ioctl+0x1db/0x640 [drm]
 [<ffffffff8110385f>] ? lock_release_holdtime.part.29+0xf/0x200
 [<ffffffff811071ad>] ? trace_hardirqs_on_caller+0x13d/0x1e0
 [<ffffffff8110725d>] ? trace_hardirqs_on+0xd/0x10
 [<ffffffffa02fdc62>] nouveau_drm_ioctl+0x72/0xd0 [nouveau]
 [<ffffffff8128c9a8>] do_vfs_ioctl+0x2e8/0x530
 [<ffffffff8128cc71>] SyS_ioctl+0x81/0xa0
 [<ffffffff81880969>] system_call_fastpath+0x12/0x17
Created attachment 993974 [details] File: dmesg
*********** MASS BUG UPDATE **************

We apologize for the inconvenience. There are a large number of bugs to go through, and several of them have gone stale. Because of this, we are doing a mass bug update across all of the Fedora 22 kernel bugs.

Fedora 22 has now been rebased to 4.2.3-200.fc22. Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you have moved on to Fedora 23 and are still experiencing this issue, please change the version to Fedora 23. If you experience different issues, please open a new bug report for those.
*********** MASS BUG UPDATE **************

This bug is being closed with INSUFFICIENT_DATA as there has not been a response in over 4 weeks. If you are still experiencing this issue, please reopen the bug and attach the relevant data from the latest kernel you are running, along with any data that might have been requested previously.