Created attachment 1241939 [details]
dmesg

Description of problem:
I already wrote about this, but with the new debug kernels I see more information about the locks held. Is anyone interested in this information?

[  246.773256] INFO: task pool:2293 blocked for more than 120 seconds.
[  246.773262]       Not tainted 4.9.3-200.fc25.x86_64+debug #1
[  246.773263] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  246.773265] pool            D11840  2293   1600 0x00000000
[  246.773270]  ffff9684f4395a00 ffff9684ff7da4d8 ffff9684dafa4000 ffff9684ff7da4c0
[  246.773273]  ffff9684fb2fc000 ffffbc6888b93768 ffffffffad90fa4a 0000000000000000
[  246.773276]  0000000000000000 ffff9684dafa46b8 ffff9684ff7da4d8 0000000000000000
[  246.773279] Call Trace:
[  246.773286]  [<ffffffffad90fa4a>] ? __schedule+0x2fa/0xab0
[  246.773287]  [<ffffffffad911846>] ? wait_for_completion+0xe6/0x140
[  246.773289]  [<ffffffffad91023d>] schedule+0x3d/0x90
[  246.773290]  [<ffffffffad915752>] schedule_timeout+0x2c2/0x530
[  246.773293]  [<ffffffffad0ed5d7>] ? sched_clock_cpu+0xa7/0xc0
[  246.773295]  [<ffffffffad1115d6>] ? mark_held_locks+0x76/0xa0
[  246.773297]  [<ffffffffad91707c>] ? _raw_spin_unlock_irq+0x2c/0x40
[  246.773298]  [<ffffffffad911846>] ? wait_for_completion+0xe6/0x140
[  246.773300]  [<ffffffffad1116f5>] ? trace_hardirqs_on_caller+0xf5/0x1b0
[  246.773301]  [<ffffffffad911846>] ? wait_for_completion+0xe6/0x140
[  246.773303]  [<ffffffffad911865>] wait_for_completion+0x105/0x140
[  246.773304]  [<ffffffffad0e66a0>] ? wake_up_q+0x80/0x80
[  246.773342]  [<ffffffffc0a445e3>] ? _xfs_buf_read+0x73/0x90 [xfs]
[  246.773370]  [<ffffffffc0a44201>] xfs_buf_submit_wait+0xd1/0x440 [xfs]
[  246.773395]  [<ffffffffc0a445e3>] _xfs_buf_read+0x73/0x90 [xfs]
[  246.773420]  [<ffffffffc0a446de>] xfs_buf_read_map+0xde/0x300 [xfs]
[  246.773449]  [<ffffffffc0a88727>] ? xfs_trans_read_buf_map+0x1d7/0x650 [xfs]
[  246.773478]  [<ffffffffc0a88727>] xfs_trans_read_buf_map+0x1d7/0x650 [xfs]
[  246.773502]  [<ffffffffc0a24461>] xfs_imap_to_bp+0x71/0x110 [xfs]
[  246.773527]  [<ffffffffc0a24c47>] xfs_iread+0x87/0x380 [xfs]
[  246.773554]  [<ffffffffc0a50f8f>] ? xfs_inode_alloc+0x15f/0x230 [xfs]
[  246.773580]  [<ffffffffc0a51955>] xfs_iget+0x595/0x1070 [xfs]
[  246.773605]  [<ffffffffc0a5159b>] ? xfs_iget+0x1db/0x1070 [xfs]
[  246.773632]  [<ffffffffc0a5dd80>] xfs_lookup+0x140/0x200 [xfs]
[  246.773658]  [<ffffffffc0a58d83>] xfs_vn_lookup+0x73/0xb0 [xfs]
[  246.773662]  [<ffffffffad2b87b2>] lookup_slow+0x132/0x220
[  246.773665]  [<ffffffffad2bc5cc>] walk_component+0x1ec/0x310
[  246.773666]  [<ffffffffad2bbfb4>] ? path_init+0x644/0x750
[  246.773668]  [<ffffffffad2bce17>] path_lookupat+0x67/0x120
[  246.773670]  [<ffffffffad2be2f1>] filename_lookup+0xb1/0x180
[  246.773672]  [<ffffffffad2a732f>] ? __check_object_size+0xff/0x1d6
[  246.773674]  [<ffffffffad4b04dd>] ? strncpy_from_user+0x4d/0x170
[  246.773676]  [<ffffffffad2be496>] user_path_at_empty+0x36/0x40
[  246.773678]  [<ffffffffad2b1b06>] vfs_fstatat+0x66/0xc0
[  246.773679]  [<ffffffffad1116f5>] ? trace_hardirqs_on_caller+0xf5/0x1b0
[  246.773681]  [<ffffffffad2b2101>] SYSC_newlstat+0x31/0x60
[  246.773683]  [<ffffffffad1116f5>] ? trace_hardirqs_on_caller+0xf5/0x1b0
[  246.773685]  [<ffffffffad00301a>] ? trace_hardirqs_on_thunk+0x1a/0x1c
[  246.773686]  [<ffffffffad2b223e>] SyS_newlstat+0xe/0x10
[  246.773688]  [<ffffffffad917981>] entry_SYSCALL_64_fastpath+0x1f/0xc2
[  246.773690] Showing all locks held in the system:
[  246.773701] 2 locks held by khungtaskd/73:
[  246.773701]  #0:  (rcu_read_lock){......}, at: [<ffffffffad199cb3>] watchdog+0xa3/0x5e0
[  246.773707]  #1:  (tasklist_lock){.+.+..}, at: [<ffffffffad10f43d>] debug_show_all_locks+0x3d/0x1a0
[  246.773718] 3 locks held by kworker/u16:5/159:
[  246.773718]  #0:  ("writeback"){.+.+.+}, at: [<ffffffffad0d0aca>] process_one_work+0x1ba/0x6f0
[  246.773723]  #1:  ((&(&wb->dwork)->work)){+.+.+.}, at: [<ffffffffad0d0aca>] process_one_work+0x1ba/0x6f0
[  246.773726]  #2:  (&type->s_umount_key#59){++++.+}, at: [<ffffffffad2b030b>] trylock_super+0x1b/0x50
[  246.773774] 2 locks held by pool/2293:
[  246.773775]  #0:  (&type->i_mutex_dir_key#4){++++++}, at: [<ffffffffad2b8765>] lookup_slow+0xe5/0x220
[  246.773780]  #1:  (&(&ip->i_iolock)->mr_lock){++++++}, at: [<ffffffffc0a5c251>] xfs_ilock+0x231/0x2c0 [xfs]
[  246.773812] 1 lock held by tracker-extract/1932:
[  246.773813]  #0:  (&(&ip->i_iolock)->mr_lock){++++++}, at: [<ffffffffc0a5c251>] xfs_ilock+0x231/0x2c0 [xfs]
[  246.773844] 3 locks held by pool/1951:
[  246.773845]  #0:  (sb_writers#16){.+.+.+}, at: [<ffffffffad2ac3a3>] vfs_write+0x183/0x1a0
[  246.773850]  #1:  (&sb->s_type->i_mutex_key#20){+.+.+.}, at: [<ffffffffc0a4b3ed>] xfs_file_buffered_aio_write+0x5d/0x340 [xfs]
[  246.773880]  #2:  (&(&ip->i_iolock)->mr_lock){++++++}, at: [<ffffffffc0a5c22d>] xfs_ilock+0x20d/0x2c0 [xfs]
[  246.773921] 4 locks held by Chrome_FileThre/2317:
[  246.773921]  #0:  (sb_writers#16){.+.+.+}, at: [<ffffffffad2d4674>] mnt_want_write+0x24/0x50
[  246.773926]  #1:  (&type->i_mutex_dir_key#4/1){+.+.+.}, at: [<ffffffffad2b848a>] lock_rename+0xda/0x100
[  246.773931]  #2:  (sb_internal#2){.+.+.+}, at: [<ffffffffc0a6ece1>] xfs_trans_alloc+0xe1/0x130 [xfs]
[  246.773962]  #3:  (&xfs_nondir_ilock_class){++++..}, at: [<ffffffffc0a5c205>] xfs_ilock+0x1e5/0x2c0 [xfs]
[  246.773992] 1 lock held by BrowserBlocking/2370:
[  246.773992]  #0:  (&xfs_nondir_ilock_class){++++..}, at: [<ffffffffc0a5c193>] xfs_ilock+0x173/0x2c0 [xfs]
[  246.774021] 6 locks held by SimpleCacheWork/2404:
[  246.774021]  #0:  (sb_writers#16){.+.+.+}, at: [<ffffffffad2a990c>] do_sys_ftruncate.constprop.14+0xdc/0x110
[  246.774025]  #1:  (&sb->s_type->i_mutex_key#20){+.+.+.}, at: [<ffffffffad2a9595>] do_truncate+0x65/0xc0
[  246.774029]  #2:  (&(&ip->i_iolock)->mr_lock){++++++}, at: [<ffffffffc0a5c22d>] xfs_ilock+0x20d/0x2c0 [xfs]
[  246.774055]  #3:  (&(&ip->i_mmaplock)->mr_lock){+++++.}, at: [<ffffffffc0a5c1e0>] xfs_ilock+0x1c0/0x2c0 [xfs]
[  246.774083]  #4:  (sb_internal#2){.+.+.+}, at: [<ffffffffc0a6ece1>] xfs_trans_alloc+0xe1/0x130 [xfs]
[  246.774114]  #5:  (&xfs_nondir_ilock_class){++++..}, at: [<ffffffffc0a5c205>] xfs_ilock+0x1e5/0x2c0 [xfs]
[  246.774142] 6 locks held by SimpleCacheWork/2405:
[  246.774143]  #0:  (sb_writers#16){.+.+.+}, at: [<ffffffffad2a990c>] do_sys_ftruncate.constprop.14+0xdc/0x110
[  246.774148]  #1:  (&sb->s_type->i_mutex_key#20){+.+.+.}, at: [<ffffffffad2a9595>] do_truncate+0x65/0xc0
[  246.774151]  #2:  (&(&ip->i_iolock)->mr_lock){++++++}, at: [<ffffffffc0a5c22d>] xfs_ilock+0x20d/0x2c0 [xfs]
[  246.774204]  #3:  (&(&ip->i_mmaplock)->mr_lock){+++++.}, at: [<ffffffffc0a5c1e0>] xfs_ilock+0x1c0/0x2c0 [xfs]
[  246.774232]  #4:  (sb_internal#2){.+.+.+}, at: [<ffffffffc0a6ece1>] xfs_trans_alloc+0xe1/0x130 [xfs]
[  246.774261]  #5:  (&xfs_nondir_ilock_class){++++..}, at: [<ffffffffc0a5c205>] xfs_ilock+0x1e5/0x2c0 [xfs]
[  246.774288] 6 locks held by SimpleCacheWork/2407:
[  246.774288]  #0:  (sb_writers#16){.+.+.+}, at: [<ffffffffad2a990c>] do_sys_ftruncate.constprop.14+0xdc/0x110
[  246.774292]  #1:  (&sb->s_type->i_mutex_key#20){+.+.+.}, at: [<ffffffffad2a9595>] do_truncate+0x65/0xc0
[  246.774296]  #2:  (&(&ip->i_iolock)->mr_lock#2){++++++}, at: [<ffffffffc0a5c22d>] xfs_ilock+0x20d/0x2c0 [xfs]
[  246.774323]  #3:  (&(&ip->i_mmaplock)->mr_lock){+++++.}, at: [<ffffffffc0a5c1e0>] xfs_ilock+0x1c0/0x2c0 [xfs]
[  246.774349]  #4:  (sb_internal#2){.+.+.+}, at: [<ffffffffc0a6ece1>] xfs_trans_alloc+0xe1/0x130 [xfs]
[  246.774380]  #5:  (&xfs_nondir_ilock_class){++++..}, at: [<ffffffffc0a5c205>] xfs_ilock+0x1e5/0x2c0 [xfs]
[  246.774408] 6 locks held by SimpleCacheWork/2643:
[  246.774408]  #0:  (sb_writers#16){.+.+.+}, at: [<ffffffffad2a990c>] do_sys_ftruncate.constprop.14+0xdc/0x110
[  246.774413]  #1:  (&sb->s_type->i_mutex_key#20){+.+.+.}, at: [<ffffffffad2a9595>] do_truncate+0x65/0xc0
[  246.774417]  #2:  (&(&ip->i_iolock)->mr_lock){++++++}, at: [<ffffffffc0a5c22d>] xfs_ilock+0x20d/0x2c0 [xfs]
[  246.774444]  #3:  (&(&ip->i_mmaplock)->mr_lock){+++++.}, at: [<ffffffffc0a5c1e0>] xfs_ilock+0x1c0/0x2c0 [xfs]
[  246.774472]  #4:  (sb_internal#2){.+.+.+}, at: [<ffffffffc0a6ece1>] xfs_trans_alloc+0xe1/0x130 [xfs]
[  246.774500]  #5:  (&xfs_nondir_ilock_class){++++..}, at: [<ffffffffc0a5c205>] xfs_ilock+0x1e5/0x2c0 [xfs]
[  246.774529] 4 locks held by SimpleCacheWork/2652:
[  246.774530]  #0:  (sb_writers#16){.+.+.+}, at: [<ffffffffad2a990c>] do_sys_ftruncate.constprop.14+0xdc/0x110
[  246.774534]  #1:  (&sb->s_type->i_mutex_key#20){+.+.+.}, at: [<ffffffffad2a9595>] do_truncate+0x65/0xc0
[  246.774539]  #2:  (&(&ip->i_iolock)->mr_lock){++++++}, at: [<ffffffffc0a5c22d>] xfs_ilock+0x20d/0x2c0 [xfs]
[  246.774568]  #3:  (&(&ip->i_mmaplock)->mr_lock){+++++.}, at: [<ffffffffc0a5c1e0>] xfs_ilock+0x1c0/0x2c0 [xfs]
[  246.774636] 3 locks held by SimpleCacheWork/2778:
[  246.774637]  #0:  (sb_writers#16){.+.+.+}, at: [<ffffffffad2d4674>] mnt_want_write+0x24/0x50
[  246.774642]  #1:  (sb_internal#2){.+.+.+}, at: [<ffffffffc0a6ece1>] xfs_trans_alloc+0xe1/0x130 [xfs]
[  246.774671]  #2:  (&xfs_nondir_ilock_class){++++..}, at: [<ffffffffc0a5c205>] xfs_ilock+0x1e5/0x2c0 [xfs]
[  246.774699] 4 locks held by SimpleCacheWork/2779:
[  246.774700]  #0:  (sb_writers#16){.+.+.+}, at: [<ffffffffad2a990c>] do_sys_ftruncate.constprop.14+0xdc/0x110
[  246.774704]  #1:  (&sb->s_type->i_mutex_key#20){+.+.+.}, at: [<ffffffffad2a9595>] do_truncate+0x65/0xc0
[  246.774709]  #2:  (&(&ip->i_iolock)->mr_lock#2){++++++}, at: [<ffffffffc0a5c22d>] xfs_ilock+0x20d/0x2c0 [xfs]
[  246.774736]  #3:  (&(&ip->i_mmaplock)->mr_lock){+++++.}, at: [<ffffffffc0a5c1e0>] xfs_ilock+0x1c0/0x2c0 [xfs]
[  246.774765] 4 locks held by SimpleCacheWork/3358:
[  246.774765]  #0:  (sb_writers#16){.+.+.+}, at: [<ffffffffad2a990c>] do_sys_ftruncate.constprop.14+0xdc/0x110
[  246.774770]  #1:  (&sb->s_type->i_mutex_key#20){+.+.+.}, at: [<ffffffffad2a9595>] do_truncate+0x65/0xc0
[  246.774774]  #2:  (&(&ip->i_iolock)->mr_lock){++++++}, at: [<ffffffffc0a5c22d>] xfs_ilock+0x20d/0x2c0 [xfs]
[  246.774800]  #3:  (&(&ip->i_mmaplock)->mr_lock){+++++.}, at: [<ffffffffc0a5c1e0>] xfs_ilock+0x1c0/0x2c0 [xfs]
[  246.774829] 4 locks held by SimpleCacheWork/3359:
[  246.774829]  #0:  (sb_writers#16){.+.+.+}, at: [<ffffffffad2a990c>] do_sys_ftruncate.constprop.14+0xdc/0x110
[  246.774834]  #1:  (&sb->s_type->i_mutex_key#20){+.+.+.}, at: [<ffffffffad2a9595>] do_truncate+0x65/0xc0
[  246.774838]  #2:  (&(&ip->i_iolock)->mr_lock){++++++}, at: [<ffffffffc0a5c22d>] xfs_ilock+0x20d/0x2c0 [xfs]
[  246.774864]  #3:  (&(&ip->i_mmaplock)->mr_lock){+++++.}, at: [<ffffffffc0a5c1e0>] xfs_ilock+0x1c0/0x2c0 [xfs]
[  246.774892] 1 lock held by SimpleCacheWork/3360:
[  246.774893]  #0:  (&(&ip->i_iolock)->mr_lock){++++++}, at: [<ffffffffc0a5c251>] xfs_ilock+0x231/0x2c0 [xfs]
[  246.774991] 1 lock held by gnome-boxes/3549:
[  246.774992]  #0:  (&(&ip->i_iolock)->mr_lock){++++++}, at: [<ffffffffc0a5c251>] xfs_ilock+0x231/0x2c0 [xfs]
[  246.775049] 2 locks held by bash/4147:
[  246.775049]  #0:  (&tty->ldisc_sem){++++.+}, at: [<ffffffffad9162a7>] ldsem_down_read+0x37/0x40
[  246.775055]  #1:  (&ldata->atomic_read_lock){+.+...}, at: [<ffffffffad5949f7>] n_tty_read+0xc7/0x940
[  246.775061] 6 locks held by steam/4244:
[  246.775062]  #0:  (sb_writers#16){.+.+.+}, at: [<ffffffffad2d4674>] mnt_want_write+0x24/0x50
[  246.775066]  #1:  (&sb->s_type->i_mutex_key#20){+.+.+.}, at: [<ffffffffad2a9595>] do_truncate+0x65/0xc0
[  246.775071]  #2:  (&(&ip->i_iolock)->mr_lock){++++++}, at: [<ffffffffc0a5c22d>] xfs_ilock+0x20d/0x2c0 [xfs]
[  246.775099]  #3:  (&(&ip->i_mmaplock)->mr_lock){+++++.}, at: [<ffffffffc0a5c1e0>] xfs_ilock+0x1c0/0x2c0 [xfs]
[  246.775128]  #4:  (sb_internal#2){.+.+.+}, at: [<ffffffffc0a6ece1>] xfs_trans_alloc+0xe1/0x130 [xfs]
[  246.775158]  #5:  (&xfs_nondir_ilock_class){++++..}, at: [<ffffffffc0a5c205>] xfs_ilock+0x1e5/0x2c0 [xfs]
[  246.775206] 2 locks held by CJobMgr::m_Work/5788:
[  246.775206]  #0:  (&type->i_mutex_dir_key#4){++++++}, at: [<ffffffffad2b8765>] lookup_slow+0xe5/0x220
[  246.775211]  #1:  (&(&ip->i_iolock)->mr_lock){++++++}, at: [<ffffffffc0a5c251>] xfs_ilock+0x231/0x2c0 [xfs]
[  246.775286] 1 lock held by steamwebhelper/6284:
[  246.775287]  #0:  (&(&ip->i_mmaplock)->mr_lock){+++++.}, at: [<ffffffffc0a5c26a>] xfs_ilock+0x24a/0x2c0 [xfs]
[  246.775318] =============================================

This occurred only with debug kernels.
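For anyone triaging similar reports, the per-task summary lines from lockdep's "Showing all locks held in the system" output can be pulled out of a saved dump with a quick grep. A minimal sketch; the file `/tmp/hung_dump.txt` is a stand-in for the attached dmesg log, and the heredoc just seeds it with a few sample lines from this report:

```shell
# Seed a sample log file; in practice, point the grep at the saved dmesg dump.
cat <<'EOF' > /tmp/hung_dump.txt
[  246.773701] 2 locks held by khungtaskd/73:
[  246.773774] 2 locks held by pool/2293:
[  246.774021] 6 locks held by SimpleCacheWork/2404:
EOF

# List only the per-task summary lines ("N locks held by task/pid:"),
# skipping the individual #0/#1/... lock entries.
grep -E '[0-9]+ locks? held by' /tmp/hung_dump.txt
```

This makes it easy to see at a glance which tasks are holding locks (and how many) without scrolling through the full trace.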
This message is a reminder that Fedora 25 is nearing its end of life. Approximately 4 (four) weeks from now Fedora will stop maintaining and issuing updates for Fedora 25. It is Fedora's policy to close all bug reports from releases that are no longer maintained. At that time this bug will be closed as EOL if it remains open with a Fedora 'version' of '25'.

Package Maintainer: If you wish for this bug to remain open because you plan to fix it in a currently maintained version, simply change the 'version' to a later Fedora version.

Thank you for reporting this issue, and we are sorry that we were not able to fix it before Fedora 25 reached end of life. If you would still like to see this bug fixed and are able to reproduce it against a later version of Fedora, you are encouraged to change the 'version' to a later Fedora version before this bug is closed, as described in the policy above.

Although we aim to fix as many bugs as possible during every release's lifetime, sometimes those efforts are overtaken by events. Often a more recent Fedora release includes newer upstream software that fixes bugs or makes them obsolete.
Fedora 25 changed to end-of-life (EOL) status on 2017-12-12. Fedora 25 is no longer maintained, which means that it will not receive any further security or bug fix updates. As a result, we are closing this bug.

If you can reproduce this bug against a currently maintained version of Fedora, please feel free to reopen this bug against that version. If you are unable to reopen this bug, please file a new report against the current release.

If you experience problems, please add a comment to this bug.

Thank you for reporting this bug, and we are sorry it could not be fixed.