Description of problem:
Upon starting sshd, I get a kernel message and backtrace about a possible unsafe locking scenario.

Version-Release number of selected component (if applicable):
kernel-3.16.0-0.rc5.git1.1.fc21.x86_64

How reproducible:

Steps to Reproduce:
1. systemctl start sshd
2.
3.

Actual results:
[ 121.167910] [ INFO: possible circular locking dependency detected ]
[ 121.168079] 3.16.0-0.rc5.git1.1.fc21.x86_64 #1 Not tainted
And more, in attached dmesg.

Expected results:
Not this.

Additional info:
This is everything below the point of the command being issued. The attached dmesg contains the complete dmesg from boot.

[ 121.167585] ======================================================
[ 121.167910] [ INFO: possible circular locking dependency detected ]
[ 121.168079] 3.16.0-0.rc5.git1.1.fc21.x86_64 #1 Not tainted
[ 121.168082] -------------------------------------------------------
[ 121.168082] sshd/1293 is trying to acquire lock:
[ 121.168082]  (&isec->lock){+.+.+.}, at: [<ffffffff81375265>] inode_doinit_with_dentry+0xc5/0x690
[ 121.168082] but task is already holding lock:
[ 121.168082]  (&mm->mmap_sem){++++++}, at: [<ffffffff811e44af>] vm_mmap_pgoff+0x8f/0xf0
[ 121.168082] which lock already depends on the new lock.
[ 121.168082] the existing dependency chain (in reverse order) is:
[ 121.168082] -> #2 (&mm->mmap_sem){++++++}:
[ 121.168082]        [<ffffffff81102104>] lock_acquire+0xa4/0x1d0
[ 121.168082]        [<ffffffff811f2434>] might_fault+0x94/0xc0
[ 121.168082]        [<ffffffff81263442>] filldir+0x92/0x120
[ 121.168082]        [<ffffffffa0054ba9>] xfs_dir2_block_getdents.isra.12+0x1b9/0x210 [xfs]
[ 121.168082]        [<ffffffffa0054dff>] xfs_readdir+0x19f/0x250 [xfs]
[ 121.168082]        [<ffffffffa005749b>] xfs_file_readdir+0x2b/0x40 [xfs]
[ 121.168082]        [<ffffffff8126321a>] iterate_dir+0x9a/0x140
[ 121.168082]        [<ffffffff8126374d>] SyS_getdents+0x9d/0x130
[ 121.168082]        [<ffffffff81811ca9>] system_call_fastpath+0x16/0x1b
[ 121.168082] -> #1 (&xfs_dir_ilock_class){++++.+}:
[ 121.168082]        [<ffffffff81102104>] lock_acquire+0xa4/0x1d0
[ 121.168082]        [<ffffffff810fabb7>] down_read_nested+0x57/0xa0
[ 121.168082]        [<ffffffffa00aa8a2>] xfs_ilock+0xf2/0x1c0 [xfs]
[ 121.168082]        [<ffffffffa00aa9e4>] xfs_ilock_attr_map_shared+0x34/0x40 [xfs]
[ 121.168082]        [<ffffffffa0077027>] xfs_attr_get+0xd7/0x190 [xfs]
[ 121.168082]        [<ffffffffa006f20d>] xfs_xattr_get+0x3d/0x80 [xfs]
[ 121.168082]        [<ffffffff8127870f>] generic_getxattr+0x4f/0x70
[ 121.168082]        [<ffffffff81375312>] inode_doinit_with_dentry+0x172/0x690
[ 121.168082]        [<ffffffff81375908>] sb_finish_set_opts+0xd8/0x280
[ 121.168082]        [<ffffffff81375d97>] selinux_set_mnt_opts+0x2e7/0x630
[ 121.168082]        [<ffffffff81376157>] superblock_doinit+0x77/0xf0
[ 121.168082]        [<ffffffff813761e0>] delayed_superblock_init+0x10/0x20
[ 121.168082]        [<ffffffff8124fd72>] iterate_supers+0xb2/0x110
[ 121.168082]        [<ffffffff81378123>] selinux_complete_init+0x33/0x40
[ 121.168082]        [<ffffffff81387e73>] security_load_policy+0x103/0x630
[ 121.168082]        [<ffffffff81379ef1>] sel_write_load+0xb1/0x7e0
[ 121.168082]        [<ffffffff8124bfca>] vfs_write+0xba/0x200
[ 121.168082]        [<ffffffff8124cc3c>] SyS_write+0x5c/0xd0
[ 121.168082]        [<ffffffff81811ca9>] system_call_fastpath+0x16/0x1b
[ 121.168082] -> #0 (&isec->lock){+.+.+.}:
[ 121.168082]        [<ffffffff8110163b>] __lock_acquire+0x1abb/0x1ca0
[ 121.168082]        [<ffffffff81102104>] lock_acquire+0xa4/0x1d0
[ 121.168082]        [<ffffffff8180d085>] mutex_lock_nested+0x85/0x440
[ 121.168082]        [<ffffffff81375265>] inode_doinit_with_dentry+0xc5/0x690
[ 121.168082]        [<ffffffff8137645c>] selinux_d_instantiate+0x1c/0x20
[ 121.168082]        [<ffffffff81369ffb>] security_d_instantiate+0x1b/0x30
[ 121.168082]        [<ffffffff812671e0>] d_instantiate+0x50/0x80
[ 121.168082]        [<ffffffff811dfab9>] __shmem_file_setup+0xe9/0x270
[ 121.168082]        [<ffffffff811e29a8>] shmem_zero_setup+0x28/0x70
[ 121.168082]        [<ffffffff811fe241>] mmap_region+0x5b1/0x5f0
[ 121.168082]        [<ffffffff811fe5a9>] do_mmap_pgoff+0x329/0x410
[ 121.168082]        [<ffffffff811e44d0>] vm_mmap_pgoff+0xb0/0xf0
[ 121.168082]        [<ffffffff811fc996>] SyS_mmap_pgoff+0x116/0x2c0
[ 121.168082]        [<ffffffff8101fda2>] SyS_mmap+0x22/0x30
[ 121.168082]        [<ffffffff81811ca9>] system_call_fastpath+0x16/0x1b
[ 121.168082] other info that might help us debug this:
[ 121.168082] Chain exists of: &isec->lock --> &xfs_dir_ilock_class --> &mm->mmap_sem
[ 121.168082] Possible unsafe locking scenario:
[ 121.168082]        CPU0                    CPU1
[ 121.168082]        ----                    ----
[ 121.168082]   lock(&mm->mmap_sem);
[ 121.168082]                                lock(&xfs_dir_ilock_class);
[ 121.168082]                                lock(&mm->mmap_sem);
[ 121.168082]   lock(&isec->lock);
[ 121.168082]  *** DEADLOCK ***
[ 121.168082] 1 lock held by sshd/1293:
[ 121.168082]  #0: (&mm->mmap_sem){++++++}, at: [<ffffffff811e44af>] vm_mmap_pgoff+0x8f/0xf0
[ 121.168082] stack backtrace:
[ 121.168082] CPU: 0 PID: 1293 Comm: sshd Not tainted 3.16.0-0.rc5.git1.1.fc21.x86_64 #1
[ 121.168082] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
[ 121.168082]  0000000000000000 0000000091acc19d ffff88009a12ba90 ffffffff818080d6
[ 121.168082]  ffffffff82b5fe10 ffff88009a12bad0 ffffffff818053fc ffff88009a12bb30
[ 121.168082]  ffff880099e54da8 ffff880099e54d70 0000000000000001 ffff880099e55910
[ 121.168082] Call Trace:
[ 121.168082]  [<ffffffff818080d6>] dump_stack+0x4d/0x66
[ 121.168082]  [<ffffffff818053fc>] print_circular_bug+0x201/0x20f
[ 121.168082]  [<ffffffff8110163b>] __lock_acquire+0x1abb/0x1ca0
[ 121.168082]  [<ffffffff810e1b3d>] ? sched_clock_local+0x1d/0x90
[ 121.168082]  [<ffffffff81102104>] lock_acquire+0xa4/0x1d0
[ 121.168082]  [<ffffffff81375265>] ? inode_doinit_with_dentry+0xc5/0x690
[ 121.168082]  [<ffffffff8180d085>] mutex_lock_nested+0x85/0x440
[ 121.168082]  [<ffffffff81375265>] ? inode_doinit_with_dentry+0xc5/0x690
[ 121.168082]  [<ffffffff810242de>] ? native_sched_clock+0x2e/0xb0
[ 121.168082]  [<ffffffff81375265>] ? inode_doinit_with_dentry+0xc5/0x690
[ 121.168082]  [<ffffffff81024369>] ? sched_clock+0x9/0x10
[ 121.168082]  [<ffffffff810e1b3d>] ? sched_clock_local+0x1d/0x90
[ 121.168082]  [<ffffffff81375265>] inode_doinit_with_dentry+0xc5/0x690
[ 121.168082]  [<ffffffff810fc24f>] ? lock_release_holdtime.part.28+0xf/0x200
[ 121.168082]  [<ffffffff8137645c>] selinux_d_instantiate+0x1c/0x20
[ 121.168082]  [<ffffffff81369ffb>] security_d_instantiate+0x1b/0x30
[ 121.168082]  [<ffffffff812671e0>] d_instantiate+0x50/0x80
[ 121.168082]  [<ffffffff811dfab9>] __shmem_file_setup+0xe9/0x270
[ 121.168082]  [<ffffffff811e29a8>] shmem_zero_setup+0x28/0x70
[ 121.168082]  [<ffffffff811fe241>] mmap_region+0x5b1/0x5f0
[ 121.168082]  [<ffffffff811fe5a9>] do_mmap_pgoff+0x329/0x410
[ 121.168082]  [<ffffffff811e44d0>] vm_mmap_pgoff+0xb0/0xf0
[ 121.168082]  [<ffffffff811fc996>] SyS_mmap_pgoff+0x116/0x2c0
[ 121.168082]  [<ffffffff810ff7dd>] ? trace_hardirqs_on+0xd/0x10
[ 121.168082]  [<ffffffff8101fda2>] SyS_mmap+0x22/0x30
[ 121.168082]  [<ffffffff81811ca9>] system_call_fastpath+0x16/0x1b
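For readability: the chain above boils down to a classic ABBA lock-order inversion. The SELinux policy-load path established isec->lock -> xfs_dir_ilock_class -> mmap_sem, while sshd's mmap() of a shared anonymous mapping tries mmap_sem -> isec->lock. Below is a minimal userspace sketch of that pattern, purely as an analogy to make the splat easier to follow; the pthread mutexes are hypothetical stand-ins for the kernel locks, and the middle xfs_dir_ilock step is collapsed away. This is not kernel code.

/*
 * Hypothetical userspace analogy of the ABBA inversion lockdep reports
 * above. Mutex names are stand-ins for the kernel locks; not kernel code.
 * Build: gcc -pthread abba.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t mmap_sem  = PTHREAD_MUTEX_INITIALIZER; /* stand-in for &mm->mmap_sem */
static pthread_mutex_t isec_lock = PTHREAD_MUTEX_INITIALIZER; /* stand-in for &isec->lock   */

/* CPU0 side of the splat: the mmap() path holds mmap_sem, then wants isec->lock. */
static void *mmap_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&mmap_sem);   /* vm_mmap_pgoff() takes mmap_sem            */
    usleep(10000);                   /* give the other ordering time to run       */
    pthread_mutex_lock(&isec_lock);  /* inode_doinit_with_dentry() wants isec     */
    pthread_mutex_unlock(&isec_lock);
    pthread_mutex_unlock(&mmap_sem);
    return NULL;
}

/* CPU1 side: the earlier policy-load path that established the reverse order
 * (isec->lock -> xfs_dir_ilock -> mmap_sem), collapsed here to two locks. */
static void *policy_load_path(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&isec_lock);
    usleep(10000);
    pthread_mutex_lock(&mmap_sem);
    pthread_mutex_unlock(&mmap_sem);
    pthread_mutex_unlock(&isec_lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;

    pthread_create(&a, NULL, mmap_path, NULL);
    pthread_create(&b, NULL, policy_load_path, NULL);
    pthread_join(a, NULL);  /* with both sleeps hit, this never returns: A holds  */
    pthread_join(b, NULL);  /* mmap_sem and waits on isec_lock, B holds the reverse */
    puts("got lucky, no deadlock this time");
    return 0;
}

Run with the two sleeps in place, the threads wedge exactly the way the CPU0/CPU1 diagram above describes; the kernel case differs only in that lockdep flags the inconsistent ordering before an actual deadlock has to occur.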
Created attachment 919744 [details]
dmesg
How reproducible: Always. systemctl start sshd triggers it, as does every boot when sshd.service is enabled and systemd starts it at boot time. However, systemctl restart sshd does not trigger it.
*********** MASS BUG UPDATE **************

We apologize for the inconvenience. There are a large number of bugs to go through and several of them have gone stale. Due to this, we are doing a mass bug update across all of the Fedora 21 kernel bugs.

Fedora 21 has now been rebased to 3.18.3-201.fc21. Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you experience different issues, please open a new bug report for those.
*********** MASS BUG UPDATE **************

This bug is being closed with INSUFFICIENT_DATA as there has not been a response in over 3 weeks. If you are still experiencing this issue, please reopen it and attach the relevant data from the latest kernel you are running, along with any data that might have been requested previously.