Bug 1121756 - possible circular locking dependency detected when enabling sshd [NEEDINFO]
Summary: possible circular locking dependency detected when enabling sshd
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Fedora
Classification: Fedora
Component: kernel
Version: 21
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Assignee: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2014-07-21 19:22 UTC by Chris Murphy
Modified: 2015-02-24 16:13 UTC
CC: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-02-24 16:13:36 UTC
jforbes: needinfo?


Attachments
dmesg (40.65 KB, text/plain)
2014-07-21 19:22 UTC, Chris Murphy

Description Chris Murphy 2014-07-21 19:22:21 UTC
Description of problem: Upon starting sshd, I get a kernel message and backtrace about a possible unsafe locking scenario.


Version-Release number of selected component (if applicable):
kernel-3.16.0-0.rc5.git1.1.fc21.x86_64

How reproducible:


Steps to Reproduce:
1. systemctl start sshd

Actual results:
[  121.167910] [ INFO: possible circular locking dependency detected ]
[  121.168079] 3.16.0-0.rc5.git1.1.fc21.x86_64 #1 Not tainted

And more, in attached dmesg.

Expected results:

Not this.

Additional info:

This is everything below the point of the command being issued. Attached dmesg contains the complete dmesg from boot.

[  121.167585] ======================================================
[  121.167910] [ INFO: possible circular locking dependency detected ]
[  121.168079] 3.16.0-0.rc5.git1.1.fc21.x86_64 #1 Not tainted
[  121.168082] -------------------------------------------------------
[  121.168082] sshd/1293 is trying to acquire lock:
[  121.168082]  (&isec->lock){+.+.+.}, at: [<ffffffff81375265>] inode_doinit_with_dentry+0xc5/0x690
[  121.168082] 
but task is already holding lock:
[  121.168082]  (&mm->mmap_sem){++++++}, at: [<ffffffff811e44af>] vm_mmap_pgoff+0x8f/0xf0
[  121.168082] 
which lock already depends on the new lock.

[  121.168082] 
the existing dependency chain (in reverse order) is:
[  121.168082] 
-> #2 (&mm->mmap_sem){++++++}:
[  121.168082]        [<ffffffff81102104>] lock_acquire+0xa4/0x1d0
[  121.168082]        [<ffffffff811f2434>] might_fault+0x94/0xc0
[  121.168082]        [<ffffffff81263442>] filldir+0x92/0x120
[  121.168082]        [<ffffffffa0054ba9>] xfs_dir2_block_getdents.isra.12+0x1b9/0x210 [xfs]
[  121.168082]        [<ffffffffa0054dff>] xfs_readdir+0x19f/0x250 [xfs]
[  121.168082]        [<ffffffffa005749b>] xfs_file_readdir+0x2b/0x40 [xfs]
[  121.168082]        [<ffffffff8126321a>] iterate_dir+0x9a/0x140
[  121.168082]        [<ffffffff8126374d>] SyS_getdents+0x9d/0x130
[  121.168082]        [<ffffffff81811ca9>] system_call_fastpath+0x16/0x1b
[  121.168082] 
-> #1 (&xfs_dir_ilock_class){++++.+}:
[  121.168082]        [<ffffffff81102104>] lock_acquire+0xa4/0x1d0
[  121.168082]        [<ffffffff810fabb7>] down_read_nested+0x57/0xa0
[  121.168082]        [<ffffffffa00aa8a2>] xfs_ilock+0xf2/0x1c0 [xfs]
[  121.168082]        [<ffffffffa00aa9e4>] xfs_ilock_attr_map_shared+0x34/0x40 [xfs]
[  121.168082]        [<ffffffffa0077027>] xfs_attr_get+0xd7/0x190 [xfs]
[  121.168082]        [<ffffffffa006f20d>] xfs_xattr_get+0x3d/0x80 [xfs]
[  121.168082]        [<ffffffff8127870f>] generic_getxattr+0x4f/0x70
[  121.168082]        [<ffffffff81375312>] inode_doinit_with_dentry+0x172/0x690
[  121.168082]        [<ffffffff81375908>] sb_finish_set_opts+0xd8/0x280
[  121.168082]        [<ffffffff81375d97>] selinux_set_mnt_opts+0x2e7/0x630
[  121.168082]        [<ffffffff81376157>] superblock_doinit+0x77/0xf0
[  121.168082]        [<ffffffff813761e0>] delayed_superblock_init+0x10/0x20
[  121.168082]        [<ffffffff8124fd72>] iterate_supers+0xb2/0x110
[  121.168082]        [<ffffffff81378123>] selinux_complete_init+0x33/0x40
[  121.168082]        [<ffffffff81387e73>] security_load_policy+0x103/0x630
[  121.168082]        [<ffffffff81379ef1>] sel_write_load+0xb1/0x7e0
[  121.168082]        [<ffffffff8124bfca>] vfs_write+0xba/0x200
[  121.168082]        [<ffffffff8124cc3c>] SyS_write+0x5c/0xd0
[  121.168082]        [<ffffffff81811ca9>] system_call_fastpath+0x16/0x1b
[  121.168082] 
-> #0 (&isec->lock){+.+.+.}:
[  121.168082]        [<ffffffff8110163b>] __lock_acquire+0x1abb/0x1ca0
[  121.168082]        [<ffffffff81102104>] lock_acquire+0xa4/0x1d0
[  121.168082]        [<ffffffff8180d085>] mutex_lock_nested+0x85/0x440
[  121.168082]        [<ffffffff81375265>] inode_doinit_with_dentry+0xc5/0x690
[  121.168082]        [<ffffffff8137645c>] selinux_d_instantiate+0x1c/0x20
[  121.168082]        [<ffffffff81369ffb>] security_d_instantiate+0x1b/0x30
[  121.168082]        [<ffffffff812671e0>] d_instantiate+0x50/0x80
[  121.168082]        [<ffffffff811dfab9>] __shmem_file_setup+0xe9/0x270
[  121.168082]        [<ffffffff811e29a8>] shmem_zero_setup+0x28/0x70
[  121.168082]        [<ffffffff811fe241>] mmap_region+0x5b1/0x5f0
[  121.168082]        [<ffffffff811fe5a9>] do_mmap_pgoff+0x329/0x410
[  121.168082]        [<ffffffff811e44d0>] vm_mmap_pgoff+0xb0/0xf0
[  121.168082]        [<ffffffff811fc996>] SyS_mmap_pgoff+0x116/0x2c0
[  121.168082]        [<ffffffff8101fda2>] SyS_mmap+0x22/0x30
[  121.168082]        [<ffffffff81811ca9>] system_call_fastpath+0x16/0x1b
[  121.168082] 
other info that might help us debug this:

[  121.168082] Chain exists of:
  &isec->lock --> &xfs_dir_ilock_class --> &mm->mmap_sem

[  121.168082]  Possible unsafe locking scenario:

[  121.168082]        CPU0                    CPU1
[  121.168082]        ----                    ----
[  121.168082]   lock(&mm->mmap_sem);
[  121.168082]                                lock(&xfs_dir_ilock_class);
[  121.168082]                                lock(&mm->mmap_sem);
[  121.168082]   lock(&isec->lock);
[  121.168082] 
 *** DEADLOCK ***

[  121.168082] 1 lock held by sshd/1293:
[  121.168082]  #0:  (&mm->mmap_sem){++++++}, at: [<ffffffff811e44af>] vm_mmap_pgoff+0x8f/0xf0
[  121.168082] 
stack backtrace:
[  121.168082] CPU: 0 PID: 1293 Comm: sshd Not tainted 3.16.0-0.rc5.git1.1.fc21.x86_64 #1
[  121.168082] Hardware name: innotek GmbH VirtualBox/VirtualBox, BIOS VirtualBox 12/01/2006
[  121.168082]  0000000000000000 0000000091acc19d ffff88009a12ba90 ffffffff818080d6
[  121.168082]  ffffffff82b5fe10 ffff88009a12bad0 ffffffff818053fc ffff88009a12bb30
[  121.168082]  ffff880099e54da8 ffff880099e54d70 0000000000000001 ffff880099e55910
[  121.168082] Call Trace:
[  121.168082]  [<ffffffff818080d6>] dump_stack+0x4d/0x66
[  121.168082]  [<ffffffff818053fc>] print_circular_bug+0x201/0x20f
[  121.168082]  [<ffffffff8110163b>] __lock_acquire+0x1abb/0x1ca0
[  121.168082]  [<ffffffff810e1b3d>] ? sched_clock_local+0x1d/0x90
[  121.168082]  [<ffffffff81102104>] lock_acquire+0xa4/0x1d0
[  121.168082]  [<ffffffff81375265>] ? inode_doinit_with_dentry+0xc5/0x690
[  121.168082]  [<ffffffff8180d085>] mutex_lock_nested+0x85/0x440
[  121.168082]  [<ffffffff81375265>] ? inode_doinit_with_dentry+0xc5/0x690
[  121.168082]  [<ffffffff810242de>] ? native_sched_clock+0x2e/0xb0
[  121.168082]  [<ffffffff81375265>] ? inode_doinit_with_dentry+0xc5/0x690
[  121.168082]  [<ffffffff81024369>] ? sched_clock+0x9/0x10
[  121.168082]  [<ffffffff810e1b3d>] ? sched_clock_local+0x1d/0x90
[  121.168082]  [<ffffffff81375265>] inode_doinit_with_dentry+0xc5/0x690
[  121.168082]  [<ffffffff810fc24f>] ? lock_release_holdtime.part.28+0xf/0x200
[  121.168082]  [<ffffffff8137645c>] selinux_d_instantiate+0x1c/0x20
[  121.168082]  [<ffffffff81369ffb>] security_d_instantiate+0x1b/0x30
[  121.168082]  [<ffffffff812671e0>] d_instantiate+0x50/0x80
[  121.168082]  [<ffffffff811dfab9>] __shmem_file_setup+0xe9/0x270
[  121.168082]  [<ffffffff811e29a8>] shmem_zero_setup+0x28/0x70
[  121.168082]  [<ffffffff811fe241>] mmap_region+0x5b1/0x5f0
[  121.168082]  [<ffffffff811fe5a9>] do_mmap_pgoff+0x329/0x410
[  121.168082]  [<ffffffff811e44d0>] vm_mmap_pgoff+0xb0/0xf0
[  121.168082]  [<ffffffff811fc996>] SyS_mmap_pgoff+0x116/0x2c0
[  121.168082]  [<ffffffff810ff7dd>] ? trace_hardirqs_on+0xd/0x10
[  121.168082]  [<ffffffff8101fda2>] SyS_mmap+0x22/0x30
[  121.168082]  [<ffffffff81811ca9>] system_call_fastpath+0x16/0x1b

Comment 1 Chris Murphy 2014-07-21 19:22:59 UTC
Created attachment 919744
dmesg

Comment 2 Chris Murphy 2014-07-21 19:26:57 UTC
How reproducible:
Always.

systemctl start sshd triggers it, as does every boot when sshd.service is enabled, and systemd starts it at boot time.

However systemctl restart sshd doesn't trigger it.

Comment 3 Justin M. Forbes 2015-01-27 15:01:00 UTC
*********** MASS BUG UPDATE **************

We apologize for the inconvenience.  There are a large number of bugs to go through and several of them have gone stale.  Due to this, we are doing a mass bug update across all of the Fedora 21 kernel bugs.

Fedora 21 has now been rebased to 3.18.3-201.fc21.  Please test this kernel update (or newer) and let us know if your issue has been resolved or if it is still present with the newer kernel.

If you experience different issues, please open a new bug report for those.

Comment 4 Fedora Kernel Team 2015-02-24 16:13:36 UTC
*********** MASS BUG UPDATE **************
This bug is being closed with INSUFFICIENT_DATA as there has not been a response in over 3 weeks. If you are still experiencing this issue, please reopen and attach the relevant data from the latest kernel you are running and any data that might have been requested previously.

