Bug 437367 - lockdep warning triggered by NetworkManager
Status: CLOSED CURRENTRELEASE
Product: Fedora
Classification: Fedora
Component: kernel
Version: 9
Platform: All Linux
Priority: low
Severity: low
Assigned To: Kernel Maintainer List
QA Contact: Fedora Extras Quality Assurance
Reported: 2008-03-13 14:27 EDT by Bill Nottingham
Modified: 2014-03-16 23:12 EDT

Last Closed: 2009-06-10 11:15:52 EDT
Description Bill Nottingham 2008-03-13 14:27:39 EDT
Description of problem:

Upon shutting down NetworkManager, I got:

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.25-0.113.rc5.git2.fc9 #1
-------------------------------------------------------
NetworkManager/2555 is trying to acquire lock:
 (events){--..}, at: [<ffffffff81044da1>] flush_workqueue+0x0/0xa6

but task is already holding lock:
 (rtnl_mutex){--..}, at: [<ffffffff8122627b>] rtnetlink_rcv+0x1a/0x33

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (rtnl_mutex){--..}:
       [<ffffffff81054b58>] __lock_acquire+0xbd3/0xd63
       [<ffffffff81054d46>] lock_acquire+0x5e/0x78
       [<ffffffff8129fa0b>] mutex_lock_nested+0xf7/0x295
       [<ffffffff8122625f>] rtnl_lock+0x12/0x14
       [<ffffffff812271a8>] linkwatch_event+0x9/0x27
       [<ffffffff81044300>] run_workqueue+0xfc/0x203
       [<ffffffff810444e7>] worker_thread+0xe0/0xf1
       [<ffffffff81047aaf>] kthread+0x49/0x76
       [<ffffffff8100cf78>] child_rip+0xa/0x12
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #1 ((linkwatch_work).work){--..}:
       [<ffffffff81054b58>] __lock_acquire+0xbd3/0xd63
       [<ffffffff81054d46>] lock_acquire+0x5e/0x78
       [<ffffffff810442fa>] run_workqueue+0xf6/0x203
       [<ffffffff810444e7>] worker_thread+0xe0/0xf1
       [<ffffffff81047aaf>] kthread+0x49/0x76
       [<ffffffff8100cf78>] child_rip+0xa/0x12
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #0 (events){--..}:
       [<ffffffff81054a5b>] __lock_acquire+0xad6/0xd63
       [<ffffffff81054d46>] lock_acquire+0x5e/0x78
       [<ffffffff81044dfc>] flush_workqueue+0x5b/0xa6
       [<ffffffff81044e57>] flush_scheduled_work+0x10/0x12
       [<ffffffff881f3ed6>] tulip_down+0x2c/0x26f [tulip]
       [<ffffffff881f4b63>] tulip_close+0x32/0x171 [tulip]
       [<ffffffff8121d8cf>] dev_close+0x62/0x83
       [<ffffffff8121d58e>] dev_change_flags+0xaf/0x172
       [<ffffffff81225132>] do_setlink+0x276/0x338
       [<ffffffff81225308>] rtnl_setlink+0x114/0x116
       [<ffffffff8122646c>] rtnetlink_rcv_msg+0x1d8/0x1f9
       [<ffffffff8123660a>] netlink_rcv_skb+0x3e/0xac
       [<ffffffff8122628a>] rtnetlink_rcv+0x29/0x33
       [<ffffffff8123605d>] netlink_unicast+0x1fe/0x26b
       [<ffffffff81236394>] netlink_sendmsg+0x2ca/0x2dd
       [<ffffffff81210523>] sock_sendmsg+0xfd/0x120
       [<ffffffff81210718>] sys_sendmsg+0x1d2/0x23c
       [<ffffffff8100c1c7>] tracesys+0xdc/0xe1
       [<ffffffffffffffff>] 0xffffffffffffffff

other info that might help us debug this:

1 lock held by NetworkManager/2555:
 #0:  (rtnl_mutex){--..}, at: [<ffffffff8122627b>] rtnetlink_rcv+0x1a/0x33

stack backtrace:
Pid: 2555, comm: NetworkManager Not tainted 2.6.25-0.113.rc5.git2.fc9 #1

Call Trace:
 [<ffffffff81053cea>] print_circular_bug_tail+0x70/0x7b
 [<ffffffff81053b02>] ? print_circular_bug_entry+0x48/0x4f
 [<ffffffff81054a5b>] __lock_acquire+0xad6/0xd63
 [<ffffffff810537d6>] ? mark_held_locks+0x5c/0x77
 [<ffffffff812a14b1>] ? _spin_unlock_irq+0x2b/0x30
 [<ffffffff81012106>] ? native_sched_clock+0x50/0x6d
 [<ffffffff81054d46>] lock_acquire+0x5e/0x78
 [<ffffffff81044da1>] ? flush_workqueue+0x0/0xa6
 [<ffffffff81044dfc>] flush_workqueue+0x5b/0xa6
 [<ffffffff81044e57>] flush_scheduled_work+0x10/0x12
 [<ffffffff881f3ed6>] :tulip:tulip_down+0x2c/0x26f
 [<ffffffff81053967>] ? trace_hardirqs_on+0xf1/0x115
 [<ffffffff881f4b63>] :tulip:tulip_close+0x32/0x171
 [<ffffffff8121d8cf>] dev_close+0x62/0x83
 [<ffffffff8121d58e>] dev_change_flags+0xaf/0x172
 [<ffffffff81225132>] do_setlink+0x276/0x338
 [<ffffffff812a144e>] ? _read_unlock+0x26/0x2b
 [<ffffffff81225308>] rtnl_setlink+0x114/0x116
 [<ffffffff8122646c>] rtnetlink_rcv_msg+0x1d8/0x1f9
 [<ffffffff81226294>] ? rtnetlink_rcv_msg+0x0/0x1f9
 [<ffffffff8123660a>] netlink_rcv_skb+0x3e/0xac
 [<ffffffff8122628a>] rtnetlink_rcv+0x29/0x33
 [<ffffffff8123605d>] netlink_unicast+0x1fe/0x26b
 [<ffffffff81236394>] netlink_sendmsg+0x2ca/0x2dd
 [<ffffffff81210523>] sock_sendmsg+0xfd/0x120
 [<ffffffff812103ad>] ? sock_recvmsg+0x10e/0x133
 [<ffffffff81047dc7>] ? autoremove_wake_function+0x0/0x38
 [<ffffffff810545c7>] ? __lock_acquire+0x642/0xd63
 [<ffffffff81012106>] ? native_sched_clock+0x50/0x6d
 [<ffffffff81210f39>] ? move_addr_to_kernel+0x40/0x49
 [<ffffffff81218292>] ? verify_iovec+0x4f/0x91
 [<ffffffff81210718>] sys_sendmsg+0x1d2/0x23c
 [<ffffffff810ab076>] ? do_readv_writev+0x17e/0x193
 [<ffffffff81074096>] ? audit_syscall_entry+0x126/0x15a
 [<ffffffff8101346c>] ? syscall_trace_enter+0xb0/0xb4
 [<ffffffff8100c1c7>] tracesys+0xdc/0xe1

Version-Release number of selected component (if applicable):

2.6.25-0.113.rc5.git2.fc9
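
For reference, a minimal sketch of the inversion lockdep is flagging above (this is not the actual tulip/linkwatch code; the demo_* names are hypothetical, only the pattern and the kernel workqueue/mutex APIs are real): the netlink path holds rtnl_mutex when dev_close() reaches tulip_down(), which flushes the shared "events" workqueue, while a pending work item on that same workqueue (linkwatch_event) itself needs rtnl_mutex.

#include <linux/mutex.h>
#include <linux/workqueue.h>

static DEFINE_MUTEX(demo_rtnl_mutex);        /* stands in for rtnl_mutex */

/* Work item on the shared "events" workqueue; like linkwatch_event(),
 * it takes the RTNL-style mutex when it runs. */
static void demo_linkwatch_fn(struct work_struct *work)
{
        mutex_lock(&demo_rtnl_mutex);
        /* ... propagate link state changes ... */
        mutex_unlock(&demo_rtnl_mutex);
}
static DECLARE_WORK(demo_linkwatch, demo_linkwatch_fn);

/* Like rtnetlink_rcv() -> dev_close() -> tulip_down(): the mutex is
 * already held when the shared workqueue is flushed. */
static void demo_dev_close(void)
{
        mutex_lock(&demo_rtnl_mutex);        /* rtnl_lock() */
        schedule_work(&demo_linkwatch);      /* a link event is pending */
        flush_scheduled_work();              /* waits for demo_linkwatch_fn,
                                              * which needs the mutex we are
                                              * holding: deadlock */
        mutex_unlock(&demo_rtnl_mutex);
}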
Comment 1 Bill Nottingham 2008-03-18 20:36:34 EDT
Different lockdep spew with the -121 kernel build:

=======================================================
[ INFO: possible circular locking dependency detected ]
2.6.25-0.121.rc5.git4.fc9 #1
-------------------------------------------------------
iwl3945/23108 is trying to acquire lock:
 (rtnl_mutex){--..}, at: [<ffffffff8122654b>] rtnl_lock+0x12/0x14

but task is already holding lock:
 (&ifsta->work){--..}, at: [<ffffffff810442b5>] run_workqueue+0xb1/0x203

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #2 (&ifsta->work){--..}:
       [<ffffffff81054b58>] __lock_acquire+0xbd3/0xd63
       [<ffffffff81054d46>] lock_acquire+0x5e/0x78
       [<ffffffff810442fa>] run_workqueue+0xf6/0x203
       [<ffffffff810444e7>] worker_thread+0xe0/0xf1
       [<ffffffff81047aaf>] kthread+0x49/0x76
       [<ffffffff8100cf78>] child_rip+0xa/0x12
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #1 ((name)){--..}:
       [<ffffffff81054b58>] __lock_acquire+0xbd3/0xd63
       [<ffffffff81054d46>] lock_acquire+0x5e/0x78
       [<ffffffff81044dfc>] flush_workqueue+0x5b/0xa6
       [<ffffffff8813f717>] ieee80211_stop+0x323/0x405 [mac80211]
       [<ffffffff8121dbc9>] dev_close+0x62/0x83
       [<ffffffff8121d888>] dev_change_flags+0xaf/0x172
       [<ffffffff8122541e>] do_setlink+0x276/0x338
       [<ffffffff812255f4>] rtnl_setlink+0x114/0x116
       [<ffffffff81226758>] rtnetlink_rcv_msg+0x1d8/0x1f9
       [<ffffffff812368f6>] netlink_rcv_skb+0x3e/0xac
       [<ffffffff81226576>] rtnetlink_rcv+0x29/0x33
       [<ffffffff81236349>] netlink_unicast+0x1fe/0x26b
       [<ffffffff81236680>] netlink_sendmsg+0x2ca/0x2dd
       [<ffffffff81210820>] sock_sendmsg+0xfd/0x120
       [<ffffffff81210a15>] sys_sendmsg+0x1d2/0x23c
       [<ffffffff8100c1c7>] tracesys+0xdc/0xe1
       [<ffffffffffffffff>] 0xffffffffffffffff

-> #0 (rtnl_mutex){--..}:
       [<ffffffff81054a5b>] __lock_acquire+0xad6/0xd63
       [<ffffffff81054d46>] lock_acquire+0x5e/0x78
       [<ffffffff812a0163>] mutex_lock_nested+0xf7/0x295
       [<ffffffff8122654b>] rtnl_lock+0x12/0x14
       [<ffffffff88148894>] ieee80211_associated+0x1a0/0x1ee [mac80211]
       [<ffffffff88148f51>] ieee80211_rx_mgmt_assoc_resp+0x66f/0x681 [mac80211]
       [<ffffffff8814a369>] ieee80211_sta_work+0x706/0x1800 [mac80211]
       [<ffffffff81044300>] run_workqueue+0xfc/0x203
       [<ffffffff810444e7>] worker_thread+0xe0/0xf1
       [<ffffffff81047aaf>] kthread+0x49/0x76
       [<ffffffff8100cf78>] child_rip+0xa/0x12
       [<ffffffffffffffff>] 0xffffffffffffffff

other info that might help us debug this:

2 locks held by iwl3945/23108:
 #0:  ((name)){--..}, at: [<ffffffff810442b5>] run_workqueue+0xb1/0x203
 #1:  (&ifsta->work){--..}, at: [<ffffffff810442b5>] run_workqueue+0xb1/0x203

stack backtrace:
Pid: 23108, comm: iwl3945 Not tainted 2.6.25-0.121.rc5.git4.fc9 #1

Call Trace:
 [<ffffffff81053cea>] print_circular_bug_tail+0x70/0x7b
 [<ffffffff81053b02>] ? print_circular_bug_entry+0x48/0x4f
 [<ffffffff81054a5b>] __lock_acquire+0xad6/0xd63
 [<ffffffff812a1c09>] ? _spin_unlock_irq+0x2b/0x30
 [<ffffffff81054d46>] lock_acquire+0x5e/0x78
 [<ffffffff8122654b>] ? rtnl_lock+0x12/0x14
 [<ffffffff812a0163>] mutex_lock_nested+0xf7/0x295
 [<ffffffff8122654b>] ? rtnl_lock+0x12/0x14
 [<ffffffff81045d19>] ? synchronize_rcu+0x35/0x3c
 [<ffffffff8122654b>] rtnl_lock+0x12/0x14
 [<ffffffff88148894>] :mac80211:ieee80211_associated+0x1a0/0x1ee
 [<ffffffff88148f51>] :mac80211:ieee80211_rx_mgmt_assoc_resp+0x66f/0x681
 [<ffffffff810537d6>] ? mark_held_locks+0x5c/0x77
 [<ffffffff81053967>] ? trace_hardirqs_on+0xf1/0x115
 [<ffffffff8814a369>] :mac80211:ieee80211_sta_work+0x706/0x1800
 [<ffffffff8104ae7c>] ? ktime_get_ts+0x46/0x4b
 [<ffffffff81012106>] ? native_sched_clock+0x50/0x6d
 [<ffffffff81012106>] ? native_sched_clock+0x50/0x6d
 [<ffffffff81012106>] ? native_sched_clock+0x50/0x6d
 [<ffffffff81030c13>] ? hrtick_set+0x8b/0xfc
 [<ffffffff81012106>] ? native_sched_clock+0x50/0x6d
 [<ffffffff81012106>] ? native_sched_clock+0x50/0x6d
 [<ffffffff81012106>] ? native_sched_clock+0x50/0x6d
 [<ffffffff812a1c09>] ? _spin_unlock_irq+0x2b/0x30
 [<ffffffff81044300>] run_workqueue+0xfc/0x203
 [<ffffffff88149c63>] ? :mac80211:ieee80211_sta_work+0x0/0x1800
 [<ffffffff810444e7>] worker_thread+0xe0/0xf1
 [<ffffffff81047dc7>] ? autoremove_wake_function+0x0/0x38
 [<ffffffff81044407>] ? worker_thread+0x0/0xf1
 [<ffffffff81047aaf>] kthread+0x49/0x76
 [<ffffffff8100cf78>] child_rip+0xa/0x12
 [<ffffffff8100c68f>] ? restore_args+0x0/0x30
 [<ffffffff81047a66>] ? kthread+0x0/0x76
 [<ffffffff8100cf6e>] ? child_rip+0x0/0x12
Comment 2 Bug Zapper 2008-05-14 02:02:08 EDT
Changing version to '9' as part of upcoming Fedora 9 GA.
More information and the reason for this action are here:
http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Comment 3 Bug Zapper 2009-06-09 19:45:32 EDT
This message is a reminder that Fedora 9 is nearing its end of life.
Approximately 30 (thirty) days from now Fedora will stop maintaining
and issuing updates for Fedora 9.  It is Fedora's policy to close all
bug reports from releases that are no longer maintained.  At that time
this bug will be closed as WONTFIX if it remains open with a Fedora 
'version' of '9'.

Package Maintainer: If you wish for this bug to remain open because you
plan to fix it in a currently maintained version, simply change the 'version' 
to a later Fedora version prior to Fedora 9's end of life.

Bug Reporter: Thank you for reporting this issue and we are sorry that 
we may not be able to fix it before Fedora 9 reaches end of life.  If you 
would still like to see this bug fixed and are able to reproduce it 
against a later version of Fedora please change the 'version' of this 
bug to the applicable version.  If you are unable to change the version, 
please add a comment here and someone will do it for you.

Although we aim to fix as many bugs as possible during every release's 
lifetime, sometimes those efforts are overtaken by events.  Often a 
more recent Fedora release includes newer upstream software that fixes 
bugs or makes them obsolete.

The process we are following is described here: 
http://fedoraproject.org/wiki/BugZappers/HouseKeeping
Comment 4 Bill Nottingham 2009-06-10 11:15:52 EDT
Long gone.
