Description of problem:
Ran into this in my log this morning. Java_VM lock and backtrace from my kernel.

Version-Release number of selected component (if applicable):
2.6.17-1.2647.fc6

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
Sep 17 23:12:10 scrappy kernel:
Sep 17 23:12:10 scrappy kernel: =============================================
Sep 17 23:12:10 scrappy kernel: [ INFO: possible recursive locking detected ]
Sep 17 23:12:10 scrappy kernel: 2.6.17-1.2647.fc6 #1
Sep 17 23:12:10 scrappy kernel: ---------------------------------------------
Sep 17 23:12:10 scrappy kernel: java_vm/2649 is trying to acquire lock:
Sep 17 23:12:10 scrappy kernel:  (slock-AF_INET6){-+..}, at: [<c05b392e>] sk_clone+0xd4/0x2d8
Sep 17 23:12:10 scrappy kernel:
Sep 17 23:12:10 scrappy kernel: but task is already holding lock:
Sep 17 23:12:10 scrappy kernel:  (slock-AF_INET6){-+..}, at: [<f8a244c9>] tcp_v6_rcv+0x327/0x736 [ipv6]
Sep 17 23:12:10 scrappy kernel:
Sep 17 23:12:10 scrappy kernel: other info that might help us debug this:
Sep 17 23:12:10 scrappy kernel: 1 lock held by java_vm/2649:
Sep 17 23:12:10 scrappy kernel:  #0:  (slock-AF_INET6){-+..}, at: [<f8a244c9>] tcp_v6_rcv+0x327/0x736 [ipv6]
Sep 17 23:12:10 scrappy kernel:
Sep 17 23:12:10 scrappy kernel: stack backtrace:
Sep 17 23:12:10 scrappy kernel:  [<c04051ee>] show_trace_log_lvl+0x58/0x171
Sep 17 23:12:10 scrappy kernel:  [<c0405802>] show_trace+0xd/0x10
Sep 17 23:12:10 scrappy kernel:  [<c040591b>] dump_stack+0x19/0x1b
Sep 17 23:12:10 scrappy kernel:  [<c043b9e1>] __lock_acquire+0x778/0x99c
Sep 17 23:12:10 scrappy kernel:  [<c043c176>] lock_acquire+0x4b/0x6d
Sep 17 23:12:10 scrappy kernel:  [<c061539b>] _spin_lock+0x19/0x28
Sep 17 23:12:10 scrappy kernel:  [<c05b392e>] sk_clone+0xd4/0x2d8
Sep 17 23:12:10 scrappy kernel:  [<c05dc49b>] inet_csk_clone+0xf/0x72
Sep 17 23:12:10 scrappy kernel:  [<c05ed2d9>] tcp_create_openreq_child+0x1b/0x3a1
Sep 17 23:12:10 scrappy kernel:  [<f8a23155>] tcp_v6_syn_recv_sock+0x271/0x5b3 [ipv6]
Sep 17 23:12:10 scrappy kernel:  [<c05ed834>] tcp_check_req+0x1d5/0x2e9
Sep 17 23:12:10 scrappy kernel:  [<f8a22441>] tcp_v6_do_rcv+0x142/0x340 [ipv6]
Sep 17 23:12:10 scrappy kernel:  [<f8a24883>] tcp_v6_rcv+0x6e1/0x736 [ipv6]
Sep 17 23:12:10 scrappy kernel:  [<f8a0aa6f>] ip6_input+0x1c3/0x296 [ipv6]
Sep 17 23:12:10 scrappy kernel:  [<f8a0afdf>] ipv6_rcv+0x1d2/0x21f [ipv6]
Sep 17 23:12:10 scrappy kernel:  [<c05b9ab6>] netif_receive_skb+0x2e2/0x366
Sep 17 23:12:10 scrappy kernel:  [<c05bb42f>] process_backlog+0x99/0xfa
Sep 17 23:12:10 scrappy kernel:  [<c05bb612>] net_rx_action+0x9d/0x196
Sep 17 23:12:10 scrappy kernel:  [<c04293bf>] __do_softirq+0x78/0xf2
Sep 17 23:12:10 scrappy kernel:  [<c040668b>] do_softirq+0x5a/0xbe
Sep 17 23:12:10 scrappy kernel:  [<c04291b6>] local_bh_enable_ip+0xa9/0xcf
Sep 17 23:12:10 scrappy kernel:  [<c0615339>] _spin_unlock_bh+0x25/0x28
Sep 17 23:12:10 scrappy kernel:  [<c05b272f>] release_sock+0xb0/0xb8
Sep 17 23:12:10 scrappy kernel:  [<c05f5552>] inet_stream_connect+0x113/0x206
Sep 17 23:12:10 scrappy kernel:  [<c05b1692>] sys_connect+0x67/0x84
Sep 17 23:12:10 scrappy kernel:  [<c05b1d04>] sys_socketcall+0x8c/0x186
Sep 17 23:12:10 scrappy kernel:  [<c0403faf>] syscall_call+0x7/0xb
Sep 17 23:12:10 scrappy kernel: DWARF2 unwinder stuck at syscall_call+0x7/0xb
Sep 17 23:12:10 scrappy kernel: Leftover inexact backtrace:

Expected results:

Additional info:
I think this one might be fixed in the current tree (if you could retry with one of the 27xx kernels when they start appearing in rawhide that would be great). I'll add this to the lockdep tracker just in case.
I have not seen this problem since I reported it, and have been running the updated kernels pretty much as they come out, from then until today. I am running the most up-to-date kernel as of today and still have no problems. The odd thing is, I don't know how to make it happen; I'm guessing it happened while browsing a web page that used Java or something. So if you need to close it, go ahead, and I'll reopen it if it happens again and update the pertinent info (FC version, kernel, etc.).
*** This bug has been marked as a duplicate of 205487 ***