Bug 206463 - kernel oops in dlm:in_nodes_gone
Status: CLOSED CURRENTRELEASE
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: dlm
Version: 4
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: David Teigland
QA Contact: Cluster QE
Blocks: 207487
Reported: 2006-09-14 11:05 EDT by Corey Marthaler
Modified: 2009-04-16 16:31 EDT (History)
CC: 4 users

Doc Type: Bug Fix
Last Closed: 2008-08-05 17:36:01 EDT


Attachments: None
Description Corey Marthaler 2006-09-14 11:05:46 EDT
Description of problem:
While mounting and unmounting 20 GFS filesystems on a ten-node cluster, link-02 panicked.

dlm: 8: request during recovery from 2
Unable to handle kernel paging request at 0000000000100100 RIP:
<ffffffffa01e1c4a>{:dlm:in_nodes_gone+4}
PML4 3477b067 PGD 3477c067 PMD 0
Oops: 0000 [1] SMP
CPU 0
Modules linked in: lock_dlm(U) gfs(U) lock_harness(U) parport_pc lp parport
autofs4 i2c_dev i2c_core dlm(U) cman(U) md5 ipv6 sunrpc ds yenta_socket
pcmcia_core button battery ac ohci_hcd hw_random tg3 floppy dm_snapshot dm_zero
dm_mirror ext3 jbd dm_mod qla2300 qla2xxx scsi_transport_fc sd_mod scsi_mod
Pid: 2521, comm: dlm_recvd Not tainted 2.6.9-42.0.2.ELsmp
RIP: 0010:[<ffffffffa01e1c4a>] <ffffffffa01e1c4a>{:dlm:in_nodes_gone+4}
RSP: 0018:000001003afd5cc0  EFLAGS: 00010293
RAX: 000001003be0d820 RBX: 0000000000000072 RCX: 0000000000000001
RDX: 0000000000100100 RSI: 0000000000000007 RDI: 0000010034873800
RBP: 00000100320ff000 R08: 0000000000001000 R09: 0000000000000000
R10: 000001003271f000 R11: 0000000000000246 R12: 0000010034873800
R13: 0000000000000007 R14: 00000100320ff000 R15: 0000010034873800
FS:  0000002a95562b00(0000) GS:ffffffff804e5180(0000) knlGS:0000000000000000
CS:  0010 DS: 0018 ES: 0018 CR0: 000000008005003b
CR2: 0000000000100100 CR3: 0000000000101000 CR4: 00000000000006e0
Process dlm_recvd (pid: 2521, threadinfo 000001003afd4000, task 000001003af7d7f0)
Stack: ffffffffa01dc4a8 0000000000000000 00000100320ff000 0000000000000000
       00000100320ff000 0000000000001000 ffffffffa01dcfe0 000001003271f000
       000000003afd5dc8 0000000700000000
Call Trace:<ffffffffa01dc4a8>{:dlm:add_to_requestqueue+84}
<ffffffffa01dcfe0>{:dlm:process_cluster_request+131}
       <ffffffffa01e58b0>{:dlm:rcom_process_message+1069}
       <ffffffffa01e14f7>{:dlm:midcomms_process_incoming_buffer+573}
       <ffffffff80135752>{autoremove_wake_function+0}
<ffffffffa01df633>{:dlm:receive_from_sock+640}
       <ffffffff8030b245>{_spin_lock_bh+1}
<ffffffff8014b4f0>{keventd_create_kthread+0}
       <ffffffffa01dfc30>{:dlm:dlm_recvd+289} <ffffffffa01dfb0f>{:dlm:dlm_recvd+0}
       <ffffffff8014b4c7>{kthread+200} <ffffffff80110f47>{child_rip+8}
       <ffffffff8014b4f0>{keventd_create_kthread+0} <ffffffff8014b3ff>{kthread+0}
       <ffffffff80110f3f>{child_rip+0}

Code: 48 8b 02 0f 18 08 48 8d 47 68 48 39 c2 74 14 48 8b 42 10 39
RIP <ffffffffa01e1c4a>{:dlm:in_nodes_gone+4} RSP <000001003afd5cc0>
CR2: 0000000000100100
 <0>Kernel panic - not syncing: Oops


Version-Release number of selected component (if applicable):
[root@link-02 ~]# uname -ar
Linux link-02 2.6.9-42.0.2.ELsmp #1 SMP Thu Aug 17 17:57:31 EDT 2006 x86_64
x86_64 x86_64 GNU/Linux
[root@link-02 ~]# rpm -q dlm
dlm-1.0.1-1
Comment 1 David Teigland 2006-09-14 12:23:46 EDT
I think that putting a spinlock around access to the ls->ls_nodes_gone
list will fix this.  That list is modified during mounting/unmounting
and traversed by dlm_recvd when a request is received; that means we
need a lock around it.
Comment 2 David Teigland 2006-09-14 17:45:20 EDT
Added the spinlock around ls_nodes_gone list.  Required some other
minor code changes to avoid doing other things under the new spinlock.

cvs commit: Examining .
Checking in dlm_internal.h;
/cvs/cluster/cluster/dlm-kernel/src/Attic/dlm_internal.h,v  <--  dlm_internal.h
new revision: 1.36.2.6; previous revision: 1.36.2.5
done
Checking in lockspace.c;
/cvs/cluster/cluster/dlm-kernel/src/Attic/lockspace.c,v  <--  lockspace.c
new revision: 1.19.2.10; previous revision: 1.19.2.9
done
Checking in nodes.c;
/cvs/cluster/cluster/dlm-kernel/src/Attic/nodes.c,v  <--  nodes.c
new revision: 1.10.2.3; previous revision: 1.10.2.2
done
Checking in recoverd.c;
/cvs/cluster/cluster/dlm-kernel/src/Attic/recoverd.c,v  <--  recoverd.c
new revision: 1.19.2.5; previous revision: 1.19.2.4
done
Comment 3 Kiersten (Kerri) Anderson 2006-09-14 17:59:07 EDT
Need this one in 4.5 (or sooner), so updated the flags.
Comment 5 Lenny Maiorani 2007-01-04 11:22:55 EST
I merged this patch into my RHEL4 U3 kernel and now get kernel panics when DLM
goes into "emergency shutdown". Fairly reproducible. Do you think this patch is
causing it? Here is some info from KDB 'dmesg' and 'bt'. I can probably
reproduce this if you need any other info.

<6>CMAN: Being told to leave the cluster by node 2
<6>CMAN: we are leaving the cluster. 
<4>WARNING: dlm_emergency_shutdown
<4>dlm: dlm_unlock: lkid 17090232 lockspace not found
<4>jvol1 move flags 1,0,0 ids 15,15,15
<4>crosswalk move flags 1,0,0 ids 11,11,11
<4>crosswalk move flags 0,1,0 ids 11,32,11
<4>crosswalk move use event 32
<4>crosswalk recover event 32
<4>crosswalk remove node 2
<4>crosswalk total nodes 2
<4>crosswalk rebuild resource directory
<4>crosswalk rebuilt 41 resources
<4>crosswalk purge requests
<4>crosswalk purged 0 requests
<4>crosswalk mark waiting requests
<4>crosswalk marked 0 requests
<4>crosswalk purge locks of departed nodes
<4>crosswalk purged 1 locks
<4>crosswalk update remastered resources
<4>crosswalk updated 5 resources
<4>crosswalk rebuild locks
<4>crosswalk rebuilt 0 locks
<4>crosswalk recover event 32 done
<4>crosswalk move flags 0,0,1 ids 11,32,32
<4>crosswalk process held requests
<4>crosswalk processed 0 requests
<4>crosswalk resend marked requests
<4>crosswalk resent 0 requests
<4>crosswalk recover event 32 finished
<4>crosswalk move flags 1,0,0 ids 32,32,32
<4>crosswalk add_to_requestq cmd 5 fr 3
<4>crosswalk add_to_requestq cmd 3 fr 3
<4>crosswalk move flags 0,1,0 ids 32,34,32
<4>crosswalk move use event 34
<4>crosswalk recover event 34
<4>27123 pr_start last_stop 0 last_start 8 last_finish 0
<4>27123 pr_start count 1 type 2 event 8 flags 250
<4>27123 claim_jid 0
<4>27123 pr_start 8 done 1
<4>27123 pr_finish flags 5b
<4>27104 recovery_done jid 0 msg 309 b
<4>27104 recovery_done nodeid 1 flg 18
<4>27123 pr_start last_stop 8 last_start 10 last_finish 8
<4>27123 pr_start count 2 type 2 event 10 flags 21b
<4>27104 recovery_done jid 1 msg 309 21b
<4>27104 recovery_done jid 2 msg 309 21b
<4>27104 recovery_done jid 3 msg 309 21b
<4>27104 recovery_done jid 4 msg 309 21b
<4>27104 recovery_done jid 5 msg 309 21b
<4>27104 recovery_done jid 6 msg 309 21b
<4>27104 recovery_done jid 7 msg 309 21b
<4>27104 recovery_done jid 8 msg 309 21b
<4>27104 recovery_done jid 9 msg 309 21b
<4>27104 recovery_done jid 10 msg 309 21b
<4>27104 recovery_done jid 11 msg 309 21b
<4>27104 recovery_done jid 12 msg 309 21b
<4>27104 recovery_done jid 13 msg 309 21b
<4>27123 pr_start delay done before omm 21b
<4>27123 pr_start 10 done 0
<4>27104 recovery_done jid 14 msg 309 101b
<4>27104 recovery_done jid 15 msg 309 101b
<4>27104 others_may_mount 101b
<4>27104 others_may_mount start_done 10 181b
<4>27122 pr_finish flags 181b
<4>27122 pr_start last_stop 10 last_start 12 last_finish 10
<4>27122 pr_start count 3 type 2 event 12 flags 1a1b
<4>27122 pr_start 12 done 1
<4>27122 pr_finish flags 181b
<4>27281 en plock 7,20168
<4>27281 req 7,20168 ex 0-0 lkf 2000 wait 1
<4>27281 ex plock 0
<4>27281 en punlock 7,20168
<4>27281 remove 7,20168
<4>27281 ex punlock 0
<4>27281 en plock 7,20168
<4>27281 req 7,20168 sh 2ac-2ac lkf 2000 wait 1
<4>27281 ex plock 0
<4>27281 en punlock 7,20168
<4>27281 remove 7,20168
<4>27281 ex punlock 0
<4>27281 en plock 7,20168
<4>27281 req 7,20168 ex 0-0 lkf 2000 wait 1
<4>27281 ex plock 0
<4>27281 en punlock 7,20168
<4>27281 remove 7,20168
<4>27281 ex punlock 0
<4>27281 en plock 7,20168
<4>27281 req 7,20168 sh 2ac-2ac lkf 2000 wait 1
<4>27281 ex plock 0
<4>27281 en punlock 7,20168
<4>27281 remove 7,20168
<4>27281 ex punlock 0
<4>27281 en plock 7,20168
<4>27281 req 7,20168 sh 1b8-1b8 lkf 2000 wait 1
<4>27281 ex plock 0
<4>27281 en punlock 7,20168
<4>27281 remove 7,20168
<4>27281 ex punlock 0
<4>27281 en plock 7,20168
<4>27281 req 7,20168 sh 280-280 lkf 2000 wait 1
<4>27281 ex plock 0
<4>27281 en punlock 7,20168
<4>27281 remove 7,20168
<4>27281 ex punlock 0
<4>28513 pr_start last_stop 0 last_start 14 last_finish 0
<4>28513 pr_start count 2 type 2 event 14 flags 250
<4>28513 claim_jid 1
<4>28513 pr_start 14 done 1
<4>28513 pr_finish flags 5a
<4>28499 recovery_done jid 1 msg 309 a
<4>28499 recovery_done nodeid 1 flg 18
<4>28513 pr_start last_stop 14 last_start 17 last_finish 14
<4>28513 pr_start count 3 type 2 event 17 flags 21a
<4>28513 pr_start 17 done 1
<4>28513 pr_finish flags 1a
<4>28568 pr_start last_stop 0 last_start 18 last_finish 0
<4>28568 pr_start count 2 type 2 event 18 flags 250
<4>28568 claim_jid 1
<4>28568 pr_start 18 done 1
<4>28568 pr_finish flags 5a
<4>28545 recovery_done jid 1 msg 309 a
<4>28545 recovery_done nodeid 1 flg 18
<4>28568 pr_start last_stop 18 last_start 20 last_finish 18
<4>28568 pr_start count 3 type 2 event 20 flags 21a
<4>28568 pr_start 20 done 1
<4>28568 pr_finish flags 1a
<4>28568 pr_start last_stop 20 last_start 25 last_finish 20
<4>28568 pr_start count 2 type 3 event 25 flags 21a
<4>28568 pr_start 25 done 1
<4>28568 pr_finish flags 1a
<4>28568 pr_start last_stop 25 last_start 27 last_finish 25
<4>28568 pr_start count 1 type 3 event 27 flags 21a
<4>28568 pr_start 27 done 1
<4>28568 pr_finish flags 1a
<4>31295 unmount flags a
<4>31295 release_mountgroup flags a
<4>27122 pr_start last_stop 12 last_start 31 last_finish 12
<4>27122 pr_start count 2 type 3 event 31 flags 1a1b
<4>27122 pr_start 31 done 1
<4>27123 pr_finish flags 181b
<4>27123 pr_start last_stop 31 last_start 33 last_finish 31
<4>27123 pr_start count 1 type 3 event 33 flags 1a1b
<4>27123 pr_start 33 done 1
<4>27123 pr_finish flags 181b
<4>
<4>lock_dlm:  Assertion failed on line 357 of file fs/gfs_locking/lock_dlm/lock.c
<4>lock_dlm:  assertion:  "!error"
<4>lock_dlm:  time = 4320782924
<4>snapvol: error=-22 num=5,9f9461 lkf=0 flags=84
<4>
<6>tg3: eth10: Link is down.
<6>tg3: eth11: Link is down.
<4>----------- [cut here ] --------- [please bite here ] ---------
<1>Kernel BUG at lock:357
<0>invalid operand: 0000 [1] SMP 


[1]kdb> bt
Stack traceback for pid 31410
0x00000100df128030    31410    31216  1    1   R  0x00000100df128430 *umount
RSP           RIP                Function (args)
0x101afe5bc90 0xffffffffa025cc95 [lock_dlm]do_dlm_unlock+0xa7 (0x1,
0xffffffffa0222941, 0x1, 0xffffffffa02193c3, 0x10079a47f80)
0x101afe5bcd8 0xffffffffa025d012 [lock_dlm]lm_dlm_unlock+0x15
(0xffffff0000d676d0, 0x100985832b0, 0x1, 0xffffffff80134304, 0xffffff0000d676c8)
0x101afe5bdb8 0xffffffffa0219bfd [gfs]examine_bucket+0x94 (0x56,
0xffffff0000d9f600, 0xffffff0000d67000, 0xffffff0000d9f620, 0x100da7e5800)
0x101afe5bdf8 0xffffffffa021afb9 [gfs]gfs_gl_hash_clear+0x3b
(0xffffff0000d97478, 0x0, 0x100da7e5800, 0xffffffffa02595a0, 0x100c75cc940)
0x101afe5be38 0xffffffffa02315d0 [gfs]gfs_put_super+0x31e (0x100da7e5800,
0x100c75cc040, 0x1000)
0x101afe5be68 0xffffffff8017dcd5 generic_shutdown_super+0xca
(0xffffffff801cd92c, 0x100da7e5878, 0x100da7e5800, 0xffffffffa02592e0,
0x100da7e5800)
0x101afe5be88 0xffffffffa022f029 [gfs]gfs_kill_sb+0x3a
0x101afe5bf58 0xffffffff80110c61 error_exit
Comment 6 David Teigland 2007-01-04 12:05:11 EST
The new spinlock around ls_nodes_gone is not related to the problem
you're seeing.  Your "emergency shutdown" problem is caused by the
node being kicked out of the cluster by cman.  That's usually from a
network connectivity problem.
Comment 7 Lenny Maiorani 2007-01-04 13:53:30 EST
Ignore comment #5; I am applying the patch from attachment #134957.
Comment 8 David Teigland 2007-01-04 13:56:32 EST
You should not use attachment #134957; it was only temporary debugging
code for tracking another bug (that has since been fixed).
Comment 9 Corey Marthaler 2007-11-05 11:18:15 EST
Marking verified.
Comment 10 Chris Feist 2008-08-05 17:36:01 EDT
Bug fixed in latest release.
