Bug 459820 - gfs mount attempt dead locks after recovery
Status: CLOSED INSUFFICIENT_DATA
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: gfs-kmod
Version: 5.3
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: rc
Assigned To: Robert Peterson
QA Contact: Cluster QE
Reported: 2008-08-22 13:33 EDT by Corey Marthaler
Modified: 2010-01-11 22:29 EST
CC: 5 users

Doc Type: Bug Fix
Last Closed: 2009-03-05 16:30:29 EST

Attachments
log from taft-01 (263.29 KB, text/plain), 2008-08-22 15:18 EDT, Corey Marthaler
log from taft-02 (294.08 KB, text/plain), 2008-08-22 15:19 EDT, Corey Marthaler
log from taft-03 (213.67 KB, text/plain), 2008-08-22 15:20 EDT, Corey Marthaler
log from taft-04 (293.56 KB, text/plain), 2008-08-22 15:21 EDT, Corey Marthaler
2nd log from taft-01 (321.30 KB, text/plain), 2008-08-25 13:15 EDT, Corey Marthaler
2nd log from taft-02 (317.58 KB, text/plain), 2008-08-25 13:16 EDT, Corey Marthaler
2nd log from taft-03 (312.17 KB, text/plain), 2008-08-25 13:17 EDT, Corey Marthaler
2nd log from taft-04 (268.03 KB, text/plain), 2008-08-25 13:18 EDT, Corey Marthaler

Description Corey Marthaler 2008-08-22 13:33:10 EDT
Description of problem:
While running revolver on the 4 node taft cluster with 2 GFS filesystems on top of cmirrors, the first mount attempt hung.

================================================================================
Senario iteration 0.1 started at Fri Aug 22 12:07:01 CDT 2008
Sleeping 2 minute(s) to let the I/O get its lock count up...
Senario: DLM kill lowest nodeid (cmirror server)

Those picked to face the revolver... taft-01
Feeling lucky taft-01? Well do ya? Go'head make my day...
Didn't receive heartbeat for 2 seconds

Verify that taft-01 has been removed from cluster on remaining nodes
Verifying that the dueler(s) are alive
still not all alive, sleeping another 10 seconds
still not all alive, sleeping another 10 seconds
still not all alive, sleeping another 10 seconds
still not all alive, sleeping another 10 seconds
<ignore name="taft-01_0" pid="21551" time="Fri Aug 22 12:11:11 2008" type="cmd" duration="252" ec="127" />
<ignore name="taft-01_1" pid="21553" time="Fri Aug 22 12:11:11 2008" type="cmd" duration="252" ec="127" />
still not all alive, sleeping another 10 seconds
still not all alive, sleeping another 10 seconds
All killed nodes are back up (able to be pinged), making sure they're qarshable...
All killed nodes are now qarshable

Verifying that recovery properly took place (on the nodes that stayed in the cluster)
checking that all of the cluster nodes are now/still cman members...
checking fence recovery (state of each service)...
checking dlm recovery (state of each service)...
checking gfs recovery (state of each service)...
checking gfs2 recovery (state of each service)...
checking fence recovery (node membership of each service)...
checking dlm recovery (node membership of each service)...
checking gfs recovery (node membership of each service)...
checking gfs2 recovery (node membership of each service)...

Verifying that clvmd was started properly on the dueler(s)

mounting /dev/mapper/taft-mirror1 on /mnt/taft1 on taft-01


Version-Release number of selected component (if applicable):
2.6.18-98.el5
cman-2.0.87-1.el5.test.plock.3
cmirror-1.1.22-1.el5
gfs-utils-0.1.17-1.el5
kmod-gfs-0.1.23-5.el5


Trace of the stuck mount:
Aug 22 12:21:11 taft-01 kernel: mount.gfs     D ffffffff801436eb     0  6812   6811                     (NOTLB)
Aug 22 12:21:11 taft-01 kernel:  ffff81021c1a58b8 0000000000000086 ffff81000101d480 0000000010000000
Aug 22 12:21:11 taft-01 kernel:  ffff81021c1a58d8 0000000000000007 ffff81021b8a2100 ffff8101fff15100
Aug 22 12:21:11 taft-01 kernel:  00000029e0e5e3e5 0000000000003c7d ffff81021b8a22e8 00000001ffd6c0c0
Aug 22 12:21:11 taft-01 kernel: Call Trace:
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff80064654>] __down_read+0x7a/0x92
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff884b7cea>] :dlm:dlm_lock+0x4b/0x129
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff8009de2a>] keventd_create_kthread+0x0/0xc4
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff8855a8d3>] :lock_dlm:gdlm_do_lock+0x6c/0xd7
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff8855a556>] :lock_dlm:gdlm_ast+0x0/0x311
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff8855a2c1>] :lock_dlm:gdlm_bast+0x0/0x8d
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff885f7fb1>] :gfs:gfs_glock_xmote_th+0x14e/0x1cc
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff885f66b8>] :gfs:run_queue+0x276/0x35d
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff885f6dda>] :gfs:gfs_glock_nq+0x37f/0x3d6
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff886096d4>] :gfs:fill_super+0x0/0x624
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff885f6e47>] :gfs:gfs_glock_nq_init+0x16/0x2a
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff885f83d5>] :gfs:gfs_glock_nq_num+0x3b/0x90
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff88608dfb>] :gfs:init_locking+0x134/0x207
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff88609af0>] :gfs:fill_super+0x41c/0x624
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff800dc0d6>] get_sb_bdev+0x10a/0x164
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff800dba73>] vfs_kern_mount+0x93/0x11a
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff800dbb3c>] do_kern_mount+0x36/0x4d
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff800e5253>] do_mount+0x68c/0x6fc
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff8009e042>] autoremove_wake_function+0x0/0x2e
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff800455b5>] do_sock_read+0xc0/0xcb
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff8020cf79>] sock_aio_read+0x4f/0x5e
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff8000caa4>] do_sync_read+0xc7/0x104
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff8002e2a5>] __wake_up+0x38/0x4f
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff800c5ea6>] zone_statistics+0x3e/0x6d
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff8000f083>] __alloc_pages+0x65/0x2ce
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff8004bd63>] sys_mount+0x8a/0xcd
Aug 22 12:21:11 taft-01 kernel:  [<ffffffff8005d28d>] tracesys+0xd5/0xe0


I'll attach full kern dump and messages...
Comment 1 Corey Marthaler 2008-08-22 15:18:03 EDT
Created attachment 314832 [details]
log from taft-01
Comment 2 Corey Marthaler 2008-08-22 15:19:32 EDT
Created attachment 314833 [details]
log from taft-02
Comment 3 Corey Marthaler 2008-08-22 15:20:08 EDT
Created attachment 314834 [details]
log from taft-03
Comment 4 Corey Marthaler 2008-08-22 15:21:06 EDT
Created attachment 314835 [details]
log from taft-04
Comment 5 Corey Marthaler 2008-08-25 12:33:58 EDT
Reproduced this again while running with cmirrors. This shouldn't block beta, but should probably be fixed for 5.3 rc. I'll once again attach the kern dumps.

================================================================================
Senario iteration 8.3 started at Fri Aug 22 18:14:21 CDT 2008
Sleeping 2 minute(s) to let the I/O get its lock count up...
Senario: DLM kill Quorum minus one

Those picked to face the revolver... taft-04
Feeling lucky taft-04? Well do ya? Go'head make my day...
Didn't receive heartbeat for 2 seconds

Verify that taft-04 has been removed from cluster on remaining nodes
Verifying that the dueler(s) are alive
still not all alive, sleeping another 10 seconds
still not all alive, sleeping another 10 seconds
still not all alive, sleeping another 10 seconds
still not all alive, sleeping another 10 seconds
<ignore name="taft-04_0" pid="17464" time="Fri Aug 22 18:18:29 2008" type="cmd" duration="871" ec="127" />
<ignore name="taft-04_1" pid="17468" time="Fri Aug 22 18:18:29 2008" type="cmd" duration="871" ec="127" />
still not all alive, sleeping another 10 seconds
still not all alive, sleeping another 10 seconds
All killed nodes are back up (able to be pinged), making sure they're qarshable...
All killed nodes are now qarshable

Verifying that recovery properly took place (on the nodes that stayed in the cluster)
checking that all of the cluster nodes are now/still cman members...
checking fence recovery (state of each service)...
checking dlm recovery (state of each service)...
checking gfs recovery (state of each service)...
checking gfs2 recovery (state of each service)...
checking fence recovery (node membership of each service)...
checking dlm recovery (node membership of each service)...
checking gfs recovery (node membership of each service)...
checking gfs2 recovery (node membership of each service)...

Verifying that clvmd was started properly on the dueler(s)

mounting /dev/mapper/taft-mirror1 on /mnt/taft1 on taft-04
mounting /dev/mapper/taft-mirror2 on /mnt/taft2 on taft-04
Comment 6 Corey Marthaler 2008-08-25 13:15:58 EDT
Created attachment 314933 [details]
2nd log from taft-01
Comment 7 Corey Marthaler 2008-08-25 13:16:31 EDT
Created attachment 314934 [details]
2nd log from taft-02
Comment 8 Corey Marthaler 2008-08-25 13:17:00 EDT
Created attachment 314935 [details]
2nd log from taft-03
Comment 9 Corey Marthaler 2008-08-25 13:18:17 EDT
Created attachment 314936 [details]
2nd log from taft-04
Comment 10 Robert Peterson 2008-08-26 14:22:44 EDT
It looks to me like mount is waiting for gfs, and gfs is waiting for
dlm.  I'm not sure what dlm is waiting for.  Dave: Can you take a look
at the first call trace for taft-01 and the last call trace for
taft-04?  They look very similar.
Comment 11 David Teigland 2008-08-26 16:08:04 EDT
The dlm_recoverd threads are waiting for recovery messages in most of the logs.
This means everything that uses the dlm will be blocked waiting for dlm recovery
to finish.  We should reproduce this after adding <dlm log_debug="1"/> to cluster.conf. Let me know when it gets stuck so I can give it a look before any sysrq.
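For reference, a minimal sketch of where that element sits in /etc/cluster/cluster.conf. The cluster name, config_version value, and node entries below are placeholders, not taken from this report:

```xml
<?xml version="1.0"?>
<!-- Sketch only: cluster name and config_version are placeholders. -->
<cluster name="taft" config_version="2">
  <!-- Enables dlm kernel debug logging so recovery messages appear in syslog. -->
  <dlm log_debug="1"/>
  <clusternodes>
    <!-- existing node definitions unchanged -->
  </clusternodes>
</cluster>
```

On RHEL 5, bumping config_version and running `ccs_tool update /etc/cluster/cluster.conf` is the usual way to push the edited file out to the other nodes before rerunning the test.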
Comment 12 Robert Peterson 2008-08-28 11:36:44 EDT
Setting needinfo flag to collect the log_debug data suggested by Dave.
Comment 14 Corey Marthaler 2008-09-04 15:54:43 EDT
Just a note that I'm now having trouble reproducing this issue. :(
Comment 15 Corey Marthaler 2008-09-09 15:30:49 EDT
Taking off the proposed blocker list until this can be reproduced reliably.
Comment 16 Dean Jansa 2009-03-05 14:57:52 EST
I have been able to hit this a few times while testing the latest 5.2.Z.  I will attempt to repro with dlm log_debug.
Comment 17 Robert Peterson 2009-03-05 16:30:29 EST
Dave T. looked at Dean's problem and determined that it's not the same
thing as this bug.  This bug has been in NEEDINFO for six months now,
so I'm going to close it.  A new bug will likely be opened for Dean's
new issue.
Comment 18 Robert Peterson 2009-03-05 16:41:37 EST
Additional info: The problem Dean hit is apparently this one:

Bug 442451 - GFS does not recover the journal when one node does a withdraw of the same filesystem

The fix is in 5.3 but Dean was testing 5.2.z.
