Bug 127032 - lock_dlm panics when running bonnie++ on both nodes
Status: CLOSED WORKSFORME
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: dlm
Version: 4
Platform: All Linux
Priority: medium
Severity: medium
Assigned To: David Teigland
QA Contact: Cluster QE
Reported: 2004-06-30 16:09 EDT by Brian Jackson
Modified: 2009-04-16 16:29 EDT

Last Closed: 2005-01-10 12:37:07 EST
Attachments: None
Description Brian Jackson 2004-06-30 16:09:14 EDT

Description of problem:
I set up two VMware guest nodes; one of them has an extra disk that is exported via GNBD. I run bonnie++ on both nodes. The node that imports the GNBD device gets about 80% of the way through a bonnie++ run, then panics. The other node is fine and performs the filesystem recovery as expected. I get this message on the console:
Kernel panic: lock_dlm:  Record message above and reboot.

And in syslog I can see:
Jun 30 09:19:43 gfs-test-2 gnbd0: request c48cff4c still in use (2), waiting
Jun 30 09:19:44 gfs-test-2 gnbd0: request c48cff4c still in use (2), waiting
Jun 30 09:26:46 gfs-test-2 gnbd0: request ce68d68c still in use (2), waiting
Jun 30 09:29:36 gfs-test-2 gnbd0: request cc81fc2c still in use (2), waiting
Jun 30 09:44:48 gfs-test-2 gnbd0: request c6645a4c still in use (2), waiting
Jun 30 09:45:22 gfs-test-2 gnbd0: request c6645eac still in use (2), waiting
Jun 30 09:45:51 gfs-test-2 gnbd0: request cf48886c still in use (2), waiting
Jun 30 09:50:50 gfs-test-2 CMAN: Being told to leave the cluster by node 1
Jun 30 09:50:50 gfs-test-2 CMAN: we are leaving the cluster
Jun 30 09:50:50 gfs-test-2 SM: 00000001 sm_stop: SG still joined
Jun 30 09:50:50 gfs-test-2 SM: 01000002 sm_stop: SG still joined
Jun 30 09:50:50 gfs-test-2 SM: 02000004 sm_stop: SG still joined
Jun 30 09:50:50 gfs-test-2 gnbd_monitor[5935]: ERROR [gnbd_monitor.c:308] lost connection to cluster manager
Jun 30 09:50:51 gfs-test-2 start c 1 type 2 e 5
Jun 30 09:50:51 gfs-test-2 claim_jid 0
Jun 30 09:50:51 gfs-test-2 recovery_done jid 0 msg 309
Jun 30 09:50:51 gfs-test-2 recovery_done 0,2 f 18
Jun 30 09:50:51 gfs-test-2 recovery_done jid 1 msg 309
Jun 30 09:50:51 gfs-test-2 start c 2 type 2 e 7
Jun 30 09:50:51 gfs-test-2
Jun 30 09:50:51 gfs-test-2 lock_dlm:  Assertion failed on line 363 of file fs/gfs_locking/lock_dlm/lock.c
Jun 30 09:50:51 gfs-test-2 lock_dlm:  assertion:  "!error"
Jun 30 09:50:51 gfs-test-2 lock_dlm:  time = 11966493
Jun 30 09:50:51 gfs-test-2 gfs1: num=8,0 err=-22 cur=0 req=5 lkf=1c
Jun 30 09:50:51 gfs-test-2
Jun 30 09:50:51 gfs-test-2 Kernel panic: lock_dlm:  Record message above and reboot.
Jun 30 09:50:51 gfs-test-2
Jun 30 09:50:55 gfs-test-2 start c 1 type 2 e 5
Jun 30 09:50:55 gfs-test-2 claim_jid 0
Jun 30 09:50:55 gfs-test-2 recovery_done jid 0 msg 309
Jun 30 09:50:55 gfs-test-2 recovery_done 0,2 f 18
Jun 30 09:50:55 gfs-test-2 recovery_done jid 1 msg 309
Jun 30 09:50:55 gfs-test-2 start c 2 type 2 e 7
Jun 30 09:50:55 gfs-test-2
Jun 30 09:50:55 gfs-test-2 lock_dlm:  Assertion failed on line 363 of file fs/gfs_locking/lock_dlm/lock.c
Jun 30 09:50:55 gfs-test-2 lock_dlm:  assertion:  "!error"
Jun 30 09:50:55 gfs-test-2 lock_dlm:  time = 11970465
Jun 30 09:50:55 gfs-test-2 gfs1: num=2,5e2b7 err=-22 cur=5 req=0 lkf=4
Jun 30 09:50:55 gfs-test-2
Jun 30 09:50:55 gfs-test-2 Kernel panic: lock_dlm:  Record message above and reboot.
Jun 30 09:50:55 gfs-test-2
Jun 30 09:51:48 gfs-test-2 gnbd0: request ce68d04c still in use (2), waiting
Jun 30 09:51:53 gfs-test-2 gnbd0: request ce7edccc still in use (2), waiting
Jun 30 09:52:10 gfs-test-2 start c 1 type 2 e 5
Jun 30 09:52:10 gfs-test-2 claim_jid 0
Jun 30 09:52:10 gfs-test-2 recovery_done jid 0 msg 309
Jun 30 09:52:10 gfs-test-2 recovery_done 0,2 f 18
Jun 30 09:52:10 gfs-test-2 recovery_done jid 1 msg 309
Jun 30 09:52:10 gfs-test-2 start c 2 type 2 e 7
Jun 30 09:52:10 gfs-test-2
Jun 30 09:52:10 gfs-test-2 lock_dlm:  Assertion failed on line 363 of file fs/gfs_locking/lock_dlm/lock.c
Jun 30 09:52:10 gfs-test-2 lock_dlm:  assertion:  "!error"
Jun 30 09:52:10 gfs-test-2 lock_dlm:  time = 12045935
Jun 30 09:52:10 gfs-test-2 gfs1: num=3,87ffa err=-22 cur=-1 req=5 lkf=9
Jun 30 09:52:10 gfs-test-2
Jun 30 09:52:10 gfs-test-2 Kernel panic: lock_dlm:  Record message above and reboot.
Jun 30 09:52:10 gfs-test-2
Jun 30 09:52:31 gfs-test-2 gnbd0: request cf846b8c still in use (2), waiting
Jun 30 09:53:08 gfs-test-2 gnbd0: request cfc9b04c still in use (2), waiting
Jun 30 09:54:23 gfs-test-2 gnbd0: request ce68db8c still in use (2), waiting



Version-Release number of selected component (if applicable):
cvs from ~2004-06-29

How reproducible:
Always

Steps to Reproduce:
1. Set up two nodes in VMware.
2. Join the cluster and mount the GFS filesystem.
3. Run bonnie++ on both nodes at the same time.

Additional info:
Comment 1 Corey Marthaler 2004-06-30 16:25:31 EDT
This looks like it could be bug 127008.
Comment 2 David Teigland 2004-08-19 00:02:08 EDT
Not sure what to say about this one; I haven't had any trouble with bonnie++. A lot has changed since it was reported, so it may be worth trying again.
Comment 3 Kiersten (Kerri) Anderson 2004-11-04 10:16:22 EST
Updates with the proper version and component name.
Comment 4 Derek Anderson 2005-01-10 12:37:07 EST
I haven't been able to reproduce this either. Please reopen if you see this again with the latest code from CVS.
