Bug 183383 - mount deadlock after recovery during regression tests (2)
Product: Red Hat Cluster Suite
Classification: Retired
Component: gulm
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Assigned To: Chris Feist
QA Contact: Cluster QE
Reported: 2006-02-28 14:18 EST by Chris Feist
Modified: 2009-04-16 16:02 EDT
CC: 3 users

Fixed In Version: RHBA-2007-0145
Doc Type: Bug Fix
Last Closed: 2007-05-10 17:27:52 EDT


External Trackers
Tracker: Red Hat Product Errata RHBA-2007:0145
Priority: normal
Status: SHIPPED_LIVE
Summary: gulm bug fix update
Last Updated: 2007-05-10 17:27:31 EDT

Description Chris Feist 2006-02-28 14:18:53 EST
Description of problem:
Cluster still locks up on recovery after several rounds of killing master and
slave gulm servers.

Version-Release number of selected component (if applicable):

How reproducible:
Takes a while.

Steps to Reproduce:
1.  Kill Master and Slave lots of times.
Actual results:
Cluster eventually hangs.

Expected results:
Cluster recovers successfully.

Additional info:
Comment 1 Kiersten (Kerri) Anderson 2006-05-04 11:37:25 EDT
Taking this off the blocker list; some of the issues have been fixed, but there
may still be outstanding problems.
Comment 2 Nate Straz 2006-05-23 18:05:27 EDT
I hit this today during RHEL4-U3 errata testing.  I was running gulm-1.0.6-0.
2 of 3 server nodes were shot.  It doesn't appear that the server that rejoined
to form quorum expired the locks it had prior to being shot.
Comment 3 Nate Straz 2006-06-14 08:33:55 EDT
I'm still hitting this in RHEL4-U4 testing.
Comment 4 Chris Feist 2006-07-26 12:09:31 EDT
Problem occurs if you kill enough masters for the remaining gulm server to lose
quorum.  It then may not fence all of the killed gulm servers, resulting in an
inconsistent lock state.  The problem can be easily fixed by fencing the lock
servers that were killed but not fenced previously.  I'm working on a solution.
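[Editorial sketch] The failure mode described in comment 4 can be illustrated as follows. All names and the function itself are illustrative assumptions, not gulm's actual code or API:

```python
# Illustrative sketch (not gulm's actual internals) of the recovery rule
# from comment 4: lock servers that were killed must be fenced before the
# lock state can be trusted again, even if the surviving server has lost
# quorum in the meantime.

def servers_to_fence(all_servers, killed, have_quorum):
    """Return the lock servers that must be fenced before recovery.

    The reported bug corresponds to skipping fencing when quorum was
    lost (killed servers keep stale locks, giving an inconsistent lock
    state and the observed mount deadlock); the fix is to fence killed
    servers unconditionally.
    """
    return [s for s in all_servers if s in killed]

# Example: three lock servers, master and one slave shot, quorum lost.
print(servers_to_fence(["a", "b", "c"], {"a", "b"}, have_quorum=False))
# -> ['a', 'b']
```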
Comment 5 Dean Jansa 2006-07-26 12:12:54 EDT
I'm still hitting this in RHEL4-U4 testing.  x86 cluster.
Comment 6 Corey Marthaler 2006-08-07 09:49:30 EDT
Hit this over the weekend on x86_64 during the "GULM kill Master and all but one
Slave" revolver scenario.
Comment 7 Kiersten (Kerri) Anderson 2006-09-22 12:52:04 EDT
Devel ACK.
Comment 8 Chris Feist 2007-01-29 18:29:55 EST
Ok, so it appears that gulm was not properly propagating all of the
slaves/clients to the slaves.  This should fix one type of lockup, and hopefully
the lockup that was occurring in this bug.

The fix is built in gulm-1.0.9-2.
Comment 11 Red Hat Bugzilla 2007-05-10 17:27:53 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

