Bug 303981 - clurgmgr segfaults upon startup after cluster is stopped
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: rgmanager
Hardware: x86_64 Linux
Priority: low
Severity: high
Assigned To: Lon Hohberger
QA Contact: Cluster QE
Reported: 2007-09-24 16:39 EDT by Chris Harms
Modified: 2009-04-16 18:55 EDT
Fixed In Version: RHBA-2008-0353
Doc Type: Bug Fix
Last Closed: 2008-05-21 10:30:36 EDT

Attachments
core dump of rgmanager (76.91 KB, application/octet-stream)
2007-09-24 17:37 EDT, Chris Harms
Patch (880 bytes, patch)
2007-09-28 15:33 EDT, Lon Hohberger

Description Chris Harms 2007-09-24 16:39:51 EDT
Description of problem:

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:
1. Stop cluster via Luci
2. Start cluster via Luci
Actual results:
clurgmgr segfaults on one of my two nodes (the same node each time).

Expected results:
normal startup

Additional info:
Running 5.1 Beta 1 of all software.  The node that crashes appears differently
when the cluster is viewed in Luci: it is grayed out, and the only operations
listed for it are fencing or forced deletion, whereas the other node has all
available options in the drop-down.
Comment 1 Lon Hohberger 2007-09-24 17:11:20 EDT
Which node (node ID 1 or 2) ?
Comment 2 Lon Hohberger 2007-09-24 17:14:24 EDT
Actually - the easiest thing to do is create /etc/sysconfig/cluster w/ the
following contents:
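A plausible reconstruction of those contents, under the assumption that this
meant the standard RHEL initscripts core-limit override (the verbatim original
was not preserved; DAEMON_COREFILE_LIMIT is honored by /etc/init.d/functions
for daemons the init scripts start):

    # Assumed reconstruction, not the verbatim original:
    # raise the core file size limit for daemons started by
    # the cluster init scripts
    DAEMON_COREFILE_LIMIT="unlimited"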


This will cause clurgmgrd to produce a core file in the root directory -- could
you attach the core and your cluster configuration?
Comment 3 Lon Hohberger 2007-09-24 17:15:58 EDT
Fixing product
Comment 4 Chris Harms 2007-09-24 17:37:38 EDT
Created attachment 204601 [details]
core dump of rgmanager

core dump of clurgmgr on cluster startup
Comment 5 Chris Harms 2007-09-24 17:39:13 EDT
(In reply to comment #1)
> Which node (node ID 1 or 2) ?

Node 2
Comment 6 Lon Hohberger 2007-09-28 14:32:48 EDT
Wow... thanks for the core. :)
Comment 7 Lon Hohberger 2007-09-28 15:27:35 EDT
Ok, so...

We received a VF_VIEW_FORMED message for a transaction we did not have
recorded.  The transaction was allegedly from node 1, transaction ID 1, and came
immediately after node 2 had received the PORTOPENED status from node 1.

What normally happens is that nodes request the current state of distributed
data when they access it.  This means that it's safe to just throw away
messages for pieces of data we don't have.

This bug is restricted to RHEL5 because RHEL4 doesn't use CMAN's excellent
multicast capabilities.  This means that in the same situation on RHEL4, the
socket with the unwanted data would not have been opened at this point.

This is rather easy to fix.
Comment 8 Lon Hohberger 2007-09-28 15:33:23 EDT
Created attachment 210861 [details]
Patch
Comment 9 Lon Hohberger 2007-09-28 15:45:59 EDT
All the other parts of vf_process_msg() appear to correctly ignore messages
that have no associated key node.
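
A minimal sketch of the guard described in comments 7 and 9, using hypothetical
structures and names rather than the actual rgmanager symbols (the real fix is
the 880-byte patch attached in comment 8):

    #include <stdio.h>
    #include <stdint.h>
    #include <stddef.h>

    /* Hypothetical stand-ins for the view-formation state; the
     * real code tracks transactions per key node inside
     * vf_process_msg() and related functions. */
    struct vf_trans {
        int              node_id;   /* originating node */
        uint32_t         trans_id;  /* transaction identifier */
        struct vf_trans *next;
    };

    static struct vf_trans *recorded = NULL;

    /* Return the recorded transaction, or NULL if never seen. */
    static struct vf_trans *
    find_trans(int node_id, uint32_t trans_id)
    {
        struct vf_trans *t;

        for (t = recorded; t; t = t->next)
            if (t->node_id == node_id && t->trans_id == trans_id)
                return t;
        return NULL;
    }

    /* The guard: a VF_VIEW_FORMED for an unknown transaction is
     * dropped instead of dereferenced.  This is safe because
     * nodes re-request the current state of distributed data
     * whenever they access it. */
    static int
    handle_view_formed(int node_id, uint32_t trans_id)
    {
        struct vf_trans *t = find_trans(node_id, trans_id);

        if (!t) {
            fprintf(stderr, "VF: ignoring VIEW_FORMED for unknown "
                    "transaction %u from node %d\n",
                    trans_id, node_id);
            return 0;   /* drop it; do not crash */
        }
        /* ... commit the recorded view ... */
        return 1;
    }

    int main(void)
    {
        /* Node 1, transaction 1, right after PORTOPENED:
         * previously a segfault; now a logged no-op. */
        handle_view_formed(1, 1);
        return 0;
    }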
Comment 12 RHEL Product and Program Management 2007-11-14 12:04:26 EST
This request was evaluated by Red Hat Product Management for inclusion in a Red
Hat Enterprise Linux maintenance release.  Product Management has requested
further review of this request by Red Hat Engineering, for potential
inclusion in a Red Hat Enterprise Linux Update release for currently deployed
products.  This request is not yet committed for inclusion in an Update release.
Comment 15 errata-xmlrpc 2008-05-21 10:30:36 EDT
An advisory has been issued which should help the problem
described in this bug report. This report is therefore being
closed with a resolution of ERRATA. For more information
on the solution and/or where to find the updated files,
please follow the link below. You may reopen this bug report
if the solution does not work for you.

