Bug 198029 - when lock_gulm slave node powers off suddenly, slave node can't normally join
Summary: when lock_gulm slave node powers off suddenly, slave node can't normally join
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Red Hat Cluster Suite
Classification: Retired
Component: gfs
Version: 4
Hardware: All
OS: Linux
Priority: medium
Severity: high
Target Milestone: ---
Assignee: Chris Feist
QA Contact: GFS Bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2006-07-08 08:23 UTC by Nicholas.ni
Modified: 2010-01-12 03:11 UTC (History)

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2006-11-28 23:49:49 UTC
Embargoed:



Description Nicholas.ni 2006-07-08 08:23:27 UTC
Description of problem:
On my 2-node cluster, the nodes are named gfs1 and gfs2 respectively. gfs1 is the
lock_gulm server and gfs2 is the slave node. When gfs2 powers off suddenly, it can't
rejoin the cluster normally and can't mount the GFS partitions after it reboots.
When I run 'gulm_tool nodelist gfs1', the lock_gulm status of gfs2 always shows
Expired, although I have restarted lock_gulmd on gfs2.
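For reference, the check and the recovery attempt look roughly like this (the device
path and mount point are placeholders for my setup; the init script name assumes the
stock GFS packages):

# On the master (gfs1): show the node states kept by the lock server.
gulm_tool nodelist gfs1

# On the slave (gfs2): restart the lock daemon and retry the mount.
service lock_gulmd restart
mount -t gfs /dev/pool/gfs01 /mnt/gfs01   # placeholder device and mount point

Even after this, gfs2 still shows as Expired in the nodelist output.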

Version-Release number of selected component (if applicable):
GFS 6.1 on both nodes (both nodes are i686)

How reproducible:
Run 'reboot -f' on the lock_gulm client node; when it reboots and runs lock_gulmd
normally again, the problem appears.

Steps to Reproduce:
1. Start lock_gulmd on both nodes.
2. Start GFS on both nodes.
3. Run 'reboot -f' or press the reset button on the lock_gulm client node (roughly
   the commands sketched below).
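Roughly the command sequence for the steps above (init script names assume the stock
RHEL4 GFS packages; step 3 is run on the client node gfs2 only):

# Steps 1 and 2, on both nodes:
service lock_gulmd start
service gfs start

# Step 3, on gfs2 only: force an immediate reboot with no clean shutdown.
reboot -f

# After gfs2 comes back up and lock_gulmd is running again, check its state
# from the master:
gulm_tool nodelist gfs1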
  
Actual results:
The lock_gulm client node can't rejoin the cluster normally and can't use GFS.

Expected results:
The lock_gulm client rejoins the cluster and can use GFS normally.

Additional info:

Comment 1 Kiersten (Kerri) Anderson 2006-07-10 02:56:59 UTC
Please provide your system logs, your cluster configuration files, and the fencing
method you are using.

Comment 2 Chris Feist 2006-11-28 23:49:49 UTC
Closed due to inactivity.

