Red Hat Bugzilla – Bug 198029
When a lock_gulm slave node powers off suddenly, the slave node can't rejoin normally
Last modified: 2010-01-11 22:11:53 EST
Description of problem:
On my 2-node cluster, the nodes are named gfs1 and gfs2. gfs1 is the
lock_gulm server and gfs2 is a slave node. When gfs2 powers off suddenly, it
can't rejoin the cluster normally and can't mount the GFS partitions after it
reboots.
When I run 'gulm_tool nodelist gfs1', gfs2 always shows Expired status, even
though I have restarted lock_gulmd on gfs2.
Version-Release number of selected component (if applicable):
GFS 6.1 on both nodes (both are i686)
Run the 'reboot -f' command on the lock_gulmd client node. When the node
reboots and runs lock_gulmd normally again, the problem appears.
Steps to Reproduce:
1. start lock_gulmd on both nodes
2. start GFS on both nodes
3. run 'reboot -f' or press the reset button on the lock_gulm client node
Actual results:
The lock_gulm client node can't normally join the cluster and can't use GFS.
Expected results:
The lock_gulm client can normally use GFS.
Please provide your system logs, your cluster configuration files, and the
fencing method you are using.
Closed due to inactivity.