Bug 515615 - After update to cman-2.0.98-1.el5_3.7, node does not join cluster
Summary: After update to cman-2.0.98-1.el5_3.7, node does not join cluster
Status: CLOSED DUPLICATE of bug 487397
Alias: None
Product: Red Hat Enterprise Linux 5
Classification: Red Hat
Component: cman
Version: 5.3
Hardware: x86_64
OS: Linux
Target Milestone: rc
Assignee: Christine Caulfield
QA Contact: Cluster QE
Depends On:
Reported: 2009-08-04 23:43 UTC by Arwin Tugade
Modified: 2009-08-05 06:57 UTC (History)
2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2009-08-05 06:57:02 UTC
Target Upstream Version:


Description Arwin Tugade 2009-08-04 23:43:27 UTC
Description of problem:
sudo yum update cman

Error on start up:
/usr/sbin/cman_tool: aisexec daemon didn't start

Version-Release number of selected component (if applicable):

How reproducible:
The cluster runs fine with cman-2.0.98-1.el5_3.1.

Steps to Reproduce:
1. Update cman from cman-2.0.98-1.el5_3.1 to cman-2.0.98-1.el5_3.7
2. Restart that node
Actual results:
The node errors when starting cman with "aisexec daemon didn't start".

Expected results:
The node rejoins the cluster.

Additional info:
The hostname in /etc/sysconfig/network is fully qualified.

<?xml version="1.0"?>
<cluster alias="torrid" config_version="35" name="torrid">
	<fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
	<clusternodes>
		<clusternode name="oilfish.csun.edu" nodeid="1" votes="1">
			<fence>
				<method name="1">
					<device name="OILFISH_DRAC"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="coley.csun.edu" nodeid="2" votes="1">
			<fence>
				<method name="1">
					<device name="COLEY_DRAC"/>
				</method>
			</fence>
		</clusternode>
		<clusternode name="wrasse.csun.edu" nodeid="3" votes="1">
			<fence>
				<method name="1">
					<device name="WRASSE_DRAC"/>
				</method>
			</fence>
		</clusternode>
	</clusternodes>
	<fencedevices>
		<fencedevice agent="fence_drac5" ipaddr="" login="root" name="COLEY_DRAC" passwd="********" secure="1"/>
		<fencedevice agent="fence_drac5" ipaddr="" login="root" name="OILFISH_DRAC" passwd="********" secure="1"/>
		<fencedevice agent="fence_drac5" ipaddr="" login="root" name="WRASSE_DRAC" passwd="********" secure="1"/>
	</fencedevices>
	<rm>
		<failoverdomains>
			<failoverdomain name="oilfish-only" nofailback="0" ordered="0" restricted="1">
				<failoverdomainnode name="oilfish.csun.edu" priority="1"/>
			</failoverdomain>
			<failoverdomain name="wrasse-only" nofailback="0" ordered="0" restricted="1">
				<failoverdomainnode name="wrasse.csun.edu" priority="1"/>
			</failoverdomain>
			<failoverdomain name="coley-only" nofailback="0" ordered="0" restricted="1">
				<failoverdomainnode name="coley.csun.edu" priority="1"/>
			</failoverdomain>
			<failoverdomain name="file-services" nofailback="1" ordered="0" restricted="1">
				<failoverdomainnode name="oilfish.csun.edu" priority="1"/>
				<failoverdomainnode name="coley.csun.edu" priority="1"/>
				<failoverdomainnode name="wrasse.csun.edu" priority="1"/>
			</failoverdomain>
		</failoverdomains>
		<resources>
			<script file="/etc/init.d/httpd" name="web"/>
			<clusterfs device="/dev/mapper/vg00-lv00" force_unmount="0" fsid="42848" fstype="gfs" mountpoint="/web" name="webdata" self_fence="0"/>
		</resources>
		<service autostart="0" domain="oilfish-only" exclusive="0" max_restarts="0" name="oilfish-web" recovery="restart" restart_expire_time="0">
			<script ref="web"/>
		</service>
		<service autostart="0" domain="wrasse-only" exclusive="0" max_restarts="0" name="wrasse-web" recovery="restart" restart_expire_time="0">
			<script ref="web"/>
		</service>
		<service autostart="0" domain="coley-only" exclusive="0" max_restarts="0" name="coley-web" recovery="restart" restart_expire_time="0">
			<script ref="web"/>
		</service>
		<service autostart="0" domain="file-services" exclusive="0" name="samba" recovery="relocate">
			<ip address="" monitor_link="1">
				<smb name="smbtest" workgroup="csun.edu"/>
			</ip>
		</service>
		<service autostart="0" domain="file-services" exclusive="0" name="nfs" recovery="relocate">
			<ip address="" monitor_link="1">
				<clusterfs ref="webdata">
					<nfsexport name="data">
						<nfsclient allow_recover="1" name="subnet246" options="rw" target=""/>
						<nfsclient allow_recover="1" name="subnet5" options="rw" target=""/>
					</nfsexport>
				</clusterfs>
			</ip>
		</service>
	</rm>
</cluster>

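Since the report flags the fully qualified hostname in /etc/sysconfig/network, one quick sanity check is whether the system hostname matches a <clusternode name="..."> entry, because cman identifies the local node by name. A minimal sketch (not from the report; the cluster.conf path is the RHEL 5 default):

```shell
# Hedged check: does the running hostname match a clusternode entry?
# cman matches the local node against <clusternode name="..."> values.
node=$(uname -n)
echo "system hostname: $node"
if grep -q "clusternode name=\"$node\"" /etc/cluster/cluster.conf 2>/dev/null; then
    echo "hostname matches a clusternode entry"
else
    echo "no matching clusternode entry found (or cluster.conf missing)"
fi
```

A mismatch here (e.g. a short hostname configured but FQDNs in cluster.conf, or vice versa) is a common reason for cman failing to join.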
Comment 1 Arwin Tugade 2009-08-04 23:44:16 UTC
I forgot to mention that if I roll back to cman-2.0.98-1.el5_3.1, the node starts up fine.
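The rollback described above can be sketched with rpm's --oldpackage flag, which permits installing an older build over a newer one. This is an illustrative sequence, not the reporter's exact commands, and the .rpm filename is an assumption; substitute the package from your local repository:

```shell
# Hedged rollback sketch: downgrade cman to the known-good build.
service cman stop                                        # stop the cluster manager on this node
rpm -Uvh --oldpackage cman-2.0.98-1.el5_3.1.x86_64.rpm   # --oldpackage allows the downgrade
service cman start                                       # rejoin the cluster
cman_tool status                                         # confirm the node is a member again
```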

Comment 2 Christine Caulfield 2009-08-05 06:57:02 UTC

*** This bug has been marked as a duplicate of bug 487397 ***
