Description of problem:
After `sudo yum update cman`, the node errors on startup:
/usr/sbin/cman_tool: aisexec daemon didn't start

Version-Release number of selected component (if applicable):
cman-2.0.98-1.el5_3.7

How reproducible:
Always. The cluster runs fine with version cman-2.0.98-1.el5_3.1.

Steps to Reproduce:
1. Update from cman-2.0.98-1.el5_3.1.
2. Restart that node.

Actual results:
The node errors when starting cman with "aisexec daemon didn't start".

Expected results:
The node rejoins the cluster.

Additional info:
The hostname in /etc/sysconfig/network is fully qualified. cluster.conf:

<?xml version="1.0"?>
<cluster alias="torrid" config_version="35" name="torrid">
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="oilfish.csun.edu" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="OILFISH_DRAC"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="coley.csun.edu" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="COLEY_DRAC"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="wrasse.csun.edu" nodeid="3" votes="1">
      <fence>
        <method name="1">
          <device name="WRASSE_DRAC"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <cman/>
  <fencedevices>
    <fencedevice agent="fence_drac5" ipaddr="172.31.37.7" login="root" name="COLEY_DRAC" passwd="********" secure="1"/>
    <fencedevice agent="fence_drac5" ipaddr="172.31.37.3" login="root" name="OILFISH_DRAC" passwd="********" secure="1"/>
    <fencedevice agent="fence_drac5" ipaddr="172.31.37.5" login="root" name="WRASSE_DRAC" passwd="********" secure="1"/>
  </fencedevices>
  <rm>
    <failoverdomains>
      <failoverdomain name="oilfish-only" nofailback="0" ordered="0" restricted="1">
        <failoverdomainnode name="oilfish.csun.edu" priority="1"/>
      </failoverdomain>
      <failoverdomain name="wrasse-only" nofailback="0" ordered="0" restricted="1">
        <failoverdomainnode name="wrasse.csun.edu" priority="1"/>
      </failoverdomain>
      <failoverdomain name="coley-only" nofailback="0" ordered="0" restricted="1">
        <failoverdomainnode name="coley.csun.edu" priority="1"/>
      </failoverdomain>
      <failoverdomain name="file-services" nofailback="1" ordered="0" restricted="1">
        <failoverdomainnode name="oilfish.csun.edu" priority="1"/>
        <failoverdomainnode name="coley.csun.edu" priority="1"/>
        <failoverdomainnode name="wrasse.csun.edu" priority="1"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <script file="/etc/init.d/httpd" name="web"/>
      <clusterfs device="/dev/mapper/vg00-lv00" force_unmount="0" fsid="42848" fstype="gfs" mountpoint="/web" name="webdata" self_fence="0"/>
    </resources>
    <service autostart="0" domain="oilfish-only" exclusive="0" max_restarts="0" name="oilfish-web" recovery="restart" restart_expire_time="0">
      <script ref="web"/>
    </service>
    <service autostart="0" domain="wrasse-only" exclusive="0" max_restarts="0" name="wrasse-web" recovery="restart" restart_expire_time="0">
      <script ref="web"/>
    </service>
    <service autostart="0" domain="coley-only" exclusive="0" max_restarts="0" name="coley-web" recovery="restart" restart_expire_time="0">
      <script ref="web"/>
    </service>
    <service autostart="0" domain="file-services" exclusive="0" name="samba" recovery="relocate">
      <ip address="130.166.246.105" monitor_link="1">
        <smb name="smbtest" workgroup="csun.edu"/>
      </ip>
    </service>
    <service autostart="0" domain="file-services" exclusive="0" name="nfs" recovery="relocate">
      <ip address="130.166.246.103" monitor_link="1">
        <clusterfs ref="webdata">
          <nfsexport name="data">
            <nfsclient allow_recover="1" name="subnet246" options="rw" target="130.166.246.0/24"/>
            <nfsclient allow_recover="1" name="subnet5" options="rw" target="130.166.5.0/24"/>
          </nfsexport>
        </clusterfs>
      </ip>
    </service>
  </rm>
</cluster>
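Since the report notes that the hostname in /etc/sysconfig/network is fully qualified, one thing worth checking is whether the node's hostname matches one of the clusternode names in cluster.conf (cman resolves its own node entry by name). A minimal diagnostic sketch of that check, using the node names from the config above; the helper functions are hypothetical, not part of any cman tooling:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the cluster.conf from this report (clusternode names only).
CLUSTER_CONF = """<?xml version="1.0"?>
<cluster alias="torrid" config_version="35" name="torrid">
  <clusternodes>
    <clusternode name="oilfish.csun.edu" nodeid="1" votes="1"/>
    <clusternode name="coley.csun.edu" nodeid="2" votes="1"/>
    <clusternode name="wrasse.csun.edu" nodeid="3" votes="1"/>
  </clusternodes>
</cluster>"""

def node_names(conf_xml):
    """Return the clusternode names declared in a cluster.conf document."""
    root = ET.fromstring(conf_xml)
    return [n.get("name") for n in root.iter("clusternode")]

def hostname_matches(hostname, conf_xml):
    """Check whether this host's name appears among the clusternode names."""
    return hostname in node_names(conf_xml)

# The FQDN entries in this config only match a fully qualified hostname;
# a short hostname would not match, and vice versa.
print(hostname_matches("oilfish.csun.edu", CLUSTER_CONF))  # True
print(hostname_matches("oilfish", CLUSTER_CONF))           # False
```

On a real node, `hostname` (or `uname -n`) would supply the name to compare against the local /etc/cluster/cluster.conf.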
I forgot to mention that if I roll back to cman-2.0.98-1.el5_3.1, the node starts up fine.
*** This bug has been marked as a duplicate of bug 487397 ***