Red Hat Bugzilla – Bug 136553
Unable to bind cluster to an IP and nodes reboot
Last modified: 2009-04-16 16:15:36 EDT
From Bugzilla Helper:
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; .NET
Description of problem:
I have a three-node cluster running on subnet one. I added two
additional IP subnets for iSCSI to connect storage. When the system is
rebooted, the cluster appears to try to make quorum connections through
one of the iSCSI IP subnets. I am not able to bind the cluster to the
subnet assigned to the cluster. When the subnets assigned to iSCSI are
enabled and any of the cluster nodes is started, the node reboots.
Microsoft's cluster software had the same problem, and a new feature to
bind the cluster to a subnet resolved the issue.
Version-Release number of selected component (if applicable):
Steps to Reproduce:
1. Create a 3-node cluster with an NFS service
2. Add 2 Gig-E adapters
3. Add the iSCSI (Cisco package) RPM
4. Configure iSCSI discovery in /etc/iscsi.config
5. Restart the cluster nodes
(Steps 3 and 4 are most likely not needed to recreate this problem.)
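For reference, a discovery entry for the Cisco iSCSI initiator driver
would look something like the following sketch. The portal address is a
made-up example, and the exact parameter name is an assumption about
that driver's config format; the report itself only names the file
/etc/iscsi.config.

```
# Hypothetical fragment of /etc/iscsi.config (Cisco iSCSI driver).
# A discovery entry tells the initiator which portal to query for
# targets. The address below is an example, not from this report.
DiscoveryAddress=192.168.10.5
```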
Actual Results: The cluster node that was just started rebooted.
Expected Results: The cluster should have been able to establish
quorum with the other two nodes.
1.2.9 is *very* old. Please upgrade to 1.2.16 in the Cluster Suite
channel and reproduce.
Furthermore, iSCSI isn't supported yet.
Binding the cluster to a given subnet can be accomplished by adding
/etc/hosts entries for the IP addresses on the subnet your cluster is
intended to communicate over and using those hostnames for your
cluster member names.
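As a sketch of the /etc/hosts approach above: assuming the intended
cluster subnet is 10.0.1.0/24 and per-subnet hostnames like node1-clu
(both the addresses and the names are assumptions for illustration),
the entries might look like this, with those hostnames then used as the
cluster member names.

```
# /etc/hosts -- hypothetical addresses and hostnames.
# Each entry pins a member name to an address on the subnet the
# cluster is intended to communicate over.
10.0.1.11   node1-clu
10.0.1.12   node2-clu
10.0.1.13   node3-clu
```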
When using broadcast heartbeat mode, your cluster will send packets on
several interfaces, regardless of whether they are the same interfaces
used by the cluster for actual communication.
How does this work on 1.2.22, which was released with U4?
All of my licenses expired on Dec 22, 2004 so I need to work this out
before I can try the upgrade.
I'm pretty sure this is a duplicate of the following bug:
Basically, clumanager 1.2.9 reboots if it fails to join a multicast
group after a few seconds. You should be able to fix this by changing
clumanager to use 'broadcast' instead of multicast heartbeating in the
cluster configuration.
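A hedged sketch of what that broadcast-heartbeat change might look like
in the clumanager configuration: the file path, element, and attribute
names below are assumptions about the clumanager 1.2 config format, not
details taken from this report.

```
<!-- Hypothetical fragment of /etc/cluster.xml: switch the membership
     daemon from multicast to broadcast heartbeating. Element and
     attribute names are assumptions for illustration. -->
<clumembd broadcast="yes" multicast="no"/>
```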
*** This bug has been marked as a duplicate of 114653 ***
Changed to 'CLOSED' state since 'RESOLVED' has been deprecated.