From Bugzilla Helper:
User-Agent: Mozilla/4.0

Description of problem:
In a 2-member cluster using the IP tiebreaker, when network communication between the two members is interrupted, they ping the tiebreaker. The "clusvcmgrd" daemon on the member that cannot reach the tiebreaker successfully stops services and exits. However, "cluquorumd" believes "clusvcmgrd" has crashed, and starts it back up. The continued operation of services on the "down" member, and the continued updates to shared storage, result in a permanent PANIC state. The "up" member, which won the tiebreaker, will not actually shoot a member that is in PANIC mode. The "up" member also will not take over services, because the "down" member is still reporting itself as "UP" on shared storage.

Version-Release number of selected component (if applicable):
clumanager-1.2.3-1

How reproducible:
Always

Steps to Reproduce:
1. Configure a 2-member cluster with NFS and an IP tiebreaker.
2. Unplug the single network cable from the member running NFS.

Actual Results:
The member that cannot ping the tiebreaker does not remove itself from the cluster. The member that can ping the tiebreaker cannot take over services. No failover occurs.

Expected Results:
The member that cannot ping the tiebreaker should stop all services and report itself as DOWN on the disk, or shut down clustering entirely. The member that is up should then take over services.

Additional info:
We can cause a failover by changing the behavior of "cluquorumd". This involved patching it with code that continues exiting even if "clusvcmgrd" has a non-zero exit status. However, this breaks the ability of "cluquorumd" to restart the service manager in the event of a real software crash. Please see the attachment for detailed log entries.
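The restart loop described above, and why the workaround patch is a trade-off, can be sketched roughly as follows. This is an illustrative model only, not clumanager code; the function names, exit-status constants, and the `quorum_lost` flag are all assumptions for the sake of the example.

```python
# Hypothetical model of the cluquorumd/clusvcmgrd supervision policy.
# None of these names come from the clumanager source.

EXIT_SHUTDOWN = 0  # assumed: service manager stopped services and exited on purpose
EXIT_CRASH = 1     # assumed: abnormal termination

def buggy_should_restart(exit_status):
    """Behavior observed in the report: the quorum daemon treats any
    exit of the service manager as a crash and restarts it, even when
    the member lost the tiebreaker and exited deliberately."""
    return True

def fixed_should_restart(exit_status, quorum_lost):
    """Sketch of a policy that avoids the PANIC loop while keeping
    crash recovery: never resurrect the service manager on a member
    that has lost quorum; otherwise restart only on a real crash."""
    if quorum_lost:
        return False  # the member should leave the cluster, not restart services
    return exit_status != EXIT_SHUTDOWN
```

The patch mentioned in "Additional info" corresponds to always returning False here, which fixes the failover but also loses the restart-on-crash behavior that `fixed_should_restart` tries to preserve.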
Um. There is no attachment...
This is because of a bug which causes the quorum daemon to start and check the disk tiebreaker information even when the IP-based one is enabled.

Try using the Update 1 beta code from RHN (1.2.6-1). If it is not available to you via RHN, you can try this one (which is basically the same, save for the version and the fact that the following is unofficial):

http://people.redhat.com/lhh/clumanager-1.2.6-0.1.89.2.13.i386.rpm
http://people.redhat.com/lhh/clumanager-1.2.6-0.1.89.2.13.src.rpm

This should solve the problem. Note: if you are not using power switches, members in the minority set will reboot immediately (which is what should have happened here) to try to preserve data integrity.

The "PANIC" state should only occur in two scenarios:

(1) Disk tiebreaker in use. The disk reports the member as "up" even though the network membership reports it as down. This is a broken cluster, but no STONITH action takes place unless the member stops updating its timestamp on shared storage.

(2) Failure to power-cycle a member after it is seen to be out of the majority. This only happens on members which have power controllers.
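For readers unfamiliar with the IP tiebreaker, the post-partition decision it drives can be sketched as below. Again, this is an illustrative model, not clumanager code; the function name and the returned action strings are invented for the example.

```python
# Hypothetical sketch of the 2-member partition decision with an IP
# tiebreaker, as described in the comment above. All names are illustrative.

def partition_action(can_ping_tiebreaker, peer_has_power_switch):
    """What a member should do after losing contact with its peer."""
    if can_ping_tiebreaker:
        # Majority set: fence (power-cycle) the peer if a power switch
        # is configured, then take over its services.
        return "fence-and-failover" if peer_has_power_switch else "failover"
    # Minority set: reboot immediately rather than keep updating shared
    # storage, to preserve data integrity.
    return "reboot"
```

In the original report the "down" member never took the "reboot" branch because the disk tiebreaker path was being consulted instead, which is the bug the 1.2.6 code fixes.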
Created attachment 96567 [details] Includes before and after logs, and small code patch
Thank you for the incredibly fast response (even before the attachment was posted)! We will try patching to the updated code.
Fixing product name. Clumanager on RHEL3 was part of RHCS3, not RHEL3