I did some preliminary testing of using qdiskd in a non-voting way. This basically turns qdiskd into a storage monitor while completely removing it as a quorum determinant. You might think of it as 'qdiskd without the q'. It does still kill nodes via the cluster software when timeouts are reached, however.

To use this mode in a two-node cluster:

  <cman two_node="1" expected_votes="1"/>
  ...
  <quorumd master_wins="1" votes="0" label="mylabel" stop_cman="1"/>
                           ^^^^^^^^^
  ...

On a four-node cluster:

  <cman expected_votes="4"/>
  ...
  <quorumd master_wins="1" votes="0" label="mylabel" stop_cman="1"/>
                           ^^^^^^^^^
  ...

The advantage of this is that you can use any timeouts you please on qdiskd. Ex: want a 10 second token timeout with a 5 minute disk I/O timeout? Go for it!

From a user perspective, it means users don't have to roll their own storage monitoring scripts for use in conjunction with watchdog timers in order to monitor shared storage, and the watchdog scripts they *do* roll won't keep rebooting a machine (or have to be overly intelligent) if the storage is inaccessible when the machine starts up.

The disadvantage is that you must still have a special LUN or partition dedicated to qdiskd, and that you must manually configure it in cluster.conf.

Preliminary testing on RHEL 5.6 was good. The functionality has existed since qdiskd was written.

Expected behaviors:

* When the score drops below the minimum threshold (ex: heuristics all fail), the node reboots like it always has.
* When one node hangs for more than the quorum disk timeout, it is evicted and fenced by the rest of the cluster like it always has.
* If the quorum disk becomes inaccessible to all nodes in the cluster, the cluster's votes do not change and no nodes get evicted.

Known issues:

* If the disk qdiskd is accessing disappears for a long time and *comes back*, there is a chance that all nodes will try to evict each other. Administrators are encouraged to use the io_timeout parameter to prevent this.
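For completeness, a sketch of how the quorum device behind label="mylabel" would be set up and the zero-vote behavior confirmed. The device path /dev/sdb1 is an example only; substitute your dedicated LUN or partition:

```shell
# Initialize the shared LUN/partition as a quorum disk with the label
# referenced by <quorumd label="mylabel"/> (run once, from one node).
mkqdisk -c /dev/sdb1 -l mylabel

# List visible quorum disks on each node to confirm the label is seen.
mkqdisk -L

# After cman and qdiskd start, check that qdiskd contributes no votes:
# "Total votes" should equal the sum of the node votes alone.
cman_tool status
```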
Verified in version cman-2.0.115-84.el5, kernel 2.6.18-265.el5

1) When a heuristic fails, the node reboots. Configuration used:

  <cman expected_votes="3">
      <totem token="30000"/>
  </cman>
  <quorumd label="a_cluster" master_wins="1" stop_cman="1" interval="1"
           tko="10" votes="0" log_facility="local6" log_level="7">
      <heuristic program="ping -c1 -w1 xx.xx.xx.xx" score="1" interval="2" tko="4"/>
  </quorumd>

2) When a node hangs for more than the quorum disk timeout, it is fenced:

  echo "offline" > /sys/class/scsi_disk/2\:0\:0\:1/device/state
  echo "running" > /sys/class/scsi_disk/2\:0\:0\:1/device/state

3) When the quorum disk becomes inaccessible to all nodes in the cluster, no nodes are evicted.
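A quick sanity check of the timings in the verification config above: qdiskd declares a node dead after roughly interval * tko seconds, and the usual guidance is that the CMAN/totem token timeout should comfortably exceed that window (the exact safety factor depends on your setup). A minimal sketch with the values from the config:

```shell
# Values copied from the cluster.conf fragment above; adjust to taste.
interval=1        # quorumd interval, seconds
tko=10            # quorumd tko (missed cycles before eviction)
token_ms=30000    # totem token timeout, milliseconds

qdisk_timeout=$(( interval * tko ))   # qdiskd eviction window, seconds
token_s=$(( token_ms / 1000 ))        # token timeout, seconds

if [ "$qdisk_timeout" -lt "$token_s" ]; then
    echo "OK: qdiskd window (${qdisk_timeout}s) inside token timeout (${token_s}s)"
else
    echo "WARNING: raise the token timeout or lower interval*tko"
fi
```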
An advisory has been issued which should address the problem described in this bug report. This report is therefore being closed with a resolution of ERRATA. For more information on the solution and/or where to find the updated files, please follow the link below. You may reopen this bug report if the solution does not work for you.

http://rhn.redhat.com/errata/RHBA-2011-1001.html