Description of problem:

#nigeb=nodes.ccs mtime=1111608446 size=864
nodes {
    tank-01.lab.msp.redhat.com {
        ip_interfaces {
            eth1 = "10.1.1.91"
        }
        usedev = "eth1"
        fence {
            fence1 {
                tank-apc { port = "1" switch = "1" }
            }
        }
    }
    tank-02.lab.msp.redhat.com {
        ip_interfaces {
            eth1 = "10.1.1.92"
        }
        usedev = "eth1"
        fence {
            fence1 {
                tank-apc { port = "2" switch = "1" }
            }
        }
    }
    tank-03.lab.msp.redhat.com {
        ip_interfaces {
            eth1 = "10.1.1.93"
        }
        usedev = "eth1"
        fence {
            fence1 {
                tank-apc { port = "3" switch = "1" }
            }
        }
    }
    tank-04.lab.msp.redhat.com {
        ip_interfaces {
            eth1 = "10.1.1.94"
        }
        usedev = "eth1"
        fence {
            fence1 {
                tank-apc { port = "4" switch = "1" }
            }
        }
    }
    tank-05.lab.msp.redhat.com {
        ip_interfaces {
            eth1 = "10.1.1.95"
        }
        usedev = "eth1"
        fence {
            fence1 {
                tank-apc { port = "5" switch = "1" }
            }
        }
    }
}
#dne=nodes.ccs hash=9FD1F953

Mar 23 14:10:52 tank-01 lock_gulmd[3053]: Starting lock_gulmd v6.0.2. (built Mar 18 2005 18:51:57) Copyright (C) 2004 Red Hat, Inc. All rights reserved.
Mar 23 14:10:52 tank-01 lock_gulmd[3053]: You are running in Fail-over mode.
Mar 23 14:10:52 tank-01 lock_gulmd[3053]: I am (tank-01.lab.msp.redhat.com) with ip (10.1.1.91)
Mar 23 14:10:52 tank-01 lock_gulmd[3053]: Forked core [3054].
Mar 23 14:10:53 tank-01 lock_gulmd[3053]: Forked locktable [3055].
Mar 23 14:10:54 tank-01 lock_gulmd[3053]: Forked ltpx [3056].
Mar 23 14:10:54 tank-01 lock_gulmd_core[3054]: I see no Masters, So I am Arbitrating until enough Slaves talk to me.
Mar 23 14:10:54 tank-01 lock_gulmd_core[3054]: Could not send quorum update to slave tank-01.lab.msp.redhat.com
Mar 23 14:10:54 tank-01 lock_gulmd_core[3054]: New generation of server state. (1111608654415753)
Mar 23 14:10:54 tank-01 lock_gulmd_LTPX[3056]: New Master at tank-01.lab.msp.redhat.com:10.1.1.91
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [config_ccs.c:66] For tank-04.lab.msp.redhat.com, ip 10.1.1.94 doesn't match 10.1.1.94
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [core_io.c:1385] Node (tank-04.lab.msp.redhat.com:10.1.1.94) has been denied from connecting here.
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [config_ccs.c:66] For tank-05.lab.msp.redhat.com, ip 10.1.1.95 doesn't match 10.1.1.95
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [core_io.c:1385] Node (tank-05.lab.msp.redhat.com:10.1.1.95) has been denied from connecting here.
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [config_ccs.c:66] For tank-02.lab.msp.redhat.com, ip 10.1.1.92 doesn't match 10.1.1.92
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [core_io.c:1385] Node (tank-02.lab.msp.redhat.com:10.1.1.92) has been denied from connecting here.
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [config_ccs.c:66] For tank-03.lab.msp.redhat.com, ip 10.1.1.93 doesn't match 10.1.1.93
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [core_io.c:1385] Node (tank-03.lab.msp.redhat.com:10.1.1.93) has been denied from connecting here.
Mar 23 14:10:58 tank-01 lock_gulmd_core[3054]: ERROR [config_ccs.c:66] For tank-02.lab.msp.redhat.com, ip 10.1.1.92 doesn't match 10.1.1.92

[root@tank-05 root]# gulm_tool getstats tank-01
I_am = Arbitrating
quorum_has = 1
quorum_needs = 2
rank = 0
quorate = false
GenerationID = 1111608654415753
run time = 372
pid = 3054
verbosity = Default
failover = enabled
locked = 0

[root@tank-05 root]# gulm_tool getstats tank-02
I_am = Client
quorum_has = 1
quorum_needs = 2
rank = -1
quorate = false
GenerationID = 0
run time = 373
pid = 7457
verbosity = Default
failover = enabled
locked = 0

[root@tank-05 root]# gulm_tool getstats tank-03
I_am = Pending
quorum_has = 1
quorum_needs = 2
rank = 1
quorate = false
GenerationID = 0
run time = 374
pid = 7457
verbosity = Default
failover = enabled
locked = 0

[root@tank-05 root]# gulm_tool getstats tank-04
I_am = Client
quorum_has = 1
quorum_needs = 2
rank = -1
quorate = false
GenerationID = 0
run time = 375
pid = 7457
verbosity = Default
failover = enabled
locked = 0

[root@tank-05 root]# gulm_tool getstats tank-05
I_am = Pending
quorum_has = 1
quorum_needs = 2
rank = 2
quorate = false
GenerationID = 0
run time = 375
pid = 7019
verbosity = Default
failover = enabled
locked = 0

Version-Release number of selected component (if applicable):
[root@tank-02 root]# lock_gulmd -V
lock_gulmd v6.0.2 (built Mar 18 2005 18:51:57)
Copyright (C) 2004 Red Hat, Inc. All rights reserved.
Fixed. The switch had the wrong default.
Fix verified.