Bug 151954 - lock_gulmd doesn't use "usedev" ip interface
Keywords:
Status: CLOSED NEXTRELEASE
Alias: None
Product: Red Hat Cluster Suite
Classification: Retired
Component: gulm
Version: 3
Hardware: All
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Assignee: michael conrad tadpol tilstra
QA Contact: Cluster QE
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2005-03-23 20:16 UTC by Corey Marthaler
Modified: 2009-04-16 20:24 UTC
CC List: 1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2005-03-23 23:31:30 UTC
Embargoed:



Description Corey Marthaler 2005-03-23 20:16:26 UTC
Description of problem:
Each node in nodes.ccs sets usedev = "eth1" with a matching ip_interfaces entry, yet lock_gulmd denies the other nodes when they try to connect to the master. The ccs dump, syslog excerpt, and gulm_tool output follow.
#nigeb=nodes.ccs mtime=1111608446 size=864
nodes{
        tank-01.lab.msp.redhat.com{
                ip_interfaces {
                        eth1="10.1.1.91"
                }
                usedev = "eth1"
                fence{
                        fence1{
                                tank-apc{
                                        port="1"
                                        switch="1"
                                }
                        }
                }
        }
        tank-02.lab.msp.redhat.com{
                ip_interfaces {
                        eth1="10.1.1.92"
                }
                usedev = "eth1"
                fence{
                        fence1{
                                tank-apc{
                                        port="2"
                                        switch="1"
                                }
                        }
                }
        }
        tank-03.lab.msp.redhat.com{
                ip_interfaces {
                        eth1="10.1.1.93"
                }
                usedev = "eth1"
                fence{
                        fence1{
                                tank-apc{
                                        port="3"
                                        switch="1"
                                }
                        }
                }
        }
        tank-04.lab.msp.redhat.com{
                ip_interfaces {
                        eth1="10.1.1.94"
                }
                usedev = "eth1"
                fence{
                        fence1{
                                tank-apc{
                                        port="4"
                                        switch="1"
                                }
                        }
                }
        }
        tank-05.lab.msp.redhat.com{
                ip_interfaces {
                        eth1="10.1.1.95"
                }
                usedev = "eth1"
                fence{
                        fence1{
                                tank-apc{
                                        port="5"
                                        switch="1"
                                }
                        }
                }
        }
}
#dne=nodes.ccs hash=9FD1F953
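
Every node above pins its address to eth1 via usedev, so lock_gulmd is expected to take each node's IP straight from the ip_interfaces entry that usedev names, rather than resolving the node name. A minimal sketch of that expected lookup, using hypothetical struct and function names (the real parsing lives in config_ccs.c):

#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <arpa/inet.h>

/* Hypothetical node record for illustration; not gulm's actual structs. */
struct node_cfg {
        const char *name;
        const char *usedev;      /* from usedev = "eth1" */
        const char *ifname[4];   /* keys of ip_interfaces{} */
        const char *ifaddr[4];   /* values of ip_interfaces{} */
        int ifcount;
};

/* Expected behaviour: when usedev is set, the node's address must come
 * from the matching ip_interfaces entry, never from a hostname lookup. */
static int node_addr(const struct node_cfg *n, struct in_addr *out)
{
        int i;
        for (i = 0; i < n->ifcount; i++) {
                if (n->usedev && strcmp(n->ifname[i], n->usedev) == 0)
                        return inet_aton(n->ifaddr[i], out) ? 0 : -1;
        }
        return -1; /* usedev names an interface not in ip_interfaces{} */
}

int main(void)
{
        struct node_cfg tank01 = {
                "tank-01.lab.msp.redhat.com", "eth1",
                { "eth1" }, { "10.1.1.91" }, 1
        };
        struct in_addr a;

        if (node_addr(&tank01, &a) == 0)
                printf("%s -> %s\n", tank01.name, inet_ntoa(a));
        return 0;
}

Run against the tank-01 entry above, this prints tank-01.lab.msp.redhat.com -> 10.1.1.91, which is exactly the address the master reports for itself in the log below; the bug is that the other nodes are then denied anyway.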

Mar 23 14:10:52 tank-01 lock_gulmd[3053]: Starting lock_gulmd v6.0.2. (built Mar
18 2005 18:51:57) Copyright (C) 2004 Red Hat, Inc.  All rights reserved.
Mar 23 14:10:52 tank-01 lock_gulmd[3053]: You are running in Fail-over mode.
Mar 23 14:10:52 tank-01 lock_gulmd[3053]: I am (tank-01.lab.msp.redhat.com) with
ip (10.1.1.91)
Mar 23 14:10:52 tank-01 lock_gulmd[3053]: Forked core [3054].
Mar 23 14:10:53 tank-01 lock_gulmd[3053]: Forked locktable [3055].
Mar 23 14:10:54 tank-01 lock_gulmd[3053]: Forked ltpx [3056].
Mar 23 14:10:54 tank-01 lock_gulmd_core[3054]: I see no Masters, So I am
Arbitrating until enough Slaves talk to me.
Mar 23 14:10:54 tank-01 lock_gulmd_core[3054]: Could not send quorum update to
slave tank-01.lab.msp.redhat.com
Mar 23 14:10:54 tank-01 lock_gulmd_core[3054]: New generation of server state.
(1111608654415753)
Mar 23 14:10:54 tank-01 lock_gulmd_LTPX[3056]: New Master at
tank-01.lab.msp.redhat.com:10.1.1.91
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [config_ccs.c:66] For
tank-04.lab.msp.redhat.com, ip 10.1.1.94 doesn't match 10.1.1.94
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [core_io.c:1385] Node
(tank-04.lab.msp.redhat.com:10.1.1.94) has been denied from connecting here.
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [config_ccs.c:66] For
tank-05.lab.msp.redhat.com, ip 10.1.1.95 doesn't match 10.1.1.95
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [core_io.c:1385] Node
(tank-05.lab.msp.redhat.com:10.1.1.95) has been denied from connecting here.
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [config_ccs.c:66] For
tank-02.lab.msp.redhat.com, ip 10.1.1.92 doesn't match 10.1.1.92
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [core_io.c:1385] Node
(tank-02.lab.msp.redhat.com:10.1.1.92) has been denied from connecting here.
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [config_ccs.c:66] For
tank-03.lab.msp.redhat.com, ip 10.1.1.93 doesn't match 10.1.1.93
Mar 23 14:10:55 tank-01 lock_gulmd_core[3054]: ERROR [core_io.c:1385] Node
(tank-03.lab.msp.redhat.com:10.1.1.93) has been denied from connecting here.
Mar 23 14:10:58 tank-01 lock_gulmd_core[3054]: ERROR [config_ccs.c:66] For
tank-02.lab.msp.redhat.com, ip 10.1.1.92 doesn't match 10.1.1.92

[root@tank-05 root]# gulm_tool getstats tank-01
I_am = Arbitrating
quorum_has = 1
quorum_needs = 2
rank = 0
quorate = false
GenerationID = 1111608654415753
run time = 372
pid = 3054
verbosity = Default
failover = enabled
locked = 0
[root@tank-05 root]# gulm_tool getstats tank-02
I_am = Client
quorum_has = 1
quorum_needs = 2
rank = -1
quorate = false
GenerationID = 0
run time = 373
pid = 7457
verbosity = Default
failover = enabled
locked = 0
[root@tank-05 root]# gulm_tool getstats tank-03
I_am = Pending
quorum_has = 1
quorum_needs = 2
rank = 1
quorate = false
GenerationID = 0
run time = 374
pid = 7457
verbosity = Default
failover = enabled
locked = 0
[root@tank-05 root]# gulm_tool getstats tank-04
I_am = Client
quorum_has = 1
quorum_needs = 2
rank = -1
quorate = false
GenerationID = 0
run time = 375
pid = 7457
verbosity = Default
failover = enabled
locked = 0
[root@tank-05 root]# gulm_tool getstats tank-05
I_am = Pending
quorum_has = 1
quorum_needs = 2
rank = 2
quorate = false
GenerationID = 0
run time = 375
pid = 7019
verbosity = Default
failover = enabled
locked = 0


Version-Release number of selected component (if applicable):
[root@tank-02 root]# lock_gulmd -V
lock_gulmd v6.0.2 (built Mar 18 2005 18:51:57)
Copyright (C) 2004 Red Hat, Inc.  All rights reserved.

Comment 1 michael conrad tadpol tilstra 2005-03-23 20:18:56 UTC
Fixed. The switch had the wrong default.
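
A hedged reading of this for anyone on an unfixed build: "switch had wrong default" points at a verdict switch whose default case rejects, so the usedev comparison path falls through to "no match" even when the two address strings are identical. The reconstruction below is illustrative only, not the actual gulm code or fix:

#include <stdio.h>
#include <string.h>

enum ip_src { SRC_HOSTNAME, SRC_USEDEV };

/* Hypothetical reconstruction of the bug class: the default case
 * rejects, so the usedev path is denied even when both address
 * strings compare equal. */
static int ip_matches(enum ip_src src, const char *want, const char *got)
{
        if (strcmp(want, got) != 0)
                return 0;
        switch (src) {
        case SRC_HOSTNAME:
                return 1;
        /* missing: case SRC_USEDEV: return 1;  -- the shape of the fix */
        default:
                return 0; /* wrong default */
        }
}

int main(void)
{
        /* Equal strings, yet the usedev path reports a mismatch. */
        printf("usedev match: %d\n",
               ip_matches(SRC_USEDEV, "10.1.1.94", "10.1.1.94"));
        return 0;
}

Compiled and run, this prints usedev match: 0 despite both arguments being "10.1.1.94", reproducing the shape of the config_ccs.c:66 errors above; adding the missing case flips it to 1.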

Comment 2 Corey Marthaler 2005-03-23 23:31:30 UTC
Fix verified.

