Bug 154830

Summary: attempting to open an existing config not in /etc/cluster will cause gui to die
Product: [Retired] Red Hat Cluster Suite
Reporter: Corey Marthaler <cmarthal>
Component: redhat-config-cluster
Assignee: Jim Parsons <jparsons>
Status: CLOSED NEXTRELEASE
QA Contact: Cluster QE <mspqa-list>
Severity: high
Priority: high
Version: 4
CC: cluster-maint
Hardware: All
OS: Linux
Doc Type: Bug Fix
Last Closed: 2005-04-15 19:56:37 UTC

Description Corey Marthaler 2005-04-14 14:32:21 UTC
Description of problem:
I tried to open a valid config file in /tmp and the GUI died:

Traceback (most recent call last):
  File "/usr/sbin/system-config-cluster", line 407, in ?
    runFullGUI()
  File "/usr/sbin/system-config-cluster", line 390, in runFullGUI
    baseapp = basecluster(glade_xml, app)
  File "/usr/sbin/system-config-cluster", line 126, in __init__
    self.configtab = ConfigTab(glade_xml, self.model_builder)
  File "/usr/share/system-config-cluster/ConfigTab.py", line 98, in __init__
    self.prepare_tree()
  File "/usr/share/system-config-cluster/ConfigTab.py", line 366, in prepare_tree
    resources = self.model_builder.getResources()
  File "/usr/share/system-config-cluster/ModelBuilder.py", line 473, in getResources
    return self.resources_ptr.getChildren()
AttributeError: 'NoneType' object has no attribute 'getChildren'

[root@tank-03 ~]# rpm -qa | grep system-config-cluster
system-config-cluster-0.9.30-1.0
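
The traceback shows ModelBuilder.getResources() dereferencing self.resources_ptr
while it is still None, which happens whenever the opened config never populates
it. A minimal sketch of the kind of None guard that would avoid the crash,
assuming (this is a guess, not the actual 0.9.31 fix) that resources_ptr simply
caches the parsed <resources> element:

# ModelBuilder.py (hypothetical sketch, not the shipped fix)
def getResources(self):
    # resources_ptr is only set when the config contains a populated
    # <rm><resources> section; return an empty list instead of
    # dereferencing None so the ConfigTab can still build its tree.
    if self.resources_ptr is None:
        return []
    return self.resources_ptr.getChildren()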

Comment 1 Corey Marthaler 2005-04-14 14:39:13 UTC
In addition, if I do have a valid config in /etc/cluster/ it will also fail
with the same traceback.

Here is my config:

<?xml version="1.0"?>
<cluster name="tank-cluster" config_version="1">



<gulm>
        <lockserver name="tank-01.lab.msp.redhat.com"/>
        <lockserver name="tank-02.lab.msp.redhat.com"/>
        <lockserver name="tank-03.lab.msp.redhat.com"/>
        <lockserver name="tank-04.lab.msp.redhat.com"/>
        <lockserver name="tank-05.lab.msp.redhat.com"/>
</gulm>



<clusternodes>
        <clusternode name="tank-01.lab.msp.redhat.com" votes="1">

                <fence>
                        <method name="single">
                                <device name="apc" switch="1" port="1"/>
                        </method>
                </fence>
        </clusternode>
        <clusternode name="tank-02.lab.msp.redhat.com" votes="1">

                <fence>
                        <method name="single">
                                <device name="apc" switch="1" port="2"/>
                        </method>
                </fence>
        </clusternode>
        <clusternode name="tank-03.lab.msp.redhat.com" votes="1">

                <fence>
                        <method name="single">
                                <device name="apc" switch="1" port="3"/>
                        </method>
                </fence>
        </clusternode>
        <clusternode name="tank-04.lab.msp.redhat.com" votes="1">

                <fence>
                        <method name="single">
                                <device name="apc" switch="1" port="4"/>
                        </method>
                </fence>
        </clusternode>
        <clusternode name="tank-05.lab.msp.redhat.com" votes="1">

                <fence>
                        <method name="single">
                                <device name="apc" switch="1" port="5"/>
                        </method>
                </fence>
        </clusternode>
        <clusternode name="tank-06.lab.msp.redhat.com" votes="1">

                <fence>
                        <method name="single">
                                <device name="apc" switch="1" port="6"/>
                        </method>
                </fence>
        </clusternode>
</clusternodes>


<fencedevices>
        <fencedevice name="apc" agent="fence_apc" ipaddr="tank-apc" login="apc" passwd="apc"/>
</fencedevices>


<rm>
</rm>

</cluster>
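
Note the empty <rm> section above: when the parser looks for a <resources>
child it finds none, which is presumably how resources_ptr ends up None.
system-config-cluster uses its own DOM wrapper (the traceback calls
getChildren), so this standard-library snippet is only an illustration of
the underlying cause:

import xml.etree.ElementTree as ET

# An <rm> block with no children, like the one in the pasted config.
doc = ET.fromstring("<cluster><rm></rm></cluster>")
rm = doc.find("rm")
print(rm.find("resources"))  # -> None: there is no <resources> element to cache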


Comment 2 Corey Marthaler 2005-04-14 18:57:28 UTC
bumping the priority because this happens whether you have an existing dlm
config or a gulm config

Comment 3 Jim Parsons 2005-04-14 20:51:45 UTC
This is fixed in 0.9.31-1.0

Comment 4 Corey Marthaler 2005-04-15 19:56:37 UTC
Fix verified.