Bug 154830 - attempting to open an existing config not in /etc/cluster will cause gui to die
Status: CLOSED NEXTRELEASE
Product: Red Hat Cluster Suite
Classification: Red Hat
Component: redhat-config-cluster
Version: 4
Hardware/OS: All Linux
Priority: high  Severity: high
Assigned To: Jim Parsons
QA Contact: Cluster QE
Reported: 2005-04-14 10:32 EDT by Corey Marthaler
Modified: 2009-04-16 15:51 EDT

Doc Type: Bug Fix
Last Closed: 2005-04-15 15:56:37 EDT

Description Corey Marthaler 2005-04-14 10:32:21 EDT
Description of problem:
I tried to open a valid config file in /tmp and the GUI died:

Traceback (most recent call last):
  File "/usr/sbin/system-config-cluster", line 407, in ?
    runFullGUI()
  File "/usr/sbin/system-config-cluster", line 390, in runFullGUI
    baseapp = basecluster(glade_xml, app)
  File "/usr/sbin/system-config-cluster", line 126, in __init__
    self.configtab = ConfigTab(glade_xml, self.model_builder)
  File "/usr/share/system-config-cluster/ConfigTab.py", line 98, in __init__
    self.prepare_tree()
  File "/usr/share/system-config-cluster/ConfigTab.py", line 366, in prepare_tree
    resources = self.model_builder.getResources()
  File "/usr/share/system-config-cluster/ModelBuilder.py", line 473, in getResources
    return self.resources_ptr.getChildren()
AttributeError: 'NoneType' object has no attribute 'getChildren'

[root@tank-03 ~]# rpm -qa | grep system-config-cluster
system-config-cluster-0.9.30-1.0
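
For context, the traceback shows ModelBuilder.getResources() calling
getChildren() on self.resources_ptr while it is still None. Below is a
minimal sketch of a defensive guard; the class skeleton and __init__ are
assumptions, and only getResources() and resources_ptr come from the
traceback above:

class ModelBuilder:
    def __init__(self):
        # Assumption: resources_ptr is only set when the parsed config
        # contains a <resources> section; otherwise it stays None.
        self.resources_ptr = None

    def getResources(self):
        # Guard the dereference that crashes 0.9.30-1.0: return an
        # empty list instead of calling getChildren() on None.
        if self.resources_ptr is None:
            return []
        return self.resources_ptr.getChildren()

Whether the actual fix in 0.9.31-1.0 adds a guard here or populates
resources_ptr earlier is not shown in this report; the sketch only marks
where the NoneType dereference occurs.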
Comment 1 Corey Marthaler 2005-04-14 10:39:13 EDT
In addition, if I do have a valid config in /etc/cluster/, it also fails
with the same assert.

Here is my config:

<?xml version="1.0"?>
<cluster name="tank-cluster" config_version="1">



<gulm>
        <lockserver name="tank-01.lab.msp.redhat.com"/>
        <lockserver name="tank-02.lab.msp.redhat.com"/>
        <lockserver name="tank-03.lab.msp.redhat.com"/>
        <lockserver name="tank-04.lab.msp.redhat.com"/>
        <lockserver name="tank-05.lab.msp.redhat.com"/>
</gulm>



<clusternodes>
        <clusternode name="tank-01.lab.msp.redhat.com" votes="1">

                <fence>
                        <method name="single">
                                <device name="apc" switch="1" port="1"/>
                        </method>
                </fence>
        </clusternode>
        <clusternode name="tank-02.lab.msp.redhat.com" votes="1">

                <fence>
                        <method name="single">
                                <device name="apc" switch="1" port="2"/>
                        </method>
                </fence>
        </clusternode>
        <clusternode name="tank-03.lab.msp.redhat.com" votes="1">

                <fence>
                        <method name="single">
                                <device name="apc" switch="1" port="3"/>
                        </method>
                </fence>
        </clusternode>
        <clusternode name="tank-04.lab.msp.redhat.com" votes="1">

                <fence>
                        <method name="single">
                                <device name="apc" switch="1" port="4"/>
                        </method>
                </fence>
        </clusternode>
        <clusternode name="tank-05.lab.msp.redhat.com" votes="1">

                <fence>
                        <method name="single">
                                <device name="apc" switch="1" port="5"/>
                        </method>
                </fence>
        </clusternode>
        <clusternode name="tank-06.lab.msp.redhat.com" votes="1">

                <fence>
                        <method name="single">
                                <device name="apc" switch="1" port="6"/>
                        </method>
                </fence>
        </clusternode>
</clusternodes>


<fencedevices>
        <fencedevice name="apc" agent="fence_apc" ipaddr="tank-apc" login="apc" passwd="apc"/>
</fencedevices>


<rm>
</rm>

</cluster>
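
Note the empty <rm> section above: the config has no <resources> block,
which fits the theory that resources_ptr is never populated. A quick
check with Python's standard library shows the section is present but
empty (the /tmp path is hypothetical):

import xml.etree.ElementTree as ET

# Parse the config pasted above (path is hypothetical).
root = ET.parse("/tmp/cluster.conf").getroot()
rm = root.find("rm")

print(rm is not None)                # True: <rm> is present
print(rm.find("resources") is None)  # True: but it has no <resources> child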
Comment 2 Corey Marthaler 2005-04-14 14:57:28 EDT
Bumping the priority because this happens whether you have an existing dlm
config or a gulm config.
Comment 3 Jim Parsons 2005-04-14 16:51:45 EDT
This is fixed in 0.9.31-1.0
Comment 4 Corey Marthaler 2005-04-15 15:56:37 EDT
Fix verified.
