Bug 154830 - attempting to open an existing config not in /etc/cluster will cause gui to die
Summary: attempting to open an existing config not in /etc/cluster will cause gui to die
Alias: None
Product: Red Hat Cluster Suite
Classification: Retired
Component: redhat-config-cluster
Version: 4
Hardware: All
OS: Linux
Target Milestone: ---
Assignee: Jim Parsons
QA Contact: Cluster QE
Depends On:
Reported: 2005-04-14 14:32 UTC by Corey Marthaler
Modified: 2009-04-16 19:51 UTC (History)
1 user

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2005-04-15 19:56:37 UTC

Attachments (Terms of Use)

Description Corey Marthaler 2005-04-14 14:32:21 UTC
Description of problem:
I tried to open a valid config file in /tmp and the GUI died:

Traceback (most recent call last):
  File "/usr/sbin/system-config-cluster", line 407, in ?
  File "/usr/sbin/system-config-cluster", line 390, in runFullGUI
    baseapp = basecluster(glade_xml, app)
  File "/usr/sbin/system-config-cluster", line 126, in __init__
    self.configtab = ConfigTab(glade_xml, self.model_builder)
  File "/usr/share/system-config-cluster/ConfigTab.py", line 98, in __init__
  File "/usr/share/system-config-cluster/ConfigTab.py", line 366, in prepare_tree
    resources = self.model_builder.getResources()
  File "/usr/share/system-config-cluster/ModelBuilder.py", line 473, in getResources
    return self.resources_ptr.getChildren()
AttributeError: 'NoneType' object has no attribute 'getChildren'

[root@tank-03 ~]# rpm -qa | grep system-config-cluster
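The traceback indicates that `ModelBuilder.getResources()` calls `getChildren()` on `self.resources_ptr` while that attribute is still `None`, which happens when the parsed config has no resources section. A minimal sketch of the failure and a defensive guard (hypothetical code, not the actual ModelBuilder implementation):

```python
class ModelBuilder:
    def __init__(self):
        # resources_ptr remains None when the loaded cluster.conf
        # contains no resources block (as in the config below)
        self.resources_ptr = None

    def getResources(self):
        # Guarding against None here returns an empty list instead of
        # letting the GUI die with:
        #   AttributeError: 'NoneType' object has no attribute 'getChildren'
        if self.resources_ptr is None:
            return []
        return self.resources_ptr.getChildren()
```

With such a guard, `ConfigTab.prepare_tree()` would simply see an empty resource list rather than crashing at startup.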

Comment 1 Corey Marthaler 2005-04-14 14:39:13 UTC
In addition, if I do have a valid config in /etc/cluster/ it will also fail
with the same assert.

Here is my config:

<?xml version="1.0"?>
<cluster name="tank-cluster" config_version="1">

        <lockserver name="tank-01.lab.msp.redhat.com"/>
        <lockserver name="tank-02.lab.msp.redhat.com"/>
        <lockserver name="tank-03.lab.msp.redhat.com"/>
        <lockserver name="tank-04.lab.msp.redhat.com"/>
        <lockserver name="tank-05.lab.msp.redhat.com"/>

        <clusternode name="tank-01.lab.msp.redhat.com" votes="1">

                        <method name="single">
                                <device name="apc" switch="1" port="1"/>
        <clusternode name="tank-02.lab.msp.redhat.com" votes="1">

                        <method name="single">
                                <device name="apc" switch="1" port="2"/>
        <clusternode name="tank-03.lab.msp.redhat.com" votes="1">

                        <method name="single">
                                <device name="apc" switch="1" port="3"/>
        <clusternode name="tank-04.lab.msp.redhat.com" votes="1">

                        <method name="single">
                                <device name="apc" switch="1" port="4"/>
        <clusternode name="tank-05.lab.msp.redhat.com" votes="1">

                        <method name="single">
                                <device name="apc" switch="1" port="5"/>
        <clusternode name="tank-06.lab.msp.redhat.com" votes="1">

                        <method name="single">
                                <device name="apc" switch="1" port="6"/>

        <fencedevice name="apc" agent="fence_apc" ipaddr="tank-apc" login="apc"



Comment 2 Corey Marthaler 2005-04-14 18:57:28 UTC
Bumping the priority because this happens whether you have an existing dlm
config or a gulm config.

Comment 3 Jim Parsons 2005-04-14 20:51:45 UTC
This is fixed in 0.9.31-1.0

Comment 4 Corey Marthaler 2005-04-15 19:56:37 UTC
Fix verified.
