Bug 1126872 - ccs list services displays traceback (Comment instance has no attribute 'tagName')
Summary: ccs list services displays traceback (Comment instance has no attribute 'tagName')
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Linux 6
Classification: Red Hat
Component: ricci
Version: 6.5
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: unspecified
Target Milestone: rc
Target Release: ---
Assignee: Chris Feist
QA Contact: cluster-qe@redhat.com
URL:
Whiteboard:
Duplicates: 1199602
Depends On:
Blocks: 1172231
 
Reported: 2014-08-05 13:47 UTC by Chester
Modified: 2019-07-11 08:45 UTC
CC: 6 users

Fixed In Version: ccs-0.16.2-77.el6.x86_64
Doc Type: Bug Fix
Doc Text:
Cause: ccs does not properly handle comments in the cluster.conf file.
Consequence: Tracebacks can occur in ccs when listing services if XML comments exist in the services section of cluster.conf.
Fix: ccs now ignores any comments in the services/resources sections of cluster.conf instead of trying to parse them.
Result: ccs no longer produces a traceback when comments exist in the services or resources sections of cluster.conf.
Clone Of:
Environment:
Last Closed: 2015-07-22 07:33:55 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 1377703 0 Troubleshoot None ccs prints a traceback and "AttributeError: Comment instance has no attribute 'getAttribute'" in a RHEL 6 High Availabil... 2019-07-11 08:45:02 UTC
Red Hat Product Errata RHBA-2015:1405 0 normal SHIPPED_LIVE ricci bug fix and enhancement update 2015-07-20 18:07:08 UTC

Description Chester 2014-08-05 13:47:27 UTC
Description of problem: Attempting to list the services in a cluster can result in a traceback. 


Version-Release number of selected component (if applicable):
[root@qpid-mgmt-02 tmp]# rpm -qa | grep -E "(ccs|rgmanager|ricci|cman|modclusterd|corosync)"
corosync-1.4.1-17.el6_5.1.x86_64
rgmanager-3.0.12.1-19.el6.x86_64
corosynclib-1.4.1-17.el6_5.1.x86_64
ricci-0.16.2-69.el6_5.1.x86_64
cman-3.0.12.1-59.el6_5.2.x86_64
ccs-0.16.2-69.el6_5.1.x86_64

How reproducible: 100%


Steps to Reproduce:
1. setup cluster
2. ccs -h qpid-mgmt-02 --lsservices

Actual results:
[root@qpid-mgmt-02 tmp]# ccs -h qpid-mgmt-02 --lsservices
service: name=qpid-mgmt-01-service, domain=qpid-mgmt-01-domain, recovery=restart
  script: ref=qpidd
service: name=qpid-mgmt-02-service, domain=qpid-mgmt-02-domain, recovery=restart
  script: ref=qpidd
service: name=qpid-mgmt-primary-service, exclusive=0, autostart=1, recovery=relocate
  script: ref=qpidd-primary
Traceback (most recent call last):
  File "/usr/sbin/ccs", line 2399, in <module>
    main(sys.argv[1:])
  File "/usr/sbin/ccs", line 265, in main
    if (listservices): list_services()
  File "/usr/sbin/ccs", line 673, in list_services
    print_services_map(node,0)
  File "/usr/sbin/ccs", line 696, in print_services_map
    print_services_map(cn, level + 1)
  File "/usr/sbin/ccs", line 694, in print_services_map
    print prefix + node.tagName + ": " + nodeattr
AttributeError: Comment instance has no attribute 'tagName'

Expected results:
Services are listed without traceback.

Additional info:
[root@qpid-mgmt-02 tmp]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="20" name="qpid-mgmt">
	<cman expected_votes="1" two_node="1"/>
	<clusternodes>
		<clusternode name="qpid-mgmt-01" nodeid="1" votes="1">
			<fence>
				<method name="single"/>
			</fence>
		</clusternode>
		<clusternode name="qpid-mgmt-02" nodeid="2" votes="1">
			<fence>
				<method name="single"/>
			</fence>
		</clusternode>
	</clusternodes>
	<fencedevices/>
	<rm>
		<failoverdomains>
			<failoverdomain name="qpid-mgmt-01-domain" restricted="1">
				<failoverdomainnode name="qpid-mgmt-01"/>
			</failoverdomain>
			<failoverdomain name="qpid-mgmt-02-domain" restricted="1">
				<failoverdomainnode name="qpid-mgmt-02"/>
			</failoverdomain>
		</failoverdomains>
		<resources>
			<!-- start a qpidd broker acting as a backup -->
			<script file="/etc/init.d/qpidd" name="qpidd"/>
			<!-- promote the qpidd broker on this node to primary -->
			<script file="/etc/init.d/qpidd-primary" name="qpidd-primary"/>
			<!-- assign a virtual IP address for qpid client traffic -->
			<!-- NOTE: ideally we want the client traffic to be in a separate subnet -->
			<ip address="192.168.111.1" monitor_link="1"/>
		</resources>
		<!-- service configuration -->
		<!-- There is a qpidd service on each node, it should be restarted if it fails -->
		<service domain="qpid-mgmt-01-domain" name="qpid-mgmt-01-service" recovery="restart">
			<script ref="qpidd"/>
		</service>
		<service domain="qpid-mgmt-02-domain" name="qpid-mgmt-02-service" recovery="restart">
			<script ref="qpidd"/>
		</service>
		<!-- There should always be a single qpidd-primary service, it can run on any node -->
		<service autostart="1" exclusive="0" name="qpid-mgmt-primary-service" recovery="relocate">
			<script ref="qpidd-primary"/>
			<!-- The primary has the IP addresses for brokers and clients to connect -->
			<ip ref="192.168.111.1"/>
		</service>
	</rm>
</cluster>

Comment 1 Chester 2014-08-05 13:54:10 UTC
[root@qpid-mgmt-02 tmp]# clustat
Cluster Status for qpid-mgmt @ Tue Aug  5 09:39:45 2014
Member Status: Quorate

 Member Name                                                  ID   Status
 ------ ----                                                  ---- ------
 qpid-mgmt-01                                                     1 Online
 qpid-mgmt-02                                                     2 Online, Local



After rebooting, the cluster seems to be in a reliable state:
[root@qpid-mgmt-02 ~]# clustat
Cluster Status for qpid-mgmt @ Tue Aug  5 09:49:36 2014
Member Status: Quorate

 Member Name                                                  ID   Status
 ------ ----                                                  ---- ------
 qpid-mgmt-01                                                     1 Online, rgmanager
 qpid-mgmt-02                                                     2 Online, Local, rgmanager

 Service Name                                        Owner (Last)                                        State         
 ------- ----                                        ----- ------                                        -----         
 service:qpid-mgmt-01-service                        qpid-mgmt-01                                        started       
 service:qpid-mgmt-02-service                        qpid-mgmt-02                                        started       
 service:qpid-mgmt-primary-service                   (qpid-mgmt-01)                                      recoverable 

And listing the nodes works:
[root@qpid-mgmt-02 ~]# ccs -h qpid-mgmt-02 --lsnodes
qpid-mgmt-01: votes=1, nodeid=1
qpid-mgmt-02: votes=1, nodeid=2


So, it seems the traceback is limited to the cluster being out of whack.

Comment 3 Chris Feist 2014-08-11 20:03:34 UTC
This error is occurring because ccs is not handling the comments in the cluster.conf file properly.

As a temporary workaround, removing the comments from the service and resources sections will prevent this error from occurring.
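The failure mode can be reproduced outside ccs in a few lines of Python. This is a sketch, not the actual ccs source; it assumes, based on the "Comment instance has no attribute 'tagName'" message in the traceback, that ccs parses cluster.conf with the standard xml.dom.minidom module:

```python
# Sketch of the failure mode: xml.dom.minidom keeps XML comments in
# childNodes as Comment instances, which (unlike Element) have no tagName
# attribute, so code assuming every child is an element raises AttributeError.
from xml.dom.minidom import parseString

doc = parseString(
    "<resources>"
    "<!-- start a qpidd broker acting as a backup -->"
    '<script file="/etc/init.d/qpidd" name="qpidd"/>'
    "</resources>"
)
for child in doc.documentElement.childNodes:
    try:
        print(child.tagName)
    except AttributeError as err:
        # Hit for the Comment node, matching the error in this bug
        print("AttributeError:", err)
```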

Comment 5 Chris Feist 2015-03-02 23:50:01 UTC
Fix Upstream:

https://github.com/feist/ccs/commit/c672a2386688da1166898ce5e088951b07c04ba7


To test, use cluster.conf with these contents:
<?xml version="1.0"?>
<cluster config_version="20" name="a">
        <cman expected_votes="1" two_node="1"/>
        <clusternodes>
        </clusternodes>
        <fencedevices/>
        <rm>
                <resources>
                        <!-- COMMENT -->
                        <ip address="192.168.111.1" monitor_link="1"/>
                </resources>
        </rm>
</cluster>


And run 'ccs -f /tmp/test.conf --lsservices'. Before the fix you get a traceback; after, you get this:

resources: 
  ip: monitor_link=1, address=192.168.111.1
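The approach described above (skipping comments rather than parsing them) can be sketched with minidom's nodeType check. The function and variable names below are illustrative, not the actual ccs source:

```python
# Minimal sketch of the fix: only recurse into ELEMENT_NODE children, so
# Comment (and whitespace Text) nodes are never asked for a tagName.
from xml.dom.minidom import Node, parseString

def services_map(node, level=0):
    """Return printable lines for element children, ignoring XML comments."""
    lines = []
    for child in node.childNodes:
        if child.nodeType != Node.ELEMENT_NODE:
            continue  # Comment and Text instances have no tagName
        attrs = ", ".join(f"{k}={v}" for k, v in child.attributes.items())
        lines.append("  " * level + child.tagName + (": " + attrs if attrs else ""))
        lines.extend(services_map(child, level + 1))
    return lines

conf = """<rm>
  <resources>
    <!-- COMMENT -->
    <ip address="192.168.111.1" monitor_link="1"/>
  </resources>
</rm>"""
print("\n".join(services_map(parseString(conf).documentElement)))
```

Run against the test cluster.conf above, the comment is silently skipped and only the resources/ip elements are printed.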

Comment 6 Chris Feist 2015-03-03 23:27:28 UTC
Before Fix (using test_file created from cluster.conf in comment #5):
[root@ask-03 ~]# rpm -q ccs
ccs-0.16.2-75.el6.x86_64
[root@ask-03 ~]# ccs -f  test_file --lsservices
resources: 
Traceback (most recent call last):
  File "/usr/sbin/ccs", line 2450, in <module>
    main(sys.argv[1:])
  File "/usr/sbin/ccs", line 279, in main
    if (listservices): list_services()
  File "/usr/sbin/ccs", line 726, in list_services
    print_services_map(node,0)
  File "/usr/sbin/ccs", line 747, in print_services_map
    print_services_map(cn, level + 1)
  File "/usr/sbin/ccs", line 745, in print_services_map
    print prefix + node.tagName + ": " + nodeattr
AttributeError: Comment instance has no attribute 'tagName'


After Fix (using test_file created from cluster.conf in comment #5):
[root@ask-02 ~]# rpm -q ccs
ccs-0.16.2-77.el6.x86_64
[root@ask-02 ~]# ccs -f  test_file --lsservices
resources: 
  ip: monitor_link=1, address=192.168.111.1

Comment 9 errata-xmlrpc 2015-07-22 07:33:55 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1405.html

Comment 10 Chris Feist 2016-01-18 23:35:39 UTC
*** Bug 1199602 has been marked as a duplicate of this bug. ***

