Bug 1126872
| Summary: | ccs list services displays traceback (Comment instance has no attribute 'tagName') | | |
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 6 | Reporter: | Chester <knappch> |
| Component: | ricci | Assignee: | Chris Feist <cfeist> |
| Status: | CLOSED ERRATA | QA Contact: | cluster-qe <cluster-qe> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 6.5 | CC: | ccaulfie, cluster-maint, jruemker, rpeterso, rsteiger, teigland |
| Target Milestone: | rc | | |
| Target Release: | --- | | |
| Hardware: | x86_64 | | |
| OS: | Linux | | |
| Whiteboard: | | | |
| Fixed In Version: | ccs-0.16.2-77.el6.x86_64 | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2015-07-22 07:33:55 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1172231 | | |

Doc Text:

Cause: ccs does not properly handle comments in the cluster.conf file.
Consequence: Tracebacks can occur in ccs when listing services if XML comments exist in the services section of cluster.conf.
Fix: ccs now ignores any comments in the services/resources sections of cluster.conf instead of trying to parse them.
Result: ccs no longer produces a traceback when comments exist in the services or resources sections of cluster.conf.
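To make the cause concrete, here is a minimal sketch, independent of ccs and using only the standard `xml.dom.minidom` module, showing that a Comment child node has no `tagName` attribute, which is exactly the AttributeError reported below:

```python
# Demonstrates the root cause: xml.dom.minidom represents <!-- ... --> as a
# Comment node, and Comment nodes (unlike Element nodes) have no tagName.
from xml.dom import minidom

snippet = """<resources>
  <!-- COMMENT -->
  <ip address="192.168.111.1" monitor_link="1"/>
</resources>"""

resources = minidom.parseString(snippet).documentElement

for child in resources.childNodes:
    if child.nodeType == child.COMMENT_NODE:
        # This is what ccs ran into: accessing tagName on a Comment fails.
        try:
            child.tagName
        except AttributeError as err:
            print("comment node: %s" % err)
    elif child.nodeType == child.ELEMENT_NODE:
        print("element node: %s" % child.tagName)
    # Whitespace between elements shows up as Text nodes, which also have no
    # tagName, so a loop that blindly reads child.tagName fails on those too.
```

Checking `nodeType` before touching `tagName`, as above, is the general way to skip comments and whitespace when walking minidom child nodes.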
```
[root@qpid-mgmt-02 tmp]# clustat
Cluster Status for qpid-mgmt @ Tue Aug  5 09:39:45 2014
Member Status: Quorate

 Member Name                  ID   Status
 ------ ----                  ---- ------
 qpid-mgmt-01                    1 Online
 qpid-mgmt-02                    2 Online, Local
```

After rebooting, the cluster seems to be in a reliable state:

```
[root@qpid-mgmt-02 ~]# clustat
Cluster Status for qpid-mgmt @ Tue Aug  5 09:49:36 2014
Member Status: Quorate

 Member Name                  ID   Status
 ------ ----                  ---- ------
 qpid-mgmt-01                    1 Online, rgmanager
 qpid-mgmt-02                    2 Online, Local, rgmanager

 Service Name                        Owner (Last)     State
 ------- ----                        ----- ------     -----
 service:qpid-mgmt-01-service        qpid-mgmt-01     started
 service:qpid-mgmt-02-service        qpid-mgmt-02     started
 service:qpid-mgmt-primary-service   (qpid-mgmt-01)   recoverable
```

And listing the services works:

```
[root@qpid-mgmt-02 ~]# ccs -h qpid-mgmt-02 --lsnodes
qpid-mgmt-01: votes=1, nodeid=1
qpid-mgmt-02: votes=1, nodeid=2
```

So it seems the traceback is limited to the cluster being out of whack.

This error is occurring because ccs is not handling the comments in the cluster.conf file properly. As a temporary workaround, removing the comments from the service and resources sections will prevent this error from occurring (a rough scripted version of this workaround is sketched at the end of these comments).

Fix Upstream: https://github.com/feist/ccs/commit/c672a2386688da1166898ce5e088951b07c04ba7

To test, use a cluster.conf with these contents:

```xml
<?xml version="1.0"?>
<cluster config_version="20" name="a">
  <cman expected_votes="1" two_node="1"/>
  <clusternodes>
  </clusternodes>
  <fencedevices/>
  <rm>
    <resources>
      <!-- COMMENT -->
      <ip address="192.168.111.1" monitor_link="1"/>
    </resources>
  </rm>
</cluster>
```

And run `ccs -f /tmp/test.conf --lsservices`. Before the fix you get a traceback; after the fix you get this:

```
resources:
  ip: monitor_link=1, address=192.168.111.1
```

Before Fix (using test_file created from the cluster.conf in comment #5):

```
[root@ask-03 ~]# rpm -q ccs
ccs-0.16.2-75.el6.x86_64
[root@ask-03 ~]# ccs -f test_file --lsservices
resources:
Traceback (most recent call last):
  File "/usr/sbin/ccs", line 2450, in <module>
    main(sys.argv[1:])
  File "/usr/sbin/ccs", line 279, in main
    if (listservices): list_services()
  File "/usr/sbin/ccs", line 726, in list_services
    print_services_map(node,0)
  File "/usr/sbin/ccs", line 747, in print_services_map
    print_services_map(cn, level + 1)
  File "/usr/sbin/ccs", line 745, in print_services_map
    print prefix + node.tagName + ": " + nodeattr
AttributeError: Comment instance has no attribute 'tagName'
```

After Fix (using test_file created from the cluster.conf in comment #5):

```
[root@ask-02 ~]# rpm -q ccs
ccs-0.16.2-77.el6.x86_64
[root@ask-02 ~]# ccs -f test_file --lsservices
resources:
  ip: monitor_link=1, address=192.168.111.1
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2015-1405.html

*** Bug 1199602 has been marked as a duplicate of this bug. ***
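Returning to the temporary workaround mentioned above: removing the comments can also be scripted. The following is only an illustrative sketch, not a supported tool; it assumes the configuration lives at /etc/cluster/cluster.conf, writes a comment-free copy to a hypothetical path, and leaves bumping config_version and redistributing the file to the usual cluster tooling.

```python
# Illustrative workaround sketch (not a supported tool): strip XML comments
# from the <rm> section of a cluster.conf copy so that pre-fix ccs versions
# can list services without hitting the Comment/tagName traceback.
from xml.dom import minidom

def strip_comments(node):
    """Recursively remove Comment nodes from this subtree."""
    for child in list(node.childNodes):          # copy: we mutate while iterating
        if child.nodeType == child.COMMENT_NODE:
            node.removeChild(child)
        else:
            strip_comments(child)

doc = minidom.parse("/etc/cluster/cluster.conf")
for rm in doc.getElementsByTagName("rm"):
    strip_comments(rm)

# Write to a copy; propagating the change across the cluster (and bumping
# config_version) is deliberately left out of this sketch.
with open("/tmp/cluster.conf.nocomments", "w") as out:
    doc.writexml(out)
```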
Description of problem:
Attempting to list the services in a cluster can result in a traceback.

Version-Release number of selected component (if applicable):

```
[root@qpid-mgmt-02 tmp]# rpm -qa | grep -E "(ccs|rgmanager|ricci|cman|modclusterd|corosync)"
corosync-1.4.1-17.el6_5.1.x86_64
rgmanager-3.0.12.1-19.el6.x86_64
corosynclib-1.4.1-17.el6_5.1.x86_64
ricci-0.16.2-69.el6_5.1.x86_64
cman-3.0.12.1-59.el6_5.2.x86_64
ccs-0.16.2-69.el6_5.1.x86_64
```

How reproducible:
100%

Steps to Reproduce:
1. Set up a cluster.
2. Run `ccs -h qpid-mgmt-02 --lsservices`.

Actual results:

```
[root@qpid-mgmt-02 tmp]# ccs -h qpid-mgmt-02 --lsservices
service: name=qpid-mgmt-01-service, domain=qpid-mgmt-01-domain, recovery=restart
  script: ref=qpidd
service: name=qpid-mgmt-02-service, domain=qpid-mgmt-02-domain, recovery=restart
  script: ref=qpidd
service: name=qpid-mgmt-primary-service, exclusive=0, autostart=1, recovery=relocate
  script: ref=qpidd-primary
Traceback (most recent call last):
  File "/usr/sbin/ccs", line 2399, in <module>
    main(sys.argv[1:])
  File "/usr/sbin/ccs", line 265, in main
    if (listservices): list_services()
  File "/usr/sbin/ccs", line 673, in list_services
    print_services_map(node,0)
  File "/usr/sbin/ccs", line 696, in print_services_map
    print_services_map(cn, level + 1)
  File "/usr/sbin/ccs", line 694, in print_services_map
    print prefix + node.tagName + ": " + nodeattr
AttributeError: Comment instance has no attribute 'tagName'
```

Expected results:
Services are listed without a traceback.

Additional info:

```xml
[root@qpid-mgmt-02 tmp]# cat /etc/cluster/cluster.conf
<?xml version="1.0"?>
<cluster config_version="20" name="qpid-mgmt">
  <cman expected_votes="1" two_node="1"/>
  <clusternodes>
    <clusternode name="qpid-mgmt-01" nodeid="1" votes="1">
      <fence>
        <method name="single"/>
      </fence>
    </clusternode>
    <clusternode name="qpid-mgmt-02" nodeid="2" votes="1">
      <fence>
        <method name="single"/>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices/>
  <rm>
    <failoverdomains>
      <failoverdomain name="qpid-mgmt-01-domain" restricted="1">
        <failoverdomainnode name="qpid-mgmt-01"/>
      </failoverdomain>
      <failoverdomain name="qpid-mgmt-02-domain" restricted="1">
        <failoverdomainnode name="qpid-mgmt-02"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <!-- start a qpidd broker acting as a backup -->
      <script file="/etc/init.d/qpidd" name="qpidd"/>
      <!-- promote the qpidd broker on this node to primary -->
      <script file="/etc/init.d/qpidd-primary" name="qpidd-primary"/>
      <!-- assign a virtual IP address for qpid client traffic -->
      <!-- NOTE: ideally we want the client traffic to be in a separate subnet -->
      <ip address="192.168.111.1" monitor_link="1"/>
    </resources>
    <!-- service configuration -->
    <!-- There is a qpidd service on each node, it should be restarted if it fails -->
    <service domain="qpid-mgmt-01-domain" name="qpid-mgmt-01-service" recovery="restart">
      <script ref="qpidd"/>
    </service>
    <service domain="qpid-mgmt-02-domain" name="qpid-mgmt-02-service" recovery="restart">
      <script ref="qpidd"/>
    </service>
    <!-- There should always be a single qpidd-primary service, it can run on any node -->
    <service autostart="1" exclusive="0" name="qpid-mgmt-primary-service" recovery="relocate">
      <script ref="qpidd-primary"/>
      <!-- The primary has the IP addresses for brokers and clients to connect -->
      <ip ref="192.168.111.1"/>
    </service>
  </rm>
</cluster>
```
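To tie the traceback in "Actual results" back to this configuration, here is a hedged approximation of the kind of recursive listing that print_services_map performs. It is not the real ccs source and its output shape is only roughly comparable, but skipping non-element children (as the fix describes) lets it walk the <rm> section of the cluster.conf above without tripping over the comments.

```python
# Hedged approximation of a print_services_map-style walk (not the actual
# ccs code): recursively print every element under <rm> with its attributes,
# skipping Comment and whitespace Text nodes so the AttributeError from the
# traceback above cannot occur.
from xml.dom import minidom

def print_services_map(node, level=0):
    attrs = ", ".join("%s=%s" % (k, v) for k, v in node.attributes.items())
    print("  " * level + node.tagName + (": " + attrs if attrs else ":"))
    for child in node.childNodes:
        if child.nodeType == child.ELEMENT_NODE:   # skip comments and text
            print_services_map(child, level + 1)

doc = minidom.parse("/etc/cluster/cluster.conf")   # config from "Additional info"
for rm in doc.getElementsByTagName("rm"):
    for section in rm.childNodes:                  # failoverdomains, resources, service...
        if section.nodeType == section.ELEMENT_NODE:
            print_services_map(section)
```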