Bug 1250720

Summary: traceback when running 'pcs resource enable clvmd --wait'
Product: Red Hat Enterprise Linux 7
Component: pcs
Version: 7.2
Hardware: x86_64
OS: Linux
Severity: medium
Priority: medium
Status: CLOSED ERRATA
Reporter: Corey Marthaler <cmarthal>
Assignee: Chris Feist <cfeist>
QA Contact: cluster-qe <cluster-qe>
CC: cfeist, cluster-maint, rsteiger, tojeline
Target Milestone: rc
Fixed In Version: pcs-0.9.143-1.el7
Doc Type: Bug Fix
Type: Bug
Last Closed: 2015-11-19 09:38:24 UTC
Attachments:
- Patch to replace ra_id with resource

Description Corey Marthaler 2015-08-05 20:12:08 UTC
Description of problem:
[root@mckinley-01 ~]# pcs resource enable clvmd --wait
Traceback (most recent call last):
  File "/usr/sbin/pcs", line 215, in <module>
    main(sys.argv[1:])
  File "/usr/sbin/pcs", line 159, in main
    cmd_map[command](argv)
  File "/usr/lib/python2.7/site-packages/pcs/resource.py", line 90, in resource_cmd
    resource_enable(argv)
  File "/usr/lib/python2.7/site-packages/pcs/resource.py", line 2163, in resource_enable
    % ra_id
NameError: global name 'ra_id' is not defined




# Full output from script
Configuring pacemaker to start dlm on mckinley-01...pcs cluster cib > /tmp/tmp.kKQIpXAsl7
pcs -f /tmp/tmp.kKQIpXAsl7 property set no-quorum-policy=freeze
pcs -f /tmp/tmp.kKQIpXAsl7 resource create dlm controld op monitor interval=30s on-fail=fence clone interleave=true ordered=true
pcs -f /tmp/tmp.kKQIpXAsl7 resource create clvmd ocf:heartbeat:clvm with_cmirrord=1 op monitor interval=30s on-fail=fence clone interleave=true ordered=true
pcs -f /tmp/tmp.kKQIpXAsl7 constraint order start dlm-clone then clvmd-clone
pcs -f /tmp/tmp.kKQIpXAsl7 constraint colocation add clvmd-clone with dlm-clone
pcs cluster cib-push /tmp/tmp.kKQIpXAsl7
pcs resource enable clvmd --wait
Traceback (most recent call last):
  File "/usr/sbin/pcs", line 215, in <module>
    main(sys.argv[1:])
  File "/usr/sbin/pcs", line 159, in main
    cmd_map[command](argv)
  File "/usr/lib/python2.7/site-packages/pcs/resource.py", line 90, in resource_cmd
    resource_enable(argv)
  File "/usr/lib/python2.7/site-packages/pcs/resource.py", line 2163, in resource_enable
    % ra_id
NameError: global name 'ra_id' is not defined



Version-Release number of selected component (if applicable):
pcs-0.9.142-2.el7.x86_64

Comment 1 Chris Feist 2015-08-05 20:16:48 UTC
Created attachment 1059626 [details]
Patch to replace ra_id with resource

pcs raises a traceback while attempting to print an error message. Note that there is still an underlying error starting clvmd; that error is not caused by pcs.

Patch to fix the problem is attached.
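A minimal sketch of the bug pattern, assuming the failure mode described above (this is illustrative, not the actual code from pcs/resource.py): the error path formats its message with a name, ra_id, that is never bound in the function's scope, so printing the error itself raises a NameError. The fix is to format with the variable that is actually in scope.

```python
def enable_buggy(resource):
    """Buggy variant: the error message references 'ra_id',
    which is never defined, so this raises NameError instead
    of printing the intended error."""
    started = False  # pretend the --wait check found the resource not running
    if not started:
        return "Error: unable to start: '%s', please check logs" % ra_id


def enable_fixed(resource):
    """Fixed variant: format the message with the in-scope
    'resource' variable."""
    started = False
    if not started:
        return "Error: unable to start: '%s', please check logs" % resource
```

With the fix, the same failure produces a readable error message (as shown in Comment 2) rather than a traceback.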

Comment 2 Tomas Jelinek 2015-08-10 13:22:58 UTC
Before Fix:
[root@rh71-node1 ~]# rpm -q pcs
pcs-0.9.142-2.el7.x86_64
# apache is not installed on the node
[root@rh71-node1:~]# pcs resource create apa apache --disabled
[root@rh71-node1:~]# pcs resource enable apa --wait
Traceback (most recent call last):
  File "/usr/sbin/pcs", line 215, in <module>
    main(sys.argv[1:])
  File "/usr/sbin/pcs", line 159, in main
    cmd_map[command](argv)
  File "/usr/lib/python2.7/site-packages/pcs/resource.py", line 90, in resource_cmd
    resource_enable(argv)
  File "/usr/lib/python2.7/site-packages/pcs/resource.py", line 2163, in resource_enable
    % ra_id
NameError: global name 'ra_id' is not defined



After Fix:
[root@rh71-node1:~]# rpm -q pcs
pcs-0.9.143-1.el7.x86_64
# apache is not installed on the node
[root@rh71-node1:~]# pcs resource create apa apache --disabled
[root@rh71-node1:~]# pcs resource enable apa --wait
Error: unable to start: 'apa', please check logs for failure information
Resource 'apa' is not running on any node

Comment 6 errata-xmlrpc 2015-11-19 09:38:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2015-2290.html