Bug 1287311
| Summary: | [fence_compute] Support per-plug status in combination with record-only | ||
|---|---|---|---|
| Product: | Red Hat Enterprise Linux 7 | Reporter: | Andrew Beekhof <abeekhof> |
| Component: | fence-agents | Assignee: | Oyvind Albrigtsen <oalbrigt> |
| Status: | CLOSED ERRATA | QA Contact: | Asaf Hirshberg <ahirshbe> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 7.2 | CC: | abeekhof, cluster-maint, mgrac, oalbrigt, oblaut |
| Target Milestone: | rc | ||
| Target Release: | --- | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | fence-agents-4.0.11-41.el7 | Doc Type: | Bug Fix |
| Doc Text: | Story Points: | --- | |
| Clone Of: | Environment: | ||
| Last Closed: | 2016-11-04 04:48:58 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | Category: | --- | |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Other agents return exit code 0 for ON and 2 for OFF when --action=status. The pull request has an even simpler fix: https://github.com/ClusterLabs/fence-agents/pull/33

Andrew, it seems like it wants a username/password. I tried "hacluster" and "redhat" for both, but it failed:
[root@overcloud-controller-1 ~]# fence_compute -o status -n i-dont-exist [...]; echo $?
Traceback (most recent call last):
File "/sbin/fence_compute", line 352, in <module>
main()
File "/sbin/fence_compute", line 334, in main
options["--username"],
KeyError: '--username'
1
[root@overcloud-controller-1 ~]# fence_compute --username hacluster -o status -n i-dont-exist [...]; echo $?
Traceback (most recent call last):
File "/sbin/fence_compute", line 352, in <module>
main()
File "/sbin/fence_compute", line 335, in main
options["--password"],
KeyError: '--password'
1
[root@overcloud-controller-1 ~]# fence_compute --username hacluster --password hacluster -o status -n i-dont-exist [...]; echo $?
Traceback (most recent call last):
File "/sbin/fence_compute", line 352, in <module>
main()
File "/sbin/fence_compute", line 348, in main
result = fence_action(None, options, set_power_status, get_power_status, get_plugs_list, None)
File "/usr/share/fence/fencing.py", line 964, in fence_action
status = get_multi_power_fn(tn, options, get_power_fn)
File "/usr/share/fence/fencing.py", line 871, in get_multi_power_fn
plug_status = get_power_fn(tn, options)
File "/sbin/fence_compute", line 48, in get_power_status
except ConnectionError as (err):
NameError: global name 'ConnectionError' is not defined
1
[root@overcloud-controller-1 ~]#
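The final traceback above dies on `NameError: global name 'ConnectionError' is not defined`. A minimal sketch of the underlying issue, assuming the agent catches connection failures from the `requests` library (the helper name `get_power_status` below mirrors the traceback but the body is illustrative): on Python 2 there is no builtin `ConnectionError`, so the name must be imported (e.g. from `requests.exceptions`) before it can appear in an `except` clause.

```python
# On Python 2, ConnectionError is not a builtin; referencing it in an
# "except" clause without importing it raises the NameError seen above.
# Import it explicitly; fall back to the Python 3 builtin for this sketch.
try:
    from requests.exceptions import ConnectionError
except ImportError:
    pass  # Python 3 without requests: the builtin ConnectionError exists

def get_power_status(fetch):
    """Return the plug state, or None if the API is unreachable."""
    try:
        return fetch()
    except ConnectionError:
        # Caller maps None to "Unable to obtain correct plug status".
        return None

print(get_power_status(lambda: "on"))
```

With the name properly imported, a connection failure is reported as a status error instead of crashing the agent with a `NameError`.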
Do I need to add those arguments, or do I need to do something prior to the command and omit them?
Oh, "[...]" is supposed to mean the nova login details:
e.g.
--username admin --password b64FZzF4gHz73jHFB9hFx7ysG --tenant-name admin --auth-url http://192.0.2.6:5000/v2.0
Andrew, can you please advise:

[heat-admin@overcloud-controller-0 ~]$ fence_compute -o status -n i-dont-exist --username admin --password HgwsgmZRB4jMQw8mXcXtZqNWr --tenant-name admin --auth-url http://10.35.180.10:5000/v2.0; echo $?
Traceback (most recent call last):
File "/usr/sbin/fence_compute", line 434, in <module>
main()
File "/usr/sbin/fence_compute", line 397, in main
fix_plug_name(options)
File "/usr/sbin/fence_compute", line 273, in fix_plug_name
fix_domain(options)
File "/usr/sbin/fence_compute", line 216, in fix_domain
last_domain = nil
NameError: global name 'nil' is not defined
1
[heat-admin@overcloud-controller-0 ~]$

using:

[heat-admin@overcloud-controller-0 ~]$ rpm -qa | grep fence-agents-compute
fence-agents-compute-4.0.11-39.el7.x86_64

New build to fix the 'nil' issue.

New build with patch from: https://github.com/ClusterLabs/fence-agents/commit/6a2f0f2b24233ddfdd8672e380e697a425af3ed7

I tried the latest build and got the following (using -vvv):

[heat-admin@overcloud-controller-2 ~]$ fence_compute -o status -n i-dont-exist --username admin --password HgwsgmZRB4jMQw8mXcXtZqNWr --tenant-name admin --auth-url http://10.35.180.10:5000/v2.0 -vvv; echo $?
Could not calculate the domain names used by compute nodes in nova
Checking target 'i-dont-exist' against calculated domain 'None'
get action: status
Starting new HTTP connection (1): 10.35.180.10
"POST /v2.0/tokens HTTP/1.1" 200 1065
Starting new HTTP connection (1): 172.17.0.10
"GET /v2.1/e194873b1b7b4193b938651170b7bd12/os-services?host=i-dont-exist HTTP/1.1" 200 16
Failed: Unable to obtain correct plug status or plug is not available
[heat-admin@overcloud-controller-2 ~]$ rpm -qa | grep fence-agents-compute
fence-agents-compute-4.0.11-42.el7.x86_64
[heat-admin@overcloud-controller-2 ~]$ rpm -qa | grep fence-agents-common
fence-agents-common-4.0.11-42.el7.x86_64
[heat-admin@overcloud-controller-2 ~]$

Looks correct to me. Drop the -vvv and see if you get the expected output from the description...
Expected results: 1

Andrew, what about the "Failed: Unable to obtain..." message/error? Is it expected?

[heat-admin@overcloud-controller-2 ~]$ fence_compute -o status -n i-dont-exist --username admin --password HgwsgmZRB4jMQw8mXcXtZqNWr --tenant-name admin --auth-url http://10.35.180.10:5000/v2.0; echo $?
Could not calculate the domain names used by compute nodes in nova
Failed: Unable to obtain correct plug status or plug is not available
1
[heat-admin@overcloud-controller-2 ~]$

An error message seems reasonable under the circumstances. The important part is that the call now fails instead of reporting success.

Verified.

comment #15:
> Andrew,
>
> What about the "Failed: Unable to obtain..." message/error? is it expected?
>
> [heat-admin@overcloud-controller-2 ~]$ fence_compute -o status -n
> i-dont-exist --username admin --password HgwsgmZRB4jMQw8mXcXtZqNWr
> --tenant-name admin --auth-url http://10.35.180.10:5000/v2.0; echo $?
> Could not calculate the domain names used by compute nodes in nova
> Failed: Unable to obtain correct plug status or plug is not available
>
> 1
> [heat-admin@overcloud-controller-2 ~]$

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2373.html
Description of problem:
Currently record-only causes -o status to always return true. This isn't really a good idea.

Version-Release number of selected component (if applicable):
fence-agents-all-4.0.11-27.el7.x86_64

How reproducible:
100%

Steps to Reproduce:
1. fence_compute -o status -n i-dont-exist [...]; echo $?

Actual results:
0

Expected results:
1

Additional info:

--- /sbin/fence_compute.orig	2015-11-26 20:09:12.165071987 -0500
+++ /sbin/fence_compute	2015-12-01 18:02:48.252544729 -0500
@@ -117,15 +119,14 @@
 	hypervisors = nova.hypervisors.list()
 	for hypervisor in hypervisors:
 		longhost = hypervisor.hypervisor_hostname
-		if options["--action"] == "list" and options["--domain"] != "":
-			shorthost = longhost.replace("." + options["--domain"],
-				"")
+		if options["--domain"] != "":
+			shorthost = longhost.replace("." + options["--domain"], "")
+			result[longhost] = ("", None)
 			result[shorthost] = ("", None)
 		else:
 			result[longhost] = ("", None)
 	return result
-
 def define_new_opts():
 	all_opt["endpoint-type"] = {
 		"getopt" : "e:",
@@ -222,8 +223,14 @@
 		set_attrd_status(options["--plug"], "yes", options)
 		sys.exit(0)
-	elif options["--action"] in ["status", "monitor"]:
-		sys.exit(0)
+	elif options["--action"] == "monitor":
+		sys.exit(0)
+
+	elif options["--action"] == "status":
+		plugs = get_plugs_list(None, options)
+		if plugs.has_key(options["--plug"]):
+			sys.exit(0)
+		sys.exit(1)

 	# The first argument is the Nova client version
 	nova = nova_client.Client('2',
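The logic of the patch above can be sketched as a standalone Python 3 snippet (the real agent is Python 2 and uses dict.has_key(); the helper names here are illustrative stand-ins for get_plugs_list() and the status branch, not the agent's actual API):

```python
# Hypothetical condensation of the patched "status" path: instead of
# unconditionally exiting 0 under record-only mode, the agent now looks the
# requested plug up in the list built from nova's hypervisors, indexing each
# hypervisor by both its long name and its domain-stripped short name.
def plugs_from_hypervisors(hostnames, domain=""):
    """Mimic get_plugs_list(): map long and short hostnames to plug entries."""
    result = {}
    for longhost in hostnames:
        result[longhost] = ("", None)
        if domain:
            result[longhost.replace("." + domain, "")] = ("", None)
    return result

def status_exit_code(plug, plugs):
    """Exit code for -o status: 0 if the plug is known, 1 otherwise."""
    return 0 if plug in plugs else 1

plugs = plugs_from_hypervisors(["compute-0.localdomain"], domain="localdomain")
print(status_exit_code("compute-0", plugs))     # known short name -> 0
print(status_exit_code("i-dont-exist", plugs))  # unknown plug -> 1
```

This matches the expected results above: a nonexistent plug such as `i-dont-exist` now yields exit code 1 instead of the unconditional 0 that record-only mode used to return.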