Bug 1287311 - [fence_compute] Support per-plug status in combination with record-only
Status: CLOSED ERRATA
Product: Red Hat Enterprise Linux 7
Classification: Red Hat
Component: fence-agents
Version: 7.2
Priority: unspecified
Severity: medium
Target Milestone: rc
Assigned To: Oyvind Albrigtsen
QA Contact: Asaf Hirshberg
Reported: 2015-12-01 18:08 EST by Andrew Beekhof
Modified: 2016-11-04 00:48 EDT
Fixed In Version: fence-agents-4.0.11-41.el7
Doc Type: Bug Fix
Last Closed: 2016-11-04 00:48:58 EDT
Type: Bug

Description Andrew Beekhof 2015-12-01 18:08:06 EST
Description of problem:

Currently record-only causes -o status to always return true.
This isn't really a good idea.

Version-Release number of selected component (if applicable):

fence-agents-all-4.0.11-27.el7.x86_64

How reproducible:

100%

Steps to Reproduce:
1. fence_compute -o status -n i-dont-exist [...]; echo $?

Actual results:

0

Expected results:

1

Additional info:

--- /sbin/fence_compute.orig	2015-11-26 20:09:12.165071987 -0500
+++ /sbin/fence_compute	2015-12-01 18:02:48.252544729 -0500
@@ -117,15 +119,14 @@
 		hypervisors = nova.hypervisors.list()
 		for hypervisor in hypervisors:
 			longhost = hypervisor.hypervisor_hostname
-			if options["--action"] == "list" and options["--domain"] != "":
-				shorthost = longhost.replace("." + options["--domain"],
-                                                 "")
+			if options["--domain"] != "":
+				shorthost = longhost.replace("." + options["--domain"], "")
+				result[longhost] = ("", None)
 				result[shorthost] = ("", None)
 			else:
 				result[longhost] = ("", None)
 	return result
 
-
 def define_new_opts():
 	all_opt["endpoint-type"] = {
 		"getopt" : "e:",
@@ -222,8 +223,14 @@
 			set_attrd_status(options["--plug"], "yes", options)
 			sys.exit(0)
 
-		elif options["--action"] in ["status", "monitor"]:
- 			sys.exit(0)
+		elif options["--action"] == "monitor":
+			sys.exit(0)
+
+		elif options["--action"] == "status":
+			plugs = get_plugs_list(None, options)
+			if plugs.has_key(options["--plug"]):
+ 				sys.exit(0)
+			sys.exit(1)
 
 	# The first argument is the Nova client version
 	nova = nova_client.Client('2',
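The patch above can be sketched as a small stand-alone function. This is an illustration only, not the agent's actual code: `dict.has_key()` from the patch is Python 2 (removed in Python 3, where `in` is used instead), and `fake_plugs` is a hypothetical stand-in for the agent's real `get_plugs_list()`, which queries nova.

```python
import sys

def status_action(options, get_plugs_list):
    """Exit 0 if the requested plug is a known hypervisor, 1 otherwise."""
    plugs = get_plugs_list(None, options)
    if options["--plug"] in plugs:  # has_key() in the Python 2 patch
        sys.exit(0)
    sys.exit(1)

def fake_plugs(_conn, _options):
    # Hypothetical stand-in for get_plugs_list(); the real one queries nova
    return {"compute-0.example.com": ("", None)}

for plug in ("i-dont-exist", "compute-0.example.com"):
    try:
        status_action({"--plug": plug}, fake_plugs)
    except SystemExit as e:
        print("%s -> exit %d" % (plug, e.code))
```

With this logic, `-o status -n i-dont-exist` exits 1 instead of unconditionally exiting 0 in record-only mode.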
Comment 1 Marek Grac 2015-12-02 03:01:13 EST
Other agents return exit code 0 for ON and 2 for OFF when --action=status.
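The convention Marek describes can be shown as a minimal mapping. This is an illustrative sketch, not fence-agents code; the use of 1 for an unknown or failed status here is an assumption, not something stated in this bug.

```python
def status_exit_code(power_state):
    # 0 for ON and 2 for OFF per the fence-agent status convention;
    # 1 for unknown/error is an assumption for this sketch
    return {"on": 0, "off": 2}.get(power_state, 1)

print(status_exit_code("on"))
print(status_exit_code("off"))
print(status_exit_code("missing"))
```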
Comment 2 Andrew Beekhof 2015-12-02 21:37:04 EST
Pull request has an even simpler fix:

   https://github.com/ClusterLabs/fence-agents/pull/33
Comment 5 Asaf Hirshberg 2016-06-18 23:55:59 EDT
Andrew, it seems like it needs a username/password. I tried "hacluster" and "redhat" for both, but it failed:

[root@overcloud-controller-1 ~]# fence_compute -o status -n i-dont-exist [...]; echo $?
Traceback (most recent call last):
  File "/sbin/fence_compute", line 352, in <module>
    main()
  File "/sbin/fence_compute", line 334, in main
    options["--username"],
KeyError: '--username'
1
[root@overcloud-controller-1 ~]# fence_compute  --username hacluster -o status -n i-dont-exist [...]; echo $?
Traceback (most recent call last):
  File "/sbin/fence_compute", line 352, in <module>
    main()
  File "/sbin/fence_compute", line 335, in main
    options["--password"],
KeyError: '--password'
1
[root@overcloud-controller-1 ~]# fence_compute  --username hacluster --password hacluster -o status -n i-dont-exist [...]; echo $?
Traceback (most recent call last):
  File "/sbin/fence_compute", line 352, in <module>
    main()
  File "/sbin/fence_compute", line 348, in main
    result = fence_action(None, options, set_power_status, get_power_status, get_plugs_list, None)
  File "/usr/share/fence/fencing.py", line 964, in fence_action
    status = get_multi_power_fn(tn, options, get_power_fn)
  File "/usr/share/fence/fencing.py", line 871, in get_multi_power_fn
    plug_status = get_power_fn(tn, options)
  File "/sbin/fence_compute", line 48, in get_power_status
    except ConnectionError as (err):
NameError: global name 'ConnectionError' is not defined
1
[root@overcloud-controller-1 ~]# 

Do I need to add those arguments, or do I need to do something prior to the command and omit them?
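The final traceback in this comment is worth unpacking: `except ConnectionError` fails with a NameError because the agent runs under Python 2, where `ConnectionError` is not a builtin, so the name lookup fails as soon as the except clause is evaluated. The sketch below demonstrates the Python 3 behavior, where the name does exist as a builtin OSError subclass; a Python 2 agent would need an explicit import (for example from `requests.exceptions`) for the clause to work. The error message string is made up for illustration.

```python
# Under Python 3, ConnectionError is a builtin OSError subclass, so this
# except clause resolves; under Python 2 the same code raises NameError.
try:
    raise ConnectionError("nova endpoint unreachable")  # illustrative message
except ConnectionError as err:
    print("caught: %s" % err)
```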
Comment 6 Andrew Beekhof 2016-06-19 21:31:59 EDT
Oh, "[...]" is supposed to mean the nova login details:

eg.
    --username admin --password b64FZzF4gHz73jHFB9hFx7ysG --tenant-name admin --auth-url http://192.0.2.6:5000/v2.0
Comment 7 Asaf Hirshberg 2016-06-29 03:49:43 EDT
Andrew, can you please advise:

[heat-admin@overcloud-controller-0 ~]$ fence_compute -o status -n i-dont-exist --username admin --password HgwsgmZRB4jMQw8mXcXtZqNWr --tenant-name admin --auth-url http://10.35.180.10:5000/v2.0; echo $?
Traceback (most recent call last):
  File "/usr/sbin/fence_compute", line 434, in <module>
    main()
  File "/usr/sbin/fence_compute", line 397, in main
    fix_plug_name(options)
  File "/usr/sbin/fence_compute", line 273, in fix_plug_name
    fix_domain(options)
  File "/usr/sbin/fence_compute", line 216, in fix_domain
    last_domain = nil
NameError: global name 'nil' is not defined
1
[heat-admin@overcloud-controller-0 ~]$ 


using:
[heat-admin@overcloud-controller-0 ~]$ rpm -qa |grep fence-agents-compute
fence-agents-compute-4.0.11-39.el7.x86_64
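This NameError has the same shape as the earlier `ConnectionError` one: `nil` is Ruby's null value, which Python spells `None`, so the assignment in `fix_domain()` blows up at runtime. A one-line illustration of the distinction (not the actual patched code):

```python
# Python's null value is None; the bare name `nil` raises NameError.
last_domain = None
print(last_domain is None)
```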
Comment 8 Oyvind Albrigtsen 2016-07-04 09:51:24 EDT
New build to fix the 'nil' issue.
Comment 12 Oyvind Albrigtsen 2016-07-06 04:39:59 EDT
New build with patch from: https://github.com/ClusterLabs/fence-agents/commit/6a2f0f2b24233ddfdd8672e380e697a425af3ed7
Comment 13 Asaf Hirshberg 2016-07-12 05:30:26 EDT
I tried the latest build and got the following (using -vvv): 

[heat-admin@overcloud-controller-2 ~]$ fence_compute -o status -n i-dont-exist --username admin --password HgwsgmZRB4jMQw8mXcXtZqNWr --tenant-name admin --auth-url http://10.35.180.10:5000/v2.0 -vvv; echo $? 
Could not calculate the domain names used by compute nodes in nova
Checking target 'i-dont-exist' against calculated domain 'None'
get action: status
Starting new HTTP connection (1): 10.35.180.10
"POST /v2.0/tokens HTTP/1.1" 200 1065
Starting new HTTP connection (1): 172.17.0.10
"GET /v2.1/e194873b1b7b4193b938651170b7bd12/os-services?host=i-dont-exist HTTP/1.1" 200 16
Failed: Unable to obtain correct plug status or plug is not available


[heat-admin@overcloud-controller-2 ~]$ rpm -qa|grep fence-agents-compute
fence-agents-compute-4.0.11-42.el7.x86_64
[heat-admin@overcloud-controller-2 ~]$ rpm -qa|grep fence-agents-common
fence-agents-common-4.0.11-42.el7.x86_64
[heat-admin@overcloud-controller-2 ~]$
Comment 14 Andrew Beekhof 2016-07-12 20:25:37 EDT
Looks correct to me.  Drop the -vvv and see if you get the expected output from the description...

Expected results:

1
Comment 15 Asaf Hirshberg 2016-07-13 02:49:15 EDT
Andrew,

What about the "Failed: Unable to obtain..." message/error? Is it expected?

 [heat-admin@overcloud-controller-2 ~]$ fence_compute -o status -n i-dont-exist --username admin --password HgwsgmZRB4jMQw8mXcXtZqNWr --tenant-name admin --auth-url http://10.35.180.10:5000/v2.0; echo $? 
Could not calculate the domain names used by compute nodes in nova
Failed: Unable to obtain correct plug status or plug is not available


1
[heat-admin@overcloud-controller-2 ~]$
Comment 16 Andrew Beekhof 2016-07-13 19:27:51 EDT
An error message seems reasonable under the circumstances.
The important part is that the call now fails instead of reporting success.
Comment 17 Asaf Hirshberg 2016-07-13 23:47:56 EDT
Verified. See comment #15:

> Andrew,
> 
> What about the "Failed: Unable to obtain..." message/error? is it expected?
> 
>  [heat-admin@overcloud-controller-2 ~]$ fence_compute -o status -n
> i-dont-exist --username admin --password HgwsgmZRB4jMQw8mXcXtZqNWr
> --tenant-name admin --auth-url http://10.35.180.10:5000/v2.0; echo $? 
> Could not calculate the domain names used by compute nodes in nova
> Failed: Unable to obtain correct plug status or plug is not available
> 
> 
> 1
> [heat-admin@overcloud-controller-2 ~]$
Comment 19 errata-xmlrpc 2016-11-04 00:48:58 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2016-2373.html
