Bug 1271250

Summary: When introspection fails Ironic must mark Discovery as Failed
Product: Red Hat OpenStack
Component: python-rdomanager-oscplugin
Version: 7.0 (Kilo)
Target Release: 8.0 (Liberty)
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: high
Priority: high
Keywords: TestOnly, UserExperience
Reporter: Ofer Blaut <oblaut>
Assignee: Brad P. Crochet <brad>
QA Contact: Raviv Bar-Tal <rbartal>
CC: ddomingo, dtantsur, hbrock, jslagle, mburns, mcornea, oblaut, rhel-osp-director-maint
Doc Type: Bug Fix
Doc Text:
In previous releases, a bug made it possible for failed nodes to be marked as available. Whenever this occurred, deployments failed because the nodes were not in a proper state. This update backports an upstream patch to fix the bug.
Last Closed: 2016-04-07 21:41:40 UTC
Type: Bug

Description Ofer Blaut 2015-10-13 13:30:27 UTC
Created attachment 1082427 [details]
output

Description of problem:

openstack baremetal introspection bulk start finishes with "Discovery completed" even though introspection actually failed (I thought the "ERROR: rdomanager_oscplug..." issue was solved).


Version-Release number of selected component (if applicable):


How reproducible:


Steps to Reproduce:
1. Deploy a virt setup with:
   export TESTENV_ARGS="--baremetal-bridge-names 'brbm brbm1 brbm2'"
   (this triggers a failed introspection - https://bugzilla.redhat.com/show_bug.cgi?id=1234601)
2. Check the last line of the output.
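Step 2 can be scripted. A minimal sketch of checking only the final output line; run_introspection here is a hypothetical stand-in that prints the buggy output described above, not the real client:

```shell
# Stand-in reproducing the reported behavior: the final line claims
# success even though introspection actually failed.
run_introspection() {
    echo "ERROR: rdomanager_oscplug..."
    echo "Discovery completed"
}

# Grab the last line of the output, which step 2 says to check.
last_line=$(run_introspection | tail -n 1)
echo "$last_line"
```

With the bug present, the last line reads "Discovery completed" despite the earlier error, which is exactly what this report flags.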

Actual results:


Expected results:


Additional info:

Comment 1 Mike Burns 2015-10-15 16:11:48 UTC
Ofer, I'm confused: the summary and the description seem to say opposite things.

I think the description is right. If introspection fails, the CLI should report failure. I'm updating the summary; please confirm.

Comment 3 Ofer Blaut 2015-10-21 10:31:26 UTC
You are right :-)

Comment 4 Brad P. Crochet 2016-02-08 13:27:18 UTC
This should be fixed. Please confirm.

Comment 7 Dmitry Tantsur 2016-03-29 15:31:05 UTC
I've simulated an error by modifying the ironic-inspector source to raise an exception. The results show the expected error message, and the nodes stay in the "manageable" state:

[stack@instack ~]$ openstack baremetal introspection bulk start
Setting nodes for introspection to manageable...
Starting introspection of node: 792c0e83-26ec-4ad4-a775-b1021ea2fdda
Starting introspection of node: 9f471f94-7535-44f0-a1ef-1b75a1ebbd3a
Waiting for introspection to finish...
Introspection for UUID 792c0e83-26ec-4ad4-a775-b1021ea2fdda finished with error: Unexpected exception in background introspection thread
Introspection for UUID 9f471f94-7535-44f0-a1ef-1b75a1ebbd3a finished with error: Unexpected exception in background introspection thread
Setting manageable nodes to available...
Introspection completed with errors:
792c0e83-26ec-4ad4-a775-b1021ea2fdda: Unexpected exception in background introspection thread
9f471f94-7535-44f0-a1ef-1b75a1ebbd3a: Unexpected exception in background introspection thread
[stack@instack ~]$ echo $?
1
[stack@instack ~]$ ironic node-list
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
| UUID                                 | Name   | Instance UUID                        | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
| 792c0e83-26ec-4ad4-a775-b1021ea2fdda | node-2 | None                                 | power off   | manageable         | False       |
| 9f471f94-7535-44f0-a1ef-1b75a1ebbd3a | node-3 | None                                 | power off   | manageable         | False       |
+--------------------------------------+--------+--------------------------------------+-------------+--------------------+-------------+
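The transcript above shows the fixed client exiting with status 1 on failure (echo $? -> 1), so automation can gate a deployment on that status. A minimal sketch; the failing introspect function is a hypothetical stand-in for "openstack baremetal introspection bulk start", not the real client:

```shell
# Stand-in for `openstack baremetal introspection bulk start`; `false`
# simulates the non-zero exit status the fixed client returns on error.
introspect() { false; }

# Gate the next deployment step on the introspection exit status.
if introspect; then
    echo "Introspection succeeded; continuing with deployment."
else
    echo "Introspection failed; nodes remain manageable. Aborting."
fi
```

Before this fix, the command exited 0 and moved nodes to "available" even on failure, so a gate like this could not catch the problem.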

Comment 9 errata-xmlrpc 2016-04-07 21:41:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHEA-2016-0604.html