Bug 1723176 - [FFU] FFU upgrade prepare failed "Timeout for heat deployment 'copy_ssh_key'" with custom stack name
Summary: [FFU] FFU upgrade prepare failed "Timeout for heat deployment 'copy_ssh_key'" ...
Keywords:
Status: CLOSED WORKSFORME
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-tripleoclient
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: Carlos Camacho
QA Contact: Ronnie Rasouli
URL:
Whiteboard:
Depends On:
Blocks: 1599764
 
Reported: 2019-06-23 15:06 UTC by Ronnie Rasouli
Modified: 2019-07-15 12:44 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-07-15 12:44:07 UTC
Target Upstream Version:
Embargoed:


Attachments
Mistral executor log (2.73 MB, text/plain)
2019-06-23 15:06 UTC, Ronnie Rasouli

Description Ronnie Rasouli 2019-06-23 15:06:03 UTC
Created attachment 1583719 [details]
Mistral executor log

Description of problem:

The FFU upgrade prepare step fails with a timeout on the Heat 'copy_ssh_key' deployment that copies the SSH key to the nodes.

The overcloud stack name is different from the default "overcloud":

openstack stack list
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| ID                                   | Stack Name | Project                          | Stack Status    | Creation Time        | Updated Time |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| 9309ef01-cc98-49cf-b8cd-9f4bc6ea6ab1 | qe-Cloud-0 | 67717731e2014d57844975b09b9c4219 | CREATE_COMPLETE | 2019-06-23T11:57:25Z | None         |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
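For completeness, a quick way to cross-check which deployment plan the undercloud workflows actually know about (a hedged aside; this assumes the standard tripleoclient plan commands on the undercloud):

source ~/stackrc
# List the TripleO deployment plans stored in Mistral; the plan name should
# match the custom stack name (qe-Cloud-0), not the default "overcloud".
openstack overcloud plan list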



Version-Release number of selected component (if applicable):
2019-06-20.1

How reproducible:
100%

Steps to Reproduce:
1. Deploy RHOS 10 with 3 controllers, 3 Ceph, 2 compute, and a custom stack name
2. Start the FFU upgrade procedure
3. Run the FFU upgrade prepare command (a hedged example follows below)
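
For reference, the prepare invocation in step 3 might look like the following. This is a hedged sketch only: the environment files are deployment-specific and /home/stack/custom-envs.yaml is a placeholder, not a file from this setup, and whether the custom stack name is then propagated to the Mistral workflows is precisely what this report questions.

source ~/stackrc
# FFU (10 -> 13) prepare step, passing the custom stack name explicitly.
openstack overcloud ffwd-upgrade prepare \
  --stack qe-Cloud-0 \
  --templates \
  -e /home/stack/custom-envs.yaml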

Actual results:
FFU prepare fails

Expected results:
no errors

Additional info:

APIException: Environment not found [name=overcloud]

2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor [-] Failed to run action [action_ex_id=1ddd4bfd-1abb-4573-be9c-58a05f24b951, action_cls='<class 'mistral.actions.action_factory.MistralAction'>', attributes='{u'client_method_name': u'environments.get'}', params='{u'name': u'overcloud'}']
 MistralAction.environments.get failed: <class 'mistralclient.api.base.APIException'>: Environment not found [name=overcloud]
2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor Traceback (most recent call last):
2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor   File "/usr/lib/python2.7/site-packages/mistral/engine/default_executor.py", line 90, in run_action
2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor     result = action.run()
2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor   File "/usr/lib/python2.7/site-packages/mistral/actions/openstack/base.py", line 142, in run
2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor     (self.__class__.__name__, self.client_method_name, e_str)
2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor ActionException: MistralAction.environments.get failed: <class 'mistralclient.api.base.APIException'>: Environment not found [name=overcloud]
2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor
2019-06-23 07:32:43.043 31158 INFO mistral.engine.rpc_backend.rpc [-] Received RPC request 'run_action'[rpc_ctx=MistralContext {u'project_name': u'admin', u'user_id': u'5c5c1de2b
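
To illustrate what the traceback amounts to (a hedged sketch, assuming the python-mistralclient CLI is available on the undercloud): the workflow asks Mistral for an environment under the hard-coded default name instead of the custom stack name.

# Environments known to Mistral on the undercloud:
mistral environment-list

# What the workflow effectively does -- fails, because no environment named
# "overcloud" exists in this deployment:
mistral environment-get overcloud

# The environment that does exist follows the custom stack name:
mistral environment-get qe-Cloud-0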

Comment 3 Jose Luis Franco 2019-07-08 13:13:11 UTC
Hello Ronnie,

Could you reproduce this issue in the FFWD job? When it occurred, we tried to debug it but couldn't identify useful logs or a lead. If you manage to reproduce it, please let us know; otherwise, could we close this bug?

Comment 4 Jiri Stransky 2019-07-15 12:44:07 UTC
We discussed this on the triage call; there has been no reproducer, so let's reopen this bug when we get one.

