Bug 1723176

Summary: [FFU] FFU upgrade prepare failed "Timeout for heat deployment 'copy_ssh_key'" with custom stack name

Product: Red Hat OpenStack
Component: python-tripleoclient
Version: 10.0 (Newton)
Reporter: Ronnie Rasouli <rrasouli>
Assignee: Carlos Camacho <ccamacho>
QA Contact: Ronnie Rasouli <rrasouli>
CC: hbrock, jfrancoa, jslagle, jstransk, lbezdick, mburns
Status: CLOSED WORKSFORME
Severity: high
Priority: unspecified
Keywords: Regression
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2019-07-15 12:44:07 UTC
Bug Blocks: 1599764

Attachments:
Mistral executor log (flags: none)

Description Ronnie Rasouli 2019-06-23 15:06:03 UTC
Created attachment 1583719 [details]
Mistral executor log

Description of problem:

The FFU upgrade prepare step fails on the 'copy_ssh_key' heat deployment, which copies the SSH key to the nodes.

The overcloud stack name is different from the default "overcloud":

openstack stack list
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| ID                                   | Stack Name | Project                          | Stack Status    | Creation Time        | Updated Time |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
| 9309ef01-cc98-49cf-b8cd-9f4bc6ea6ab1 | qe-Cloud-0 | 67717731e2014d57844975b09b9c4219 | CREATE_COMPLETE | 2019-06-23T11:57:25Z | None         |
+--------------------------------------+------------+----------------------------------+-----------------+----------------------+--------------+
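
For reference, on a Newton undercloud the deployment plan artifacts are normally created under the same name as the stack. A minimal way to check this (a sketch, assuming the python-mistralclient OSC plugin and the Swift client are available on the undercloud):

source ~/stackrc

# The plan's Swift container follows the custom stack name
openstack container list
# -> qe-Cloud-0

# The Mistral environment for the plan also uses the custom name
openstack workflow env list
# -> qe-Cloud-0 is listed; there is no "overcloud" entry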



Version-Release number of selected component (if applicable):
2019-06-20.1

How reproducible:
100%

Steps to Reproduce:
1. Deploy RHOS 10 with 3 controllers, 3 Ceph nodes, and 2 computes, using a custom stack name
2. Begin the FFU upgrade procedure
3. Run the FFU prepare command (see the command sketch below)
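
A rough sketch of the commands involved (hypothetical template paths and a trimmed argument list; the key detail is the custom --stack name, and it assumes ffwd-upgrade prepare accepts the same options as 'openstack overcloud deploy'):

# Initial RHOS 10 deployment with a custom stack name (environment files omitted)
openstack overcloud deploy --templates ~/templates \
    --stack qe-Cloud-0 \
    -e ...

# FFU prepare step that then fails on the copy_ssh_key deployment
openstack overcloud ffwd-upgrade prepare --templates ~/templates \
    --stack qe-Cloud-0 \
    -e ...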

Actual results:
FFU prepare fails

Expected results:
no errors

Additional info:

APIException: Environment not found [name=overcloud]

2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor [-] Failed to run action [action_ex_id=1ddd4bfd-1abb-4573-be9c-58a05f24b951, action_cls='<class 'mistral.actions.action_factory.MistralAction'>', attributes='{u'client_method_name': u'environments.get'}', params='{u'name': u'overcloud'}']
 MistralAction.environments.get failed: <class 'mistralclient.api.base.APIException'>: Environment not found [name=overcloud]
2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor Traceback (most recent call last):
2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor   File "/usr/lib/python2.7/site-packages/mistral/engine/default_executor.py", line 90, in run_action
2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor     result = action.run()
2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor   File "/usr/lib/python2.7/site-packages/mistral/actions/openstack/base.py", line 142, in run
2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor     (self.__class__.__name__, self.client_method_name, e_str)
2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor ActionException: MistralAction.environments.get failed: <class 'mistralclient.api.base.APIException'>: Environment not found [name=overcloud]
2019-06-23 07:32:42.582 31158 ERROR mistral.engine.default_executor
2019-06-23 07:32:43.043 31158 INFO mistral.engine.rpc_backend.rpc [-] Received RPC request 'run_action'[rpc_ctx=MistralContext {u'project_name': u'admin', u'user_id': u'5c5c1de2b
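
The traceback shows the workflow action querying Mistral for an environment with the default plan name "overcloud" rather than the actual stack name. The same lookup can be repeated by hand on the undercloud (a sketch, assuming the "workflow env" OSC commands from python-mistralclient are installed):

source ~/stackrc

openstack workflow env show overcloud
# -> Environment not found [name=overcloud]  (same error as in the executor log)

openstack workflow env show qe-Cloud-0
# -> succeeds; the plan environment exists only under the custom stack name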

Comment 3 Jose Luis Franco 2019-07-08 13:13:11 UTC
Hello Ronnie,

Could you reproduce this issue in the FFWD job? When it occurred we tried to debug it but couldn't find useful logs or a lead. If you manage to reproduce it, let us know; otherwise, could we close this bug?

Comment 4 Jiri Stransky 2019-07-15 12:44:07 UTC
We discussed this on the triage call; there has been no reproducer, so let's reopen when we get one.