Bug 1720080 - [FFU] FFU undercloud failed 'Stack' object has no attribute '__getitem__'
Summary: [FFU] FFU undercloud failed 'Stack' object has no attribute '__getitem__'
Keywords:
Status: CLOSED DUPLICATE of bug 1651136
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-tripleoclient
Version: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Assignee: RHOS Maint
QA Contact: Ronnie Rasouli
URL:
Whiteboard:
Depends On:
Blocks: 1544752 1688098
 
Reported: 2019-06-13 06:42 UTC by Ronnie Rasouli
Modified: 2019-06-19 10:08 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-19 10:08:18 UTC
Target Upstream Version:
Embargoed:


Attachments
Mistral executor log (1.67 MB, text/plain)
2019-06-13 06:47 UTC, Ronnie Rasouli
Upgrade log installing ceph-ansible and updating tripleo-client (449.78 KB, text/plain)
2019-06-13 06:48 UTC, Ronnie Rasouli
Hope I gathered all the needed logs (5.25 MB, application/gzip)
2019-06-14 15:12 UTC, Tzach Shefi

Description Ronnie Rasouli 2019-06-13 06:42:01 UTC
Description of problem:
The FFU undercloud upgrade failed with: 'Stack' object has no attribute '__getitem__'.

Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 2442, in install
    _run_validation_groups(["post-upgrade"], mistral_url)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 1953, in _run_validation_groups
    stack_names_list = [stack['stack_name'] for stack in heat.stacks.list()]
TypeError: 'Stack' object has no attribute '__getitem__'
2019-06-12 21:08:21,034 ERROR: 
#############################################################################
Undercloud upgrade failed.

Reason: 'Stack' object has no attribute '__getitem__'

See the previous output for details about what went wrong.  The full install
log can be found at /home/stack/.instack/install-undercloud.log.

#############################################################################
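
For context, the TypeError itself is easy to reproduce outside of instack-undercloud: heat.stacks.list() yields heatclient Stack resource objects, which expose their fields as attributes rather than dict keys, so dict-style subscripting fails on this Python 2 code path. A minimal sketch (the client/session setup below is illustrative, with placeholder credentials; it is not taken from the undercloud code):

# Illustrative sketch only -- reproduces the attribute-vs-subscript issue.
# Auth values are placeholders, not taken from this environment.
from keystoneauth1 import session
from keystoneauth1.identity import v3
from heatclient import client as heat_client

auth = v3.Password(auth_url='https://192.168.24.2:13000/v3',
                   username='admin', password='<redacted>',
                   project_name='admin',
                   user_domain_name='Default',
                   project_domain_name='Default')
heat = heat_client.Client('1', session=session.Session(auth=auth))

for stack in heat.stacks.list():
    print(stack.stack_name)    # attribute access works on Stack resources
    # stack['stack_name']      # raises the TypeError seen in the traceback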

Looking at the Mistral API log reveals additional errors; see Additional info below.

Version-Release number of selected component (if applicable):

core_puddle: 2019-06-10.3
How reproducible:
most likely

Steps to Reproduce:
1. Deploy OSP 10
2. Perform the FFU undercloud upgrade

Actual results:
The failure shown above.

Expected results:
no errors

Additional info:

Looking at the Mistral executor log reveals what is probably a problem with the Zaqar websocket:

2019-06-12 18:36:12.754 30353 ERROR mistral.engine.default_executor [-] Failed to run action [action_ex_id=13aee27e-a409-4399-a2ab-d811be9b3bb9, action_cls='<class 'mistral.actions.action_factory.SwiftAction'>', attributes='{u'client_method_name': u'head_container'}', params='{u'headers': None, u'container': u'overcloud'}']
 SwiftAction.head_container failed: <class 'swiftclient.exceptions.ClientException'>: Container HEAD failed
2019-06-12 18:36:12.754 30353 ERROR mistral.engine.default_executor Traceback (most recent call last):
2019-06-12 18:36:12.754 30353 ERROR mistral.engine.default_executor   File "/usr/lib/python2.7/site-packages/mistral/engine/default_executor.py", line 90, in run_action
2019-06-12 18:36:12.754 30353 ERROR mistral.engine.default_executor     result = action.run()
2019-06-12 18:36:12.754 30353 ERROR mistral.engine.default_executor   File "/usr/lib/python2.7/site-packages/mistral/actions/openstack/base.py", line 142, in run
2019-06-12 18:36:12.754 30353 ERROR mistral.engine.default_executor     (self.__class__.__name__, self.client_method_name, e_str)
2019-06-12 18:36:12.754 30353 ERROR mistral.engine.default_executor ActionException: SwiftAction.head_container failed: <class 'swiftclient.exceptions.ClientException'>: Container HEAD failed
2019-06-12 18:36:12.754 30353 ERROR mistral.engine.default_executor 
2019-06-12 18:36:13.893 30353 INFO mistral.engine.rpc_backend.rpc [-] Received RPC request 'run_action'[rpc_ctx=MistralContext {u'project_name': u'admin', u'user_id': u'ad68fccda20e45adbe996be449b64cdd', u'roles': [u'admin'], u'auth_uri': u'https://192.168.24.2:13000/v3', u'auth_cacert': None, u'auth_token': u'efe07a4343894fc182de8dc0f31faad0', u'expires_at': u'2019-06-13T02:36:09.000000Z', u'is_trust_scoped': False, u'service_catalog': u'[{"endpoints": [{"adminURL": "http://192.168.24.1:9292", "region": "regionOne", "internalURL": "http://192.168.24.1:9292", "publicURL": "https://192.168.24.2:13292"}], "type": "image", "name": "glance"}, {"endpoints": [{"adminURL": "http://192.168.24.1:8004/v1/bd7a8726115c4e97b091bdb0e3cb4511", "region": "regionOne", "internalURL": "http://192.168.24.1:8004/v1/bd7a8726115c4e97b091bdb0e3cb4511", "publicURL": "https://192.168.24.2:13004/v1/bd7a8726115c4e97b091bdb0e3cb4511"}], "type": "orchestration", "name": "heat"}, {"endpoints": [{"adminURL": "http://192.168.24.1:35357/v2.0", "region": "regionOne", "internalURL": "http://192.168.24.1:5000/v2.0", "publicURL": "https://192.168.24.2:13000/v2.0"}], "type": "identity", "name": "keystone"}, {"endpoints": [{"adminURL": "http://192.168.24.1:8080", "region": "regionOne", "internalURL": "http://192.168.24.1:8080/v1/AUTH_bd7a8726115c4e97b091bdb0e3cb4511", "publicURL": "https://192.168.24.2:13808/v1/AUTH_bd7a8726115c4e97b091bdb0e3cb4511"}], "type": "object-store", "name": "swift"}, {"endpoints": [{"adminURL": "http://192.168.24.1:8989/v2", "region": "regionOne", "internalURL": "http://192.168.24.1:8989/v2", "publicURL": "https://192.168.24.2:13989/v2"}], "type": "workflowv2", "name": "mistral"}, {"endpoints": [{"adminURL": "http://192.168.24.1:8777", "region": "regionOne", "internalURL": "http://192.168.24.1:8777", "publicURL": "https://192.168.24.2:13777"}], "type": "metering", "name": "ceilometer"}, {"endpoints": [{"adminURL": "http://192.168.24.1:8888", "region": "regionOne", "internalURL": "http://192.168.24.1:8888", "publicURL": "https://192.168.24.2:13888"}], "type": "messaging", "name": "zaqar"}, {"endpoints": [{"adminURL": "http://192.168.24.1:6385", "region": "regionOne", "internalURL": "http://192.168.24.1:6385", "publicURL": "https://192.168.24.2:13385"}], "type": "baremetal", "name": "ironic"}, {"endpoints": [{"adminURL": "http://192.168.24.1:8774/v2.1", "region": "regionOne", "internalURL": "http://192.168.24.1:8774/v2.1", "publicURL": "https://192.168.24.2:13774/v2.1"}], "type": "compute", "name": "nova"}, {"endpoints": [{"adminURL": "ws://192.168.24.1:9000", "region": "regionOne", "internalURL": "ws://192.168.24.1:9000", "publicURL": "wss://192.168.24.2:9000"}], "type": "messaging-websocket", "name": "zaqar-websocket"}, {"endpoints": [{"adminURL": "http://192.168.24.1:9696", "region": "regionOne", "internalURL": "http://192.168.24.1:9696", "publicURL": "https://192.168.24.2:13696"}], "type": "network", "name": "neutron"}, {"endpoints": [{"adminURL": "http://192.168.24.1:5050", "region": "regionOne", "internalURL": "http://192.168.24.1:5050", "publicURL": "https://192.168.24.2:13050"}], "type": "baremetal-introspection", "name": "ironic-inspector"}, {"endpoints": [{"adminURL": "http://192.168.24.1:8042", "region": "regionOne", "internalURL": "http://192.168.24.1:8042", "publicURL": "https://192.168.24.2:13042"}], "type": "alarming", "name": "aodh"}]', u'project_id': u'bd7a8726115c4e97b091bdb0e3cb4511', u'user_name': u'admin'}, action_ex_id=9c2a76d4-bef5-489a-accf-b1128748ff2f, 
action_class=mistral.actions.openstack.actions.MistralAction, attributes={u'client_method_name': u'environments.get'}, params={u'name': u'overcloud'}]
2019-06-12 18:36:14.416 30353 WARNING mistral.actions.openstack.base [-] Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/mistral/actions/openstack/base.py", line 127, in run
    result = method(**self._kwargs_for_run)
  File "/usr/lib/python2.7/site-packages/mistralclient/api/v2/environments.py", line 80, in get
    return self._get('/environments/%s' % name)
  File "/usr/lib/python2.7/site-packages/mistralclient/api/base.py", line 122, in _get
    self._raise_api_exception(resp)
  File "/usr/lib/python2.7/site-packages/mistralclient/api/base.py", line 140, in _raise_api_exception
    error_message=error_data)
APIException: Environment not found [name=overcloud]
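
Both the SwiftAction.head_container failure and the "Environment not found [name=overcloud]" APIException look consistent with no overcloud plan existing yet, so they may just be the validation workflow probing for a plan rather than the root cause. If it helps with triage, the same container HEAD can be repeated manually with python-swiftclient (credentials below are placeholders):

# Sketch: repeat the Swift container HEAD that the Mistral SwiftAction runs.
# A missing 'overcloud' container raises ClientException ("Container HEAD
# failed"), matching the executor log above. Credentials are placeholders.
from swiftclient import client as swift_client
from swiftclient.exceptions import ClientException

conn = swift_client.Connection(
    authurl='https://192.168.24.2:13000/v3',
    user='admin', key='<redacted>',
    os_options={'project_name': 'admin',
                'user_domain_name': 'Default',
                'project_domain_name': 'Default'},
    auth_version='3')

try:
    headers = conn.head_container('overcloud')
    print('overcloud container exists: %s objects'
          % headers.get('x-container-object-count'))
except ClientException as exc:
    print('Container HEAD failed (no overcloud plan container?): %s' % exc)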

Comment 1 Ronnie Rasouli 2019-06-13 06:47:08 UTC
Created attachment 1580118 [details]
Mistral executor log

Comment 2 Ronnie Rasouli 2019-06-13 06:48:42 UTC
Created attachment 1580119 [details]
Upgrade log installing ceph-ansible and updating tripleo-client

Comment 3 Tzach Shefi 2019-06-14 15:10:49 UTC
I just hit the same problem on a simple upgrade from 12 to 13 on an LVM-backed system.
My OSP 12 puddle date was 2019-05-17.1.



It looks like the exact same error:

2019-06-14 09:59:06,981 DEBUG: An exception occurred
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 2442, in install
    _run_validation_groups(["post-upgrade"], mistral_url)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 1953, in _run_validation_groups
    stack_names_list = [stack['stack_name'] for stack in heat.stacks.list()]
TypeError: 'Stack' object has no attribute '__getitem__'
2019-06-14 09:59:06,982 ERROR: 
#############################################################################
Undercloud upgrade failed.

Reason: 'Stack' object has no attribute '__getitem__'

See the previous output for details about what went wrong.  The full install
log can be found at /home/stack/.instack/install-undercloud.log.

#############################################################################

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 2442, in install
    _run_validation_groups(["post-upgrade"], mistral_url)
  File "/usr/lib/python2.7/site-packages/instack_undercloud/undercloud.py", line 1953, in _run_validation_groups
    stack_names_list = [stack['stack_name'] for stack in heat.stacks.list()]
TypeError: 'Stack' object has no attribute '__getitem__'
Command '['instack-upgrade-undercloud']' returned non-zero exit status 1



While grepping around I also noticed some other problems; I am not sure if they are related to the same root cause.


18a012): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1843, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2082, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance c2b23a5c-ac98-4944-aa2b-a31b59a66670 was re-scheduled: Failed to provision instance c2b23a5c-ac98-4944-aa2b-a31b59a66670: Failed to prepare to deploy. Error: IPMI call failed: chassis bootdev pxe.\n']
/var/log/nova/nova-conductor.log:850:Traceback (most recent call last):
/var/log/nova/nova-conductor.log:863:Traceback (most recent call last):
/var/log/nova/nova-conductor.log:876:Traceback (most recent call last):
/var/log/nova/nova-conductor.log:908:2019-06-10 06:46:07.342 23240 ERROR nova.scheduler.utils [req-bfd9e5a9-8738-4a66-9bcc-8ec71f59445a ec59609fe8ed430e8d3bd20ae75ac7eb 401a6df3b4d543d084cd8723306fdd51 - default default] [instance: 5c514955-f5be-4893-8e03-04df36dab786] Error from last host: undercloud-0.redhat.local (node 00999ca8-d808-498b-bc21-5544cdc484d2): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1843, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2082, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 5c514955-f5be-4893-8e03-04df36dab786 was re-scheduled: Failed to provision instance 5c514955-f5be-4893-8e03-04df36dab786: Failed to deploy. Error: IPMI call failed: power on.\n']
/var/log/nova/nova-conductor.log:910:Traceback (most recent call last):
/var/log/nova/nova-conductor.log:923:Traceback (most recent call last):
/var/log/nova/nova-conductor.log:936:Traceback (most recent call last):
/var/log/nova/nova-conductor.log:953:2019-06-10 06:46:08.888 23240 ERROR nova.scheduler.utils [req-e5c14ada-2bf3-4a86-8af0-17e2e8bd5f56 ec59609fe8ed430e8d3bd20ae75ac7eb 401a6df3b4d543d084cd8723306fdd51 - default default] [instance: 220cd3ec-48ec-4ce1-8daf-31b40a088877] Error from last host: undercloud-0.redhat.local (node 524fdea8-35da-4589-861d-4f5448d37539): [u'Traceback (most recent call last):\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1843, in _do_build_and_run_instance\n    filter_properties)\n', u'  File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2082, in _build_and_run_instance\n    instance_uuid=instance.uuid, reason=six.text_type(e))\n', u'RescheduledException: Build of instance 220cd3ec-48ec-4ce1-8daf-31b40a088877 was re-scheduled: Failed to provision instance 220cd3ec-48ec-4ce1-8daf-31b40a088877: Failed to deploy. Error: IPMI call failed: power on.\n']
/var/log/nova/nova-conductor.log:955:Traceback (most recent call last):
/var/log/nova/nova-conductor.log:968:Traceback (most recent call last):
/var/log/nova/nova-conductor.log:981:Traceback (most recent call last):

Comment 4 Tzach Shefi 2019-06-14 15:12:04 UTC
Created attachment 1580741 [details]
Hope I gathered all the needed logs

Comment 6 Alex Schultz 2019-06-17 16:35:24 UTC
Caused by the fix for bug 1651136. That bug has been updated with a fix for this issue.
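
For anyone hitting this before the rebuild lands: the failing comprehension in _run_validation_groups only needs attribute access to work with heatclient Stack resources. Something along these lines (hypothetical sketch, not necessarily the actual patch attached to bug 1651136):

# Hypothetical shape of the fix around instack_undercloud/undercloud.py line 1953;
# the real change is tracked in bug 1651136.
stack_names_list = [stack.stack_name for stack in heat.stacks.list()]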

Comment 7 Carlos Camacho 2019-06-19 10:08:18 UTC

*** This bug has been marked as a duplicate of bug 1651136 ***

