rhel-osp-director: Update overcloud 7.2->7.3: Timed out waiting for a reply to message ID 5c0c681a9efc40b0bde965ae625bf2aa

Environment:
openstack-tripleo-heat-templates-0.8.6-112.el7ost.noarch
instack-undercloud-2.1.2-37.el7ost.noarch

Steps to reproduce:
1. Deploy overcloud 7.2: HA + 1 compute + 1 swift + 1 cinder node.
2. Attempt to update the overcloud to 7.3.

Result:

yes "" | openstack overcloud update stack overcloud -i --templates \
  -e /usr/share/openstack-tripleo-heat-templates/overcloud-resource-registry-puppet.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/updates/update-from-vip.yaml \
  -e network-environment.yaml
starting package update on stack overcloud
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
WAITING
not_started: [u'overcloud-compute-0', u'overcloud-blockstorage-0', u'overcloud-controller-0', u'overcloud-controller-1', u'overcloud-controller-2']
on_breakpoint: [u'overcloud-objectstorage-0']
WARNING: tripleo_common.stack_update removing breakpoint on overcloud-objectstorage-0
Breakpoint reached, continue? Regexp or Enter=proceed, no=cancel update, C-c=quit interactive mode:
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
IN_PROGRESS
ERROR: openstack ERROR: Timed out waiting for a reply to message ID 5c0c681a9efc40b0bde965ae625bf2aa

I edited /etc/heat/heat.conf on the undercloud:
rpc_response_timeout = 300
num_engine_workers = 4
and restarted openstack-heat-engine.service. The yum update did not complete on the nodes.
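For reference, the tuning described above corresponds to a heat.conf fragment like the following (values are the ones tried in this report, not recommended defaults; both options live in the DEFAULT section):

[DEFAULT]
# Wait longer for oslo.messaging RPC replies (default is 60 seconds)
rpc_response_timeout = 300
# Run multiple heat-engine worker processes
num_engine_workers = 4

openstack-heat-engine.service must be restarted for either change to take effect.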
Could you please look into which resource failed and, if they are still available, attach the heat-engine logs?
I'm pretty sure this is simply because the EndpointMap stack creates 30(!) nested stacks, all at exactly the same time, as an extremely heavyweight way of defining a custom function. I submitted a change upstream to generate the map statically instead, which should speed that part up by a factor of 31.
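To illustrate the idea behind the static approach (this is a sketch, not the actual tripleo-heat-templates code; the service names, ports, and URL shapes below are invented for the example): instead of instantiating one nested Heat stack per endpoint at deploy time, the whole endpoint map can be expanded once, at template-generation time, into a flat mapping.

```python
def build_endpoint_map(services):
    """Expand a {service: port} dict into a flat endpoint map.

    Produces one entry per service per interface, so there is
    nothing left to compute (and no nested stacks to create)
    when the stack is actually deployed.
    """
    endpoint_map = {}
    for name, port in sorted(services.items()):
        for interface in ("Internal", "Public", "Admin"):
            # e.g. "KeystoneInternal" -> "http://%(host)s:5000/"
            endpoint_map["%s%s" % (name, interface)] = (
                "http://%%(host)s:%d/" % port)
    return endpoint_map

if __name__ == "__main__":
    # Illustrative sample data only
    sample = {"Keystone": 5000, "Glance": 9292}
    for key, url in sorted(build_endpoint_map(sample).items()):
        print(key, url)
```

With 30 endpoints, the dynamic scheme creates the EndpointMap stack plus 30 nested stacks (31 stack creations); the pre-expanded map needs none of them, hence the factor-of-31 estimate above.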
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2016-0264.html