Bug 1292952 - rhel-osp-director: failed to update 7.0->7.2: UPDATE_FAILED
Summary: rhel-osp-director: failed to update 7.0->7.2: UPDATE_FAILED
Keywords:
Status: CLOSED INSUFFICIENT_DATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rhosp-director
Version: 7.0 (Kilo)
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 7.0 (Kilo)
Assignee: chris alfonso
QA Contact: yeylon@redhat.com
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-12-18 20:15 UTC by Dan Yasny
Modified: 2016-04-18 07:11 UTC
CC List: 7 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2016-01-27 15:19:36 UTC
Target Upstream Version:
Embargoed:



Description Dan Yasny 2015-12-18 20:15:30 UTC
rhel-osp-director: failed to update 7.0->7.2 

stack_status_reason:  resources.ControllerNodesPostDeployment: Error: resources.ControllerPostPuppet.resources.ControllerPostPuppetRestartDeployment.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1  

Virtual UC env:
openstack-tripleo-puppet-elements-0.0.1-5.el7ost.noarch
openstack-tripleo-common-0.0.1.dev6-5.git49b57eb.el7ost.noarch
openstack-tripleo-heat-templates-0.8.6-94.el7ost.noarch
openstack-tripleo-image-elements-0.9.6-10.el7ost.noarch
openstack-tripleo-0.0.7-0.1.1664e566.el7ost.noarch
instack-0.0.7-2.el7ost.noarch
instack-undercloud-2.1.2-36.el7ost.noarch


Original Deployment command: openstack overcloud deploy --templates --control-scale 3 --compute-scale 1 --ceph-storage-scale 1   --ntp-server 10.5.26.10 --timeout 90 -e network-environment.yaml

Network isolation, HA, v7.0
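
For reference, a minimal sketch of confirming the original 7.0 deployment finished before attempting the update (run from the undercloud; assumes the stack is named overcloud and the stackrc credentials file is in the stack user's home directory):

  source ~/stackrc
  heat stack-list    # overcloud should show CREATE_COMPLETE
  nova list          # all overcloud nodes should be ACTIVE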

Update command:
yes "" | openstack overcloud update stack overcloud -i --templates   -e /usr/share/openstack-tripleo-heat-templates/overcloud-resource-registry-puppet.yaml    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml    -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml    -e /usr/share/openstack-tripleo-heat-temp
lates/environments/updates/update-from-keystone-admin-internal-api.yaml  -e /usr/share/openstack-tripleo-heat-templates/environments/storage-environment.yaml  -e network-environment.yaml
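
A sketch of one way to watch the update from a second shell on the undercloud while the interactive command above runs (assumes a Kilo-era python-heatclient whose resource-list supports the -n/--nested-depth option):

  source ~/stackrc
  heat stack-list                                         # overcloud should move to UPDATE_IN_PROGRESS
  heat resource-list overcloud -n 5 | grep -vi complete   # anything left here is still in progress or failed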

Steps:
1. deploy as per command above
2. update UC to current puddle
#rhos-release -L
Installed repositories (rhel-7.2):
  7-director
  7
  rhel-7.2
3. reboot the UC, verify all services came up after reboot (see the sketch after this list)
4. modify environment files as per update KB
5. Run the update command
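
For step 3, a minimal sketch of verifying the undercloud services after the reboot (openstack-service comes from the openstack-utils package and may not be present on every undercloud; the systemctl line works regardless and should list no failed units):

  sudo openstack-service status
  sudo systemctl list-units --state=failed 'openstack-*' 'neutron-*'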

Result:
heat resource-show overcloud ControllerNodesPostDeployment
...
| resource_status        | UPDATE_FAILED  
| resource_status_reason | resources.ControllerNodesPostDeployment: Error: resources.ControllerPostPuppet.resources.ControllerPostPuppetRestartDeployment.resources[0]: Deployment to server failed: deploy_status_code: Deployment exited with non-zero status code: 1
| resource_type          | OS::TripleO::ControllerPostDeployment                                                                                                                                                                                                        



This failure is 100% repeatable; environment 09 is available for debugging.
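
For completeness, a sketch of pulling the deployment output directly from the failed controller, which is usually where the non-zero status code is explained (heat-admin is the default login user created by the director; the controller IP comes from nova list on the undercloud; the unit name assumes the deployment was run through os-collect-config as usual). Since ControllerPostPuppetRestartDeployment restarts pacemaker-managed services, cluster state on the controller is also worth checking:

  ssh heat-admin@<controller-ip>
  sudo journalctl -u os-collect-config | tail -n 200
  sudo pcs status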

Comment 2 James Slagle 2016-01-13 20:58:13 UTC
can you run heat deployment-show on the failed deployment?

is the environment still around for further debugging?
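
For reference, a sketch of what that would look like on the undercloud (the deployment id below is a placeholder; assumes the failed software deployment is still listed with a FAILED status):

  source ~/stackrc
  heat deployment-list | grep -i failed    # note the id of the failed deployment
  heat deployment-show <deployment-id>     # deploy_stdout / deploy_stderr usually carry the real error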

Comment 3 Dan Yasny 2016-01-13 21:33:10 UTC
(In reply to James Slagle from comment #2)
> can you run heat deployment-show on the failed deployment?
> 
> is the environment still around for further debugging?

Unfortunately, no. I was hoping for a faster initial response and kept the environment up in this state for three weeks, but with all the holidays, time ran out. The setup has been redeployed several times since. I will try to reproduce and update the BZ (leaving the needinfo in place).

Comment 4 Mike Burns 2016-01-27 15:19:36 UTC
If you reproduce with 7.3, please reopen.

Comment 5 Dan Yasny 2016-01-27 15:31:26 UTC
(In reply to Mike Burns from comment #4)
> If you reproduce with 7.3, please reopen.

Thanks Mike, I'll probably open a new one instead if this happens again.

