Bug 1245226
Summary: 'openstack overcloud node delete' or 'openstack overcloud update' fails when the stack is deployed with network isolation

| Field | Value | Field | Value |
|---|---|---|---|
| Product | Red Hat OpenStack | Reporter | Ronelle Landy <rlandy> |
| Component | rhosp-director | Assignee | chris alfonso <calfonso> |
| Status | CLOSED NOTABUG | QA Contact | yeylon <yeylon> |
| Severity | high | Docs Contact | |
| Priority | high | | |
| Version | 7.0 (Kilo) | CC | hbrock, jprovazn, mandreou, mburns, mcornea, ohochman, rhel-osp-director-maint, rlandy, sclewis, srevivo, whayutin, zbitter |
| Target Milestone | z1 | Keywords | Automation, Triaged, ZStream |
| Target Release | Director | | |
| Hardware | Unspecified | | |
| OS | Unspecified | | |
| Whiteboard | | | |
| Fixed In Version | | Doc Type | Bug Fix |
| Doc Text | | Story Points | --- |
| Clone Of | | Environment | |
| Last Closed | 2015-08-20 16:30:00 UTC | Type | Bug |
| Regression | --- | Mount Type | --- |
| Documentation | --- | CRM | |
| Verified Versions | | Category | --- |
| oVirt Team | --- | RHEL 7.3 requirements from Atomic Host | |
| Cloudforms Team | --- | Target Upstream Version | |
| Embargoed | | | |
Description
Ronelle Landy 2015-07-21 14:07:14 UTC
Need to verify that, when attempting to update a stack deployed with network isolation without passing the network .yaml files, the stack won't get stuck in UPDATE_FAILED:

```
[stack@rhos-compute-node-18 ~]$ openstack overcloud update stack overcloud --plan overcloud -i --debug
[stack@rhos-compute-node-18 ~]$ heat stack-list
+--------------------------------------+------------+---------------+----------------------+
| id                                   | stack_name | stack_status  | creation_time        |
+--------------------------------------+------------+---------------+----------------------+
| b9d47a7d-48e7-4902-a51c-7472275e6958 | overcloud  | UPDATE_FAILED | 2015-07-20T23:41:35Z |
+--------------------------------------+------------+---------------+----------------------+
```

Comment (chris alfonso):

Ronelle, should we move this bug to docs then?

Comment:

Poked at this today. I could successfully update to 2 compute nodes from 1, and also delete a node, but with the caveat that rlandy mentions (including the `-e` files for network isolation in the node delete command-line args). To this end, there is a docs review at https://review.gerrithub.io/#/c/242143 that aims to help here.
Testing notes below, if useful:

DEPLOY 1/1/1:

```
openstack overcloud deploy --plan overcloud --debug --log-file overcloud_deployment_net_isolation.log \
  --control-scale 1 --compute-scale 1 --ceph-storage-scale 1 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml
```

SCALE ADD COMPUTE:

```
openstack management plan set -S Compute-1=2 fe58fe8a-15b9-49d6-bfb1-e85085bfae38
openstack overcloud deploy --debug --plan overcloud \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml
```

DELETE A COMPUTE:

```
openstack overcloud node delete --stack overcloud --debug \
  --plan fe58fe8a-15b9-49d6-bfb1-e85085bfae38 27e3caf9-a7b4-4ff6-a5ce-d90b12282bbe
```

^^^ FAILED because I didn't include the network isolation env:

```
| Networks | d5f2bf6b-f751-4f48-8c3c-c4cef78014c3 | OS::TripleO::Network | UPDATE_FAILED | 2015-08-04T12:02:13Z |
```

DELETE STACK AND DEPLOY AGAIN WITH 2 COMPUTE:

```
openstack overcloud deploy --plan overcloud --debug --log-file overcloud_deployment_net_isolation.log \
  --control-scale 1 --compute-scale 2 --ceph-storage-scale 1 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml
```

DELETE A COMPUTE (this time including the network isolation env files):

```
openstack overcloud node delete --stack overcloud --debug \
  --plan fe58fe8a-15b9-49d6-bfb1-e85085bfae38 5e96f8bc-424f-4a32-be7d-5a4d59050d37 \
  -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml
```

~10 mins later: UPDATE_COMPLETE and node gone.

(In reply to chris alfonso from comment #7)
> Ronelle, should we move this bug to docs then?

Yes - this was basically a user error.
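The takeaway from the testing notes is that any `-e` environment files passed at deploy time must be repeated on later stack-modifying commands such as `node delete`. A minimal sketch of that pattern, assuming the same template paths as above; `PLAN_UUID` and `NODE_UUID` are placeholders, and the command is only assembled and printed here, not executed:

```shell
# Environment files used at deploy time (from the bug report above); the same
# -e arguments must be repeated on scale-down, or the stack update fails.
ENV_ARGS="-e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/net-single-nic-with-vlans.yaml"

# Placeholders: substitute the real plan and node UUIDs for your deployment.
PLAN_UUID="<plan-uuid>"
NODE_UUID="<node-uuid>"

# Assemble the scale-down command, carrying the deploy-time -e args along.
CMD="openstack overcloud node delete --stack overcloud --plan $PLAN_UUID $NODE_UUID $ENV_ARGS"
echo "$CMD"
```

Running the delete without `$ENV_ARGS` is exactly the failure mode reported here: Heat re-renders the stack without the network isolation resources and the update ends in UPDATE_FAILED.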