Scaling up of nodes proceeds even if the requested number of new nodes is higher than the number of free nodes.

In my example:

$ ironic node-list
+--------------------------------------+------+----------...
| UUID | Name | Instance UUID | Power State ...
+--------------------------------------+------+----------...
| ...... | None | ...... | power on | active ...
| ...... | None | ...... | power on | active ...
| ...... | None | ...... | power on | active ...
| ...... | None | ...... | power on | active ...
| ...... | None | None | power off | available ...
| ...... | None | ...... | power on | active ...
| ...... | None | None | power off | available ...
| ...... | None | None | power off | available ...
+--------------------------------------+------+----------...

6 nodes in total, out of which 3 are Ceph nodes.

I ran:

$ openstack overcloud scale stack -r Ceph-Storage-1 -n 9 overcloud overcloud
Scaling out role Ceph-Storage-1 in stack overcloud to 9 nodes

The scale-up process started, but it should not have begun in this case. Trying to lower the count afterwards with:

$ openstack overcloud scale stack -r Ceph-Storage-1 -n 4 overcloud overcloud

gives me:

Scaling out role Ceph-Storage-1 in stack overcloud to 4 nodes
ERROR: openstack Role Ceph-Storage-1 has already 9 nodes, can't set lower value
Moving this to an oscplugin bug since the CLI is the only place that speaks across both Tuskar and Ironic.
Assigning this to Jan as he's the one who implemented stack scaling.
The scale command is being replaced by the newly added CLI commands, which are now used for any stack update:

openstack management plan set $PLAN_UUID -S Compute-1=2
openstack overcloud deploy --plan-uuid $PLAN_UUID

I can implement a check in the "openstack overcloud deploy" command that verifies the number of available nodes, but TBH I'm not sure this is high priority - at the moment we don't implement any other validation of input either, e.g. flavors, images, ...
Latest version:

$ openstack overcloud deploy --templates --control-scale 1 --compute-scale 2
Deployment failed: Not enough nodes - available: 0, requested: 3
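For the record, the kind of check discussed above can be sketched in plain Python roughly as follows. This is an illustrative sketch only, not the actual python-tripleoclient code: the node dicts and the function names (count_available, validate_scale) are hypothetical, modeled on the fields Ironic reports in `ironic node-list`.

```python
# Sketch of a pre-deploy node-count validation (hypothetical helper names;
# not the real python-tripleoclient implementation).

def count_available(nodes):
    """Count nodes that Ironic reports as 'available' and unallocated."""
    return sum(
        1 for n in nodes
        if n.get("provision_state") == "available"
        and n.get("instance_uuid") is None
    )

def validate_scale(nodes, current, requested):
    """Fail early if scaling from `current` to `requested` nodes would
    need more free nodes than Ironic has available."""
    needed = requested - current
    available = count_available(nodes)
    if needed > available:
        raise RuntimeError(
            "Not enough nodes - available: %d, requested: %d"
            % (available, needed)
        )

# Example mirroring the node list in the bug description:
# 5 deployed (active) nodes and 3 free (available) ones.
nodes = (
    [{"provision_state": "active", "instance_uuid": "some-uuid"}] * 5
    + [{"provision_state": "available", "instance_uuid": None}] * 3
)
validate_scale(nodes, current=3, requested=6)   # OK: needs exactly 3 free nodes
# validate_scale(nodes, current=3, requested=9) would raise: needs 6, only 3 free
```

With a check like this run before the stack update is issued, the scale-out to 9 Ceph nodes in the original report would have been rejected up front instead of starting and leaving the role count inflated.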
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2015:1862