Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1663545

Summary: [rhos-prio] Issues recovering after scaling out to more nodes than available
Product: Red Hat OpenStack
Reporter: rosingh
Component: python-tripleoclient
Assignee: Rabi Mishra <ramishra>
Status: CLOSED ERRATA
QA Contact: Victor Voronkov <vvoronko>
Severity: high
Docs Contact:
Priority: urgent
Version: 10.0 (Newton)
CC: agurenko, dvd, emacchi, hbrock, jslagle, mburns, mcornea, ramishra, rosingh, sandyada, sbaker, shardy, ssmolyak
Target Milestone: ---
Keywords: Triaged, ZStream
Target Release: 10.0 (Newton)
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: python-tripleoclient-5.4.6-2.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2019-04-30 16:58:51 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description rosingh 2019-01-04 19:48:34 UTC
Description of problem:
While running a stack deploy on a site with 96 BL460C blades ("Compute" blades), the customer had the number of computeV2 nodes set to 96 when, in reality, they had 0 of those nodes available at the time, and the stack update failed.
However, after they set the number of computeV2 nodes to the correct value of 0, the templates still failed to deploy with:
Not enough nodes - available: 99, requested: 195
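
For reference, a minimal sketch of the intended recovery flow, assuming the role counts are carried in a scale environment file (the file name node-counts.yaml and the parameters ComputeCount/ComputeV2Count follow the usual <Role>Count convention but are illustrative; the customer's actual templates and deploy script may differ):

  # node-counts.yaml -- set each role count to what is actually available
  parameter_defaults:
    ComputeCount: 96
    ComputeV2Count: 0

  # rerun the same deploy script/command with the corrected counts
  openstack overcloud deploy --templates \
    -e node-counts.yaml \
    -e <other environment files from the original deployment>

Before the fix, rerunning the deployment with the corrected (lower) count could still fail with the "Not enough nodes" error above instead of recovering from the earlier failed scale-out.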

Comment 15 Victor Voronkov 2019-04-02 16:04:32 UTC
Verified on compose 2019-03-26.1: with a wrong ComputeCount the update fails, and after fixing it to a valid available compute host count and rerunning the deploy script, the overcloud stack is updated successfully.
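
A small sketch of how the valid count can be determined for the rerun (the commands are standard OpenStack client calls; the exact deploy script is environment-specific):

  # count the bare metal nodes Ironic reports as available for scheduling
  openstack baremetal node list --provision-state available -f value -c UUID | wc -l

  # set ComputeCount (and any other <Role>Count parameters) to at most that
  # number, then rerun the deploy script; the overcloud stack update should
  # then complete successfully, as verified above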

Comment 17 errata-xmlrpc 2019-04-30 16:58:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0921