Bug 1127207 - Adding second compute host, deployment indicators light up for compute and neutron
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: rubygem-staypuft
Version: 5.0 (RHEL 6)
Hardware: Unspecified
OS: Linux
Target Milestone: ---
Sub Component: Installer
Assignee: Jason E. Rist
QA Contact: Omri Hochman
URL: https://trello.com/c/8RDvUv05/375-ui-...
Depends On:
Reported: 2014-08-06 11:44 UTC by Tzach Shefi
Modified: 2016-09-29 13:32 UTC (History)
5 users

Fixed In Version: ruby193-rubygem-staypuft-0.5.3-1.el7ost
Doc Type: Bug Fix
Doc Text:
Clone Of:
Last Closed: 2016-09-29 13:32:05 UTC

Attachments (Terms of Use)
Screen shot (129.75 KB, image/png)
2014-08-06 11:44 UTC, Tzach Shefi

Description Tzach Shefi 2014-08-06 11:44:06 UTC
Created attachment 924459 [details]
Screen shot

Description of problem: Deployed Neutron non-HA with one compute host; the deployment completed successfully. I then wanted to add another compute host: I booted a compute2 VM, assigned it to the compute host group, and clicked deploy.

Returning to the deployment page, I expected to see only one progress circle indicator "turning" for the compute group, yet all three indicator circles were turning (controller/neutron/compute).

Version-Release number of selected component (if applicable):

How reproducible:
Probably every time

Steps to Reproduce:
1. Deploy one compute host, neutron and controller.
2. Add another compute 

Actual results:
All three indicators show activity; see the attached screenshot.

Expected results:
Only the compute group should indicate change/progress.

Additional info:

Comment 1 Mike Burns 2014-08-06 12:08:08 UTC
Does it do the right thing in deploying the new compute host? Or does it run all the hosts again?

Comment 4 Tzach Shefi 2014-09-02 08:17:32 UTC
Sorry for the delay; I missed the needinfo email.
From what I recall, it only changed the new compute host and didn't touch the other hosts.
I'll try this again on my next Foreman deployment to be sure.

Comment 5 Jiri Tomasek 2014-11-28 07:54:07 UTC
Pull request here: https://github.com/theforeman/staypuft/pull/385

Comment 7 Alexander Chuzhoy 2014-12-09 23:00:38 UTC
Verified: FailedQA


Added 1 compute to an already deployed setup and clicked on deploy.
All hosts show as being deployed (there's a clock indicator next to every host, both already-deployed and being-deployed), instead of just the added compute.

Comment 9 Mike Burns 2014-12-17 19:28:25 UTC
After multiple rounds, and a lot of investigation, we've determined that the bug is a lot more involved than we originally thought and fixing it is significantly more invasive and difficult as well.

Basically, what it comes down to is that the logic that determines whether a host is deployed was written for a single deployment run, not multiple runs. There is no 100% accurate way to tell that a host was completely deployed in a previous run.

We found an approach that would work for probably 90% of the use cases, but it has a critical limitation. Since we switched to PuppetSSH for deployments, hosts that have been deployed carry a specific parameter that sets the puppet runmode to "service", while hosts that haven't been deployed have a different runmode ("none"). The problem comes when a host that was already deployed in Foreman needs to be redeployed (perhaps a faulty disk was replaced). That host is removed from the deployment, then re-added, but its runmode value does not change. It would therefore immediately appear in the deployed column, which makes it *impossible* to deploy through RHEL-OSP Installer.
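The runmode heuristic described above can be sketched as follows. This is a hypothetical illustration, not the actual Staypuft/Foreman code; the `Host` struct and `deployed?` helper are stand-ins for whatever models Staypuft actually uses:

```ruby
# Illustrative stand-in for a Foreman host record; only the puppet
# runmode parameter matters for this heuristic.
Host = Struct.new(:name, :puppet_run_mode)

# Heuristic: a host whose puppet runmode is "service" is assumed to have
# completed a previous deployment run; "none" means never deployed.
def deployed?(host)
  host.puppet_run_mode == "service"
end

fresh   = Host.new("compute2", "none")     # newly added, never deployed
readded = Host.new("compute1", "service")  # removed and re-added after a disk swap

deployed?(fresh)    # => false (correct)
deployed?(readded)  # => true  (wrong: it actually needs a full redeploy)
```

The second case is exactly the corner case described above: the re-added host still carries runmode "service" from its earlier run, so the heuristic misclassifies it as already deployed.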

The options we have for resolving this are:

* do the above, but add a step so that the runmode is reset to "none" when adding a host to the deployment
* completely rewrite the deployed/not-deployed logic to make it smarter
* add some other variable or parameter somewhere (host, deployment, etc.) to track which hosts are deployed and handle the corner case above

Given that all of these changes are more invasive than a simple UI update, I'd like to defer this.
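The first option could look something like the sketch below. Again, the names (`Host`, `Deployment`, `add_host`) are hypothetical illustrations of the idea, not the real Staypuft API:

```ruby
Host = Struct.new(:name, :puppet_run_mode)
Deployment = Struct.new(:hosts)

# Option 1: when a host is (re-)added to a deployment, force its runmode
# back to "none" so the runmode-based "deployed" heuristic stays accurate.
def add_host(deployment, host)
  # A re-added host may still carry runmode "service" from an earlier run.
  host.puppet_run_mode = "none"
  deployment.hosts << host
end

deployment = Deployment.new([])
readded = Host.new("compute1", "service")  # previously deployed, then removed
add_host(deployment, readded)
readded.puppet_run_mode  # => "none", so it will be deployed again
```

The appeal of this option is that it keeps the simple runmode check while closing the re-add corner case; the cost is an extra mutation step on every add, which is part of why it was judged more invasive than a pure UI fix.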

Comment 10 Jaromir Coufal 2016-09-29 13:32:05 UTC
Closing out bugs for RHEL OSP Installer, since its support cycle has already ended [0]. If a bug was closed by mistake, feel free to reopen it.

For new deployments, please, use RHOSP director (starting with version 7).

-- Jaromir Coufal
-- Sr. Product Manager
-- Red Hat OpenStack Platform

[0] https://access.redhat.com/support/policy/updates/openstack/platform
