Created attachment 924459 [details]
Screen shot

Description of problem:
Deployed Neutron non-HA with one compute; deployment completed successfully. Then wanted to add another compute host: booted the compute2 VM, assigned it to the compute group, and clicked deploy. Returning to the deployment page, I expected to see only one progress circle indicator "turning" for the compute group, yet all three indicator circles were turning (controller/neutron/compute).

Version-Release number of selected component (if applicable):
rhel-osp-installer-0.1.6-5.el6ost.noarch
foreman-installer-1.5.0-0.6.RC2.el6ost.noarch
openstack-foreman-installer-2.0.16-1.el6ost.noarch

How reproducible:
Probably every time

Steps to Reproduce:
1. Deploy one compute host, neutron, and controller.
2. Add another compute host and click deploy.
3. Return to the deployment page.

Actual results:
All three indicators show action; see attached screenshot.

Expected results:
Only the compute group should indicate change/progress.

Additional info:
Does it do the right thing in deploying the new compute host, or does it run all the hosts again?
Sorry for the delay; I missed the needinfo email. From what I recall, it only changed the new compute host and didn't touch the other hosts. I'll try this again on my next Foreman deployment to be sure.
Pull request here: https://github.com/theforeman/staypuft/pull/385
Verified: FailedQA

Environment:
openstack-puppet-modules-2014.2.6-1.el7ost.noarch
rhel-osp-installer-0.5.2-1.el7ost.noarch
ruby193-rubygem-foreman_openstack_simplify-0.0.6-8.el7ost.noarch
rhel-osp-installer-client-0.5.2-1.el7ost.noarch
openstack-foreman-installer-3.0.5-1.el7ost.noarch
ruby193-rubygem-staypuft-0.5.3-1.el7ost.noarch

Added one compute to an already deployed setup and clicked deploy. All hosts show as being deployed (there is a clock indicator next to all deployed and being-deployed hosts), instead of just the added compute.
After multiple rounds and a lot of investigation, we've determined that the bug is far more involved than we originally thought, and fixing it is significantly more invasive and difficult as well.

Basically, what it comes down to is that the logic to determine whether a host is deployed was written for a single deployment run, not for multiple runs. There is no 100% accurate way to say a host was completely deployed in a previous run. We found an approach that would work for probably 90% of the use cases, but it has a critical limitation. Since we switched to PuppetSSH for deployments, hosts that have been deployed have a specific parameter set on them that reflects the Puppet runmode (service), while hosts that haven't been deployed have a different runmode (none). The problem comes when you have a host that was already deployed in Foreman but needs to be redeployed (say, after a faulty disk was replaced). That host is removed from the deployment, then re-added, but its runmode value doesn't change. That would cause the host to immediately appear in the deployed column and make it *impossible* to deploy through RHEL-OSP Installer.

The options we have for resolving this are:
* Do the above, but add a step so that the runmode is reset to none when adding a host to the deployment.
* Completely rewrite the logic around what is deployed and what isn't, to make it smarter.
* Add some other variable or parameter somewhere (host, deployment, etc.) that will keep track of which hosts are deployed and handle the corner case above.

Given that all of these changes are more invasive than a simple UI update, I'd like to defer this.
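To make the limitation concrete, here is a minimal, hypothetical Ruby sketch of the runmode-based check and the first proposed fix. This is not the actual Staypuft code: the `Host` struct, `deployed?`, and `add_to_deployment` names are illustrative assumptions only; in real Foreman the runmode would live in a host parameter.

```ruby
# Hypothetical model of the runmode-based "deployed" check described above.
# In real Staypuft/Foreman the runmode is a parameter on the host record;
# here it is just an attribute on a plain struct.
Host = Struct.new(:name, :runmode, keyword_init: true)

# The flawed heuristic: a host counts as deployed iff its Puppet
# runmode is 'service'.
def deployed?(host)
  host.runmode == 'service'
end

# First proposed fix: reset the runmode to 'none' whenever a host is
# (re-)added to a deployment, so a previously deployed host is
# scheduled for a fresh run.
def add_to_deployment(host)
  host.runmode = 'none'
  host
end

# Corner case: a host deployed in a previous run keeps runmode 'service',
# so after being removed and re-added it would still look deployed.
replaced = Host.new(name: 'compute2', runmode: 'service')
puts deployed?(replaced)   # still looks deployed, though it needs a redeploy

add_to_deployment(replaced)
puts deployed?(replaced)   # no longer counted as deployed
```

This only covers the first bullet above; the other two options (smarter deployment logic, or a separate tracking parameter) would touch considerably more of the codebase.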
Closing the list of bugs for RHEL OSP Installer since its support cycle has already ended [0]. If some bug was closed by mistake, feel free to re-open. For new deployments, please use RHOSP director (starting with version 7).

-- Jaromir Coufal
Sr. Product Manager
Red Hat OpenStack Platform

[0] https://access.redhat.com/support/policy/updates/openstack/platform