Description of problem:
=======================
Used packstack to install OpenStack as follows:

Cloud Controller installed with: dashboard, cinder, nova-scheduler, database, queue, nova-cert, nova-consoleauth, glance-api, glance-registry, keystone

Nova compute node installed with: nova-compute, nova-network, nova-api

packstack fails with an error.

Version-Release number of selected component (if applicable):
============================================================
Folsom.
Packstack version: openstack-packstack-2012.2.2-0.1.dev262.el6.noarch

How reproducible:
=================
100%

Steps to Reproduce:
===================
1. Use packstack with the attached answer file. Make sure you run packstack from the Cloud Controller CLI.

Actual results:
===============
1. packstack exits with the following error:

Testing if puppet apply is finished : 10.35.110.15_api_nova.pp.log    [ ERROR ]
Error during puppet run : err: /Stage[main]/Nova::Api/Exec[nova-db-sync]: Failed to call refresh: /usr/bin/nova-manage db sync returned 1 instead of one of [0] at /var/tmp/a1817a9e-cf20-465a-b241-9f1009f7c05b/modules/nova/manifests/api.pp:98
Please check log file /var/tmp/a1817a9e-cf20-465a-b241-9f1009f7c05b/openstack-setup_2012_12_30_17_12_41.log for more information

I checked the configuration files on the nova compute node. packstack misconfigured nova.conf with 127.0.0.1, while it should have set the routable Cloud Controller IP address:

glance_api_servers=127.0.0.1:9292
sql_connection=mysql://nova:nova_default_password@127.0.0.1/nova
qpid_hostname=127.0.0.1

The same goes for api-paste.ini:

auth_host=127.0.0.1

* Please note that 127.0.0.1 is the default value set by packstack in the answers file. I ran packstack from the Cloud Controller machine, so this value should be valid. In order to support this topology, packstack should translate the loopback address into a routable IP address (a sketch of one possible approach follows below).

Expected results:
=================
packstack should run with no errors.
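For illustration, one way packstack could perform that translation (a minimal Python sketch, not packstack's actual code; routable_local_ip and translate_loopback are hypothetical names) is to let the kernel pick the outgoing interface via a UDP socket and read back its source address:

import socket

def routable_local_ip(probe_addr="8.8.8.8"):
    """Return a routable address for this host instead of 127.0.0.1.

    connect() on a UDP socket sends no packets; it only makes the
    kernel select the outgoing interface, whose address is then
    readable via getsockname().
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect((probe_addr, 80))
        return s.getsockname()[0]
    finally:
        s.close()

def translate_loopback(value):
    """Replace a loopback default with the host's routable address."""
    if value.startswith("127."):
        return routable_local_ip()
    return value

With something like this, the answer-file default could stay 127.0.0.1 for convenience on the controller, while the values actually written to nova.conf on remote nodes would be routable.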
Created attachment 670434 [details] packstack log (DEBUG mode)
Created attachment 670435 [details] answers file
I think the logic should be (warning: pseudo-code):

if any of the IPs != 127.0.0.1, then none of the IPs should be 127.0.0.1
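Expressed in Python, that check might look like the following (a sketch only; validate_host_params and the answer-file keys shown are illustrative, not packstack's actual names):

def validate_host_params(params):
    """Reject mixed loopback/routable host configurations.

    params maps answer-file keys (e.g. "CONFIG_NOVA_API_HOST")
    to IP addresses. If any host is not 127.0.0.1, then no host
    may be 127.0.0.1.
    """
    ips = set(params.values())
    if "127.0.0.1" in ips and len(ips) > 1:
        offenders = [k for k, v in params.items() if v == "127.0.0.1"]
        raise ValueError(
            "127.0.0.1 is only valid in an all-in-one install; "
            "replace it in: %s" % ", ".join(offenders))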
(In reply to comment #4)
> I think the logic should be (warning: pseudo-code):
> if any of the IPs != 127.0.0.1, then none of the IPs should be 127.0.0.1

Agree, and marking this bug as Triaged, as I just ran into this same issue. Basically, 127.0.0.1 should only be valid if you are doing an 'all-in-one' install. If you have any multi-host setup at all, setting anything to 127.0.0.1 will either cause packstack to fail (as above) or just result in a non-working configuration.

Perhaps we should provide a cmdline option/config-file param for all-in-one installs that sets the IP address for all services to 127.0.0.1. Unless you set that specific option/flag, packstack would never default to 127.0.0.1, which prevents you from accidentally selecting it in a multi-host install scenario.
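The flag-gated default could look something like this (a rough Python sketch; the allinone option and apply_host_defaults helper are hypothetical, not an existing packstack flag):

def apply_host_defaults(options, params):
    """Default host params to 127.0.0.1 only under an explicit
    all-in-one flag; otherwise require routable addresses.

    options.allinone is a hypothetical command-line flag; params maps
    host parameter names to user-supplied values (None if unset).
    """
    for key, value in params.items():
        if value is None:
            if options.allinone:
                params[key] = "127.0.0.1"
            else:
                raise ValueError(
                    "%s must be set to a routable address "
                    "(or pass the all-in-one flag for a "
                    "single-host install)" % key)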
(In reply to comment #5)
> (In reply to comment #4)
> > I think the logic should be (warning: pseudo-code):
> > if any of the IPs != 127.0.0.1, then none of the IPs should be 127.0.0.1
>
> Agree, and marking this bug as Triaged, as I just ran into this same issue.
> Basically, 127.0.0.1 should only be valid if you are doing an 'all-in-one'
> install. If you have any multi-host setup at all, setting anything to
> 127.0.0.1 will either cause packstack to fail (as above) or just result in
> a non-working configuration.
>
> Perhaps we should provide a cmdline option/config-file param for all-in-one
> installs that sets the IP address for all services to 127.0.0.1. Unless you
> set that specific option/flag, packstack would never default to 127.0.0.1,
> which prevents you from accidentally selecting it in a multi-host install
> scenario.

After further consultation with Yaniv Kaul, I think that we should avoid 127.0.0.1 even in all-in-one installations. If we do use it, the user will have to modify almost all .conf files in order to expand the system later.
Patch to change packstack to no longer default to 127.0.0.1:
https://review.openstack.org/#/c/19867/

This will prevent this problem from occurring.
This is a dup of https://bugzilla.redhat.com/show_bug.cgi?id=886541 which is now ON_QA.

*** This bug has been marked as a duplicate of bug 886541 ***