When installing RDO with Packstack, I got the following error:

Applying 192.168.142.4_neutron.pp
Applying 192.168.142.3_neutron.pp
Applying 192.168.142.2_neutron.pp
192.168.142.4_neutron.pp :                            [ DONE ]
192.168.142.3_neutron.pp :                            [ DONE ]
192.168.142.2_neutron.pp :                            [ DONE ]
Applying 192.168.142.2_osclient.pp
Applying 192.168.142.2_horizon.pp
Applying 192.168.142.2_ceilometer.pp
192.168.142.2_osclient.pp :                           [ DONE ]
192.168.142.2_horizon.pp :                            [ DONE ]
                                                      [ ERROR ]

ERROR : Error appeared during Puppet run: 192.168.142.2_ceilometer.pp
Error: /Stage[main]/Ceilometer::Db/Exec[ceilometer-dbsync]: Failed to call refresh: ceilometer-dbsync --config-file=/etc/ceilometer/ceilometer.conf returned 1 instead of one of [0]
You will find full trace in log /var/tmp/packstack/20131129-175308-CUtdHb/manifests/192.168.142.2_ceilometer.pp.log
Please check log file /var/tmp/packstack/20131129-175308-CUtdHb/openstack-setup.log for more information

When I checked the log file mentioned, it contained a few warnings and the following error:

Notice: /Stage[main]/Ceilometer::Db/Exec[ceilometer-dbsync]/returns: 2013-11-29 18:25:20.358 24476 CRITICAL ceilometer [-] could not connect to localhost:27017: [Errno 111] ECONNREFUSED
Error: /Stage[main]/Ceilometer::Db/Exec[ceilometer-dbsync]: Failed to call refresh: ceilometer-dbsync --config-file=/etc/ceilometer/ceilometer.conf returned 1 instead of one of [0]
Error: /Stage[main]/Ceilometer::Db/Exec[ceilometer-dbsync]: ceilometer-dbsync --config-file=/etc/ceilometer/ceilometer.conf returned 1 instead of one of [0]

On rerunning Packstack with the same answer file, it runs through to completion.
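The ECONNREFUSED on localhost:27017 suggests mongod was not running (or had already exited) when the Ceilometer manifest called ceilometer-dbsync. A rough way to confirm that on the Ceilometer host; service name and log path are assumed from the RDO/CentOS mongodb-server packaging, so adjust if yours differ:

  # Is MongoDB actually up and listening on 27017?
  service mongod status
  netstat -tlnp | grep 27017

  # mongod's own log usually says why it exited
  # (e.g. not enough free space to preallocate its journal files)
  tail -n 50 /var/log/mongodb/mongodb.log

  # If mongod starts cleanly, the failed step can be retried by hand
  service mongod start
  ceilometer-dbsync --config-file=/etc/ceilometer/ceilometer.conf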
Just saw bug #1028690 - this looks like the same one. I'm installing Packstack on a cluster of VMs, each with an 8GB disk allocated, and the CentOS installer auto-partitioned them roughly as 500MB /boot, 3.5GB /, and 4GB swap. Is there a way to either (a) ensure that Mongo can run in this sort of configuration, or (b) get a clearer error message? Also, I don't understand why it works the second time round.

Thanks, Dave.
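For (a), one option worth trying is MongoDB's smallfiles setting, which reduces the size of the data and journal files mongod preallocates, so it can start on a root filesystem this small. This is only a sketch of a workaround, with the config path and service name assumed from the CentOS packaging:

  # /etc/mongodb.conf (assumed path) -- shrink preallocated files
  smallfiles = true

  # then restart mongod and re-run the failed step
  service mongod restart
  ceilometer-dbsync --config-file=/etc/ceilometer/ceilometer.conf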
*** This bug has been marked as a duplicate of bug 1034395 ***