Previously, Block Storage configuration, including the db_sync step, ran simultaneously across HA nodes. With multiple cinder-manage db_sync processes running at once, some database tables ended up in unexpected states, causing some of the processes to report errors.
To avoid this problem, the first node to run cinder-manage db_sync now completes before the other HA nodes attempt to run it, which prevents tables from ending up in inconsistent states.
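The serialization described above can be sketched as a shell guard: only the node holding the Cinder VIP runs the migration, while the other nodes poll until the first node's services answer. This is a minimal sketch of the assumed logic behind Exec[i-am-cinder-vip-OR-cinder-is-up-on-vip]; the function name, the example checks, and the VIP address are illustrative, not the actual installer code.

```shell
#!/bin/sh
# guard_db_sync VIP_CHECK API_CHECK SYNC_CMD
#   If VIP_CHECK succeeds, this node holds the Cinder VIP and runs
#   SYNC_CMD (the schema migration) exactly once.  Otherwise the node
#   polls API_CHECK until the first node's services answer, so no two
#   nodes ever run the migration concurrently.
guard_db_sync() {
    vip_check=$1
    api_check=$2
    sync_cmd=$3
    if eval "$vip_check"; then
        # First node: run the schema migration.
        eval "$sync_cmd"
    else
        # Standby nodes: wait until the API answers on the VIP.
        until eval "$api_check"; do
            sleep 1
        done
    fi
}

# Example wiring (illustrative commands and VIP, not the real manifests):
#   guard_db_sync \
#     "ip addr show | grep -q ' 192.0.2.10/'" \
#     "curl -sf http://192.0.2.10:8776/ >/dev/null" \
#     "cinder-manage db sync"
```

In the real deployment the equivalent ordering is expressed as Puppet resource dependencies rather than a shell loop, but the effect is the same: db_sync runs on exactly one node at a time.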
Description of problem:
During the first puppet run across the nodes, Cinder installs fine on the first node. The other two nodes wait for the first to complete (the wait on Exec[i-am-cinder-vip-OR-cinder-is-up-on-vip]), but then eventually hit errors when they try to run "cinder-manage db_sync":
Notice: /Stage[main]/Cinder::Api/Exec[cinder-manage db_sync]/returns: 2014-06-25 10:58:13.329 27248 TRACE cinder OperationalError: (OperationalError) (1050, "Table 'sm_flavors' already exists") '\nCREATE TABLE sm_flavors (\n\tcreated_at DATETIME, \n\tupdated_at DATETIME, \n\tdeleted_at DATETIME, \n\tdeleted BOOL, \n\tid INTEGER NOT NULL AUTO_INCREMENT, \n\tlabel VARCHAR(255), \n\tdescription VARCHAR(255), \n\tPRIMARY KEY (id), \n\tCHECK (deleted IN (0, 1))\n)ENGINE=InnoDB\n\n' ()
Notice: /Stage[main]/Cinder::Api/Exec[cinder-manage db_sync]/returns: 2014-06-25 10:58:13.329 27248 TRACE cinder
Error: /Stage[main]/Cinder::Api/Exec[cinder-manage db_sync]: Failed to call refresh: cinder-manage db sync returned 1 instead of one of [0]
More detail: cinder-api and cinder-scheduler run without error after the first puppet run, and the puppet agent does not show errors on subsequent runs.
Verified using openstack-foreman-installer-2.0.16-1.el6ost.noarch on the Foreman node; on the OpenStack nodes I used poodle 5.0.el7/2014-07-31.1.
I hit BZ #1125136, but otherwise Puppet ran fine. The Cinder db_sync step passed without errors on all three nodes. Will attach "puppet agent -tvd" logs.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory, and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
http://rhn.redhat.com/errata/RHEA-2014-1003.html