Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1113294

Summary: cinder db_sync fails on 2nd and 3rd nodes in ha-all-in-one-controller
Product: Red Hat OpenStack
Reporter: Crag Wolfe <cwolfe>
Component: openstack-foreman-installer
Assignee: Crag Wolfe <cwolfe>
Status: CLOSED ERRATA
QA Contact: tkammer
Severity: high
Priority: high
Version: 5.0 (RHEL 7)
CC: breeler, jguiditt, jstransk, mburns, morazi, rhos-maint, yeylon
Target Milestone: ga
Keywords: OtherQA
Target Release: 5.0 (RHEL 7)
Hardware: Unspecified
OS: Unspecified
Fixed In Version: openstack-foreman-installer-2.0.13-1.el6ost
Doc Type: Bug Fix
Doc Text:
Block Storage configuration, including db_sync, previously ran simultaneously across nodes. With multiple cinder-manage db_sync processes running at once, some tables ended up in unexpected states for some of the processes, causing errors. To avoid this, the first node now completes cinder-manage db_sync before the other HA nodes attempt to run it, preventing tables from ending up in incorrect states.
Last Closed: 2014-08-04 18:35:04 UTC
Type: Bug
Attachments:
- puppet output from all three nodes (no flags)
- fix verified - puppet agent logs from all 3 nodes (no flags)

Description Crag Wolfe 2014-06-25 21:47:24 UTC
Description of problem:

During the first puppet run across nodes, cinder installs fine on the first node.  The other two nodes wait for the first to complete correctly (they wait on Exec[i-am-cinder-vip-OR-cinder-is-up-on-vip]), but they eventually hit errors when they try to run "cinder-manage db_sync":

Notice: /Stage[main]/Cinder::Api/Exec[cinder-manage db_sync]/returns: 2014-06-25 10:58:13.329 27248 TRACE cinder OperationalError: (OperationalError) (1050, "Table 'sm_flavors' already exists") '\nCREATE TABLE sm_flavors (\n\tcreated_at DATETIME, \n\tupdated_at DATETIME, \n\tdeleted_at DATETIME, \n\tdeleted BOOL, \n\tid INTEGER NOT NULL AUTO_INCREMENT, \n\tlabel VARCHAR(255), \n\tdescription VARCHAR(255), \n\tPRIMARY KEY (id), \n\tCHECK (deleted IN (0, 1))\n)ENGINE=InnoDB\n\n' ()
Notice: /Stage[main]/Cinder::Api/Exec[cinder-manage db_sync]/returns: 2014-06-25 10:58:13.329 27248 TRACE cinder 
Error: /Stage[main]/Cinder::Api/Exec[cinder-manage db_sync]: Failed to call refresh: cinder-manage db sync returned 1 instead of one of [0]
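The failure pattern above is a classic migration race: several nodes run the same schema migration concurrently, and the losers see "Table already exists". A minimal Puppet sketch of the serialization approach the fix describes is below. This is not the actual openstack-foreman-installer change; the `$cinder_is_first_node`, `$cinder_vip`, and resource names here are hypothetical, and the real manifests use their own facts and guards.

```puppet
# Hypothetical sketch: only the designated first node runs db_sync;
# the other HA nodes block until the schema migration has completed.
if $cinder_is_first_node {
  exec { 'cinder-manage db_sync':
    path        => '/usr/bin:/bin',
    command     => 'cinder-manage db sync',
    refreshonly => true,
    subscribe   => Package['openstack-cinder'],
  }
} else {
  # Wait until the first node has finished: poll the cinder API on the
  # VIP, which only answers once install + db_sync succeeded there.
  exec { 'wait-for-cinder-db-sync':
    path      => '/usr/bin:/bin',
    command   => "curl -sf http://${cinder_vip}:8776/ >/dev/null",
    tries     => 60,
    try_sleep => 10,
  }
  # Any resource that touches the cinder DB should require the wait.
  Exec['wait-for-cinder-db-sync'] -> Service['openstack-cinder-api']
}
```

The key point is ordering, not locking: as long as exactly one node performs the migration and the others gate on its completion, concurrent CREATE TABLE statements can never race.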

Comment 1 Crag Wolfe 2014-06-25 21:54:13 UTC
Created attachment 912250 [details]
puppet output from all three nodes

Comment 4 Crag Wolfe 2014-06-27 00:07:37 UTC
More detail: cinder-api and cinder-scheduler are running without error after the first puppet run, and the puppet agent does not show errors on subsequent runs.

Comment 6 Jason Guiditta 2014-07-09 15:25:57 UTC
Merged

Comment 7 Jason Guiditta 2014-07-09 15:26:32 UTC
Oops, not in build yet, resetting state

Comment 10 Jiri Stransky 2014-08-01 12:36:53 UTC
Verified using openstack-foreman-installer-2.0.16-1.el6ost.noarch on the Foreman node; on the OpenStack nodes I used poodle 5.0.el7/2014-07-31.1.

I hit BZ #1125136, but otherwise puppet ran OK. The cinder db_sync step passed without errors on all three nodes. Will attach "puppet agent -tvd" logs.

Comment 11 Jiri Stransky 2014-08-01 12:39:14 UTC
Created attachment 923288 [details]
fix verified - puppet agent logs from all 3 nodes

Comment 13 errata-xmlrpc 2014-08-04 18:35:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHEA-2014-1003.html