Description of problem:
Just after a reboot of the controller I see lots of errors in /var/log/messages where OpenStack services are unable to connect to the DB. E.g.:

Feb 18 13:27:10 fed-cloud09 cinder-volume: 2015-02-18 13:27:10.941 4396 TRACE cinder.openstack.common.threadgroup OperationalError: (OperationalError) (2013, "Lost connection to MySQL server at 'reading initial communication packet', system error: 104") None None

A few moments later the connection succeeds and everything works fine, so this is only about getting rid of these errors in the log and postponing startup a little. This is probably distribution-specific, so I am filing it here and not upstream.

I understand that we cannot depend on db.service directly, because it can be on a different machine. I would suggest creating a service openstack-wait-for-db.service, which all other services (e.g. openstack-cinder-api.service) would list in "After". This service would just check whether MARIADB_HOST points to a different host (and then exit immediately) or to this machine, and in the latter case wait until the mariadb service is available. A rough sketch of such a check is at the end of this description.

Version-Release number of selected component (if applicable): happened to me on RDO Icehouse
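For illustration, here is a minimal sketch of what openstack-wait-for-db.service could run as its ExecStart command. The MARIADB_HOST variable is taken from the description above; the port 3306 check and the 60-second timeout are assumptions for the sketch, not an existing RDO interface.

#!/usr/bin/env python
# Hypothetical helper for a proposed openstack-wait-for-db.service.
# Exits immediately if the DB host is remote; otherwise waits until the
# local mariadb listener accepts TCP connections.
import os
import socket
import sys
import time

def is_local(host):
    # Treat the usual loopback names and this machine's own names as local.
    local_names = {"localhost", "127.0.0.1", "::1",
                   socket.gethostname(), socket.getfqdn()}
    return host in local_names

def main():
    host = os.environ.get("MARIADB_HOST", "localhost")
    if not is_local(host):
        # DB lives on another machine; nothing to wait for locally.
        return 0
    deadline = time.time() + 60  # assumed: give mariadb up to a minute
    while time.time() < deadline:
        try:
            # A successful TCP connect on the default MariaDB port is
            # enough; dependent services handle authentication themselves.
            sock = socket.create_connection((host, 3306), timeout=2)
            sock.close()
            return 0
        except socket.error:
            time.sleep(1)
    return 1  # still unreachable; let systemd report the failure

if __name__ == "__main__":
    sys.exit(main())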
Makes sense, targeted to Mitaka and possibly backported to Liberty.
Why not just avoid logging the TRACE if it is too verbose? The oslo.db library will keep retrying the connection and will succeed once the database is available.
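For reference, that retry behaviour is governed by the standard [database] options in each service's configuration file; the values shown here are just the usual defaults, not a recommendation:

[database]
# Maximum number of connection retries before giving up
# (a negative value means retry forever).
max_retries = 10
# Seconds to wait between connection retries.
retry_interval = 10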
I think this is stale and can be closed?