Description of problem:
openstack-db --service nova --update fails while upgrading from Grizzly to Havana.

Version-Release number of selected component (if applicable):
openstack-utils-2013.2-2.el6ost.noarch

How reproducible:
100%

Steps to Reproduce:
1. Upgrade Grizzly to Havana, using "openstack-db --service nova --update" to update the nova database.

Actual results:
Error message:
No handlers could be found for logger "neutron.common.legacy"
(Database not updated)

Expected results:
2013-12-15 09:56:56.575 14942 INFO migrate.versioning.api [-] 161 -> 162...
2013-12-15 09:56:56.578 14942 INFO migrate.versioning.api [-] done
2013-12-15 09:56:56.578 14942 INFO migrate.versioning.api [-] 162 -> 163...
2013-12-15 09:56:56.580 14942 INFO migrate.versioning.api [-] done
...
2013-12-15 09:57:39.263 14942 INFO migrate.versioning.api [-] 214 -> 215...
2013-12-15 09:57:39.265 14942 INFO migrate.versioning.api [-] done
2013-12-15 09:57:39.265 14942 INFO migrate.versioning.api [-] 215 -> 216...
2013-12-15 09:57:39.277 14942 INFO migrate.versioning.api [-] done

Additional info:
Workaround for the problem: nova-manage db sync
Roey: Can you confirm which versions of the following packages were installed when you encountered this problem?
- openstack-neutron
- openstack-nova-common

If it's possible to reproduce this bug on your end, would you run the following command and attach the output?

bash -x /usr/bin/openstack-db --service nova --update

Thanks.
Versions:
openstack-neutron-2013.2-16.el6ost.noarch
openstack-nova-common-2013.2-10.el6ost.noarch

[root@rose12 ~(keystone_admin)]# bash -x /usr/bin/openstack-db --service nova --update
+ systemctl --version
+ '[' 3 -gt 0 ']'
+ case "$1" in
+ shift
+ APP=nova
+ shift
+ '[' 1 -gt 0 ']'
+ case "$1" in
+ MODE=sync
+ shift
+ '[' 0 -gt 0 ']'
+ '[' '!' sync ']'
+ '[' '!' nova ']'
+ case "$APP" in
+ '[' sync = sync ']'
+ db_synced
++ db_manage version
+ version=
+ return 1
+ echo 'Can'\''t determine the existing sync level.'
Can't determine the existing sync level.
+ echo 'Please ensure the database is running and already initialised.'
Please ensure the database is running and already initialised.
+ exit 1
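The trace above shows db_manage printing an empty version string, which trips the error branch. A minimal sketch of that check follows; db_manage here is a stub standing in for the real helper (which wraps "nova-manage db version"), so this is an illustration of the logic, not the actual script:

```shell
# Sketch of the sync-level check seen in the bash -x trace above.
# db_manage is stubbed to print nothing, simulating the failing
# "nova-manage db version" call.
db_manage() { printf ''; }

version=$(db_manage version)
if [ -z "$version" ]; then
    msg="Can't determine the existing sync level."
else
    msg="existing sync level: $version"
fi
echo "$msg"
```

With an empty version, the script has no way to tell whether the database is unreachable or simply uninitialised, which is why the error message covers both cases.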
Hmm, that shows a different error message than the one in your original report ('No handlers could be found for logger "neutron.common.legacy"'). The error in comment 5 can be caused by bad permissions on /var/log/nova/nova-manage.log. Bug #1044155 tracks having openstack-db provide better error messages in this situation. You can verify you're running into this problem by looking at the permissions on /var/log/nova/nova-manage.log: if it's owned by root, that's what's causing openstack-db to bail out. Are you able to reproduce the error message you reported in comment 1?
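The ownership check described above can be scripted. This is a hedged sketch: a temp file stands in for /var/log/nova/nova-manage.log so it runs anywhere, and the diagnosis message is illustrative, not output of the real tool:

```shell
# Sketch of the diagnosis: if the nova-manage log is owned by root
# while nova-manage runs as a non-root user, logging (and hence the
# db_manage version call) fails. A temp file stands in for
# /var/log/nova/nova-manage.log.
logfile=$(mktemp)
owner=$(stat -c '%U' "$logfile" 2>/dev/null || stat -f '%Su' "$logfile")
if [ "$owner" = "root" ] && [ "$(id -un)" != "root" ]; then
    diagnosis="log owned by root: openstack-db will bail out"
else
    diagnosis="log ownership ok (owner: $owner)"
fi
echo "$diagnosis"
rm -f "$logfile"
```

On a real host you would point logfile at /var/log/nova/nova-manage.log and expect the owner to be the nova user.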
The error in comment 5 is caused by the permissions problem described in comment 6. I can't seem to reproduce the error in comment 1; it might have been my mistake, or an issue in older versions that has since been fixed.
To test:

chown root: /var/log/nova/nova-manage.log
openstack-db --service nova --update

You'll probably get a DB access error unless you specify the non-default passwords, but that's inconsequential. To verify, just check that /var/log/nova/nova-manage.log has changed back to being owned by the nova user.
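The verify step at the end can be expressed as a small check. A hedged sketch, with a temp file and the current user standing in for the real log path and the "nova" account:

```shell
# Sketch of the verify step: after openstack-db runs, the log should be
# owned by the service user again. Stand-ins keep the sketch runnable:
# expected_owner would be "nova" and logfile would be
# /var/log/nova/nova-manage.log on a real host.
expected_owner=$(id -un)
logfile=$(mktemp)
actual_owner=$(stat -c '%U' "$logfile" 2>/dev/null || stat -f '%Su' "$logfile")
if [ "$actual_owner" = "$expected_owner" ]; then
    verify=pass
else
    verify=fail
fi
echo "verify=$verify"
rm -f "$logfile"
```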
Updating Target Milestone for bug in erratum
*** Bug 1049069 has been marked as a duplicate of this bug. ***
Verified on Grizzly -> Havana with:

Version-Release number of selected component (if applicable):
-------------------------------------------------------------
Grizzly puddle: 2014-01-02.2
Havana puddle: 2014-01-16.1
python-nova-2013.2.1-2.el6ost.noarch
openstack-nova-console-2013.2.1-2.el6ost.noarch
openstack-nova-scheduler-2013.2.1-2.el6ost.noarch
python-novaclient-2.15.0-2.el6ost.noarch
openstack-nova-common-2013.2.1-2.el6ost.noarch
openstack-nova-cert-2013.2.1-2.el6ost.noarch
openstack-nova-compute-2013.2.1-2.el6ost.noarch
openstack-nova-api-2013.2.1-2.el6ost.noarch
openstack-nova-conductor-2013.2.1-2.el6ost.noarch
openstack-nova-novncproxy-2013.2.1-2.el6ost.noarch

Results:
--------
[root@rose12 ~]# openstack-db --service nova --update
2014-01-19 12:52:05.094 32424 INFO migrate.versioning.api [-] 161 -> 162...
2014-01-19 12:52:05.096 32424 INFO migrate.versioning.api [-] done
2014-01-19 12:52:05.096 32424 INFO migrate.versioning.api [-] 162 -> 163...
2014-01-19 12:52:05.098 32424 INFO migrate.versioning.api [-] done
2014-01-19 12:52:05.098 32424 INFO migrate.versioning.api [-] 163 -> 164...
2014-01-19 12:52:05.100 32424 INFO migrate.versioning.api [-] done
2014-01-19 12:52:05.100 32424 INFO migrate.versioning.api [-] 164 -> 165...
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://rhn.redhat.com/errata/RHBA-2014-0046.html