Created attachment 751736 [details]
api.log

"cinder backup-create" never exits; volumes are stuck in the backing-up status.

[root@orange-vdsf ~(keystone_admin)]# cinder list
+--------------------------------------+------------+--------------+------+-------------+----------+-------------+
|                  ID                  |   Status   | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+------------+--------------+------+-------------+----------+-------------+
| 62f94e4f-059a-43de-ae7e-1ce657623056 | backing-up |     None     |  1   |     None    |  false   |             |
| d5b0c68f-9594-4bd2-be22-b96c608cfe3c | backing-up |     None     |  1   |     None    |  false   |             |
+--------------------------------------+------------+--------------+------+-------------+----------+-------------+

[root@orange-vdsf ~(keystone_admin)]# cinder backup-list
+--------------------------------------+--------------------------------------+----------+------+------+--------------+-----------+
|                  ID                  |              Volume ID               |  Status  | Name | Size | Object Count | Container |
+--------------------------------------+--------------------------------------+----------+------+------+--------------+-----------+
| cf7bdeef-55c5-4024-9470-36f1b33ae3d6 | 62f94e4f-059a-43de-ae7e-1ce657623056 | creating | None |  1   |     None     |    None   |
| e71ecc5b-1496-4c6f-ad8b-37fb9f44e00b | d5b0c68f-9594-4bd2-be22-b96c608cfe3c | creating | None |  1   |     None     |    None   |
+--------------------------------------+--------------------------------------+----------+------+------+--------------+-----------+
Created attachment 751737 [details]
volume.log
Something has gone really wrong here with the database:

ProgrammingError: (ProgrammingError) (1146, "Table 'cinder.volumes' doesn't exist")

What install/upgrade steps were performed to get the server to this state? Install RHOS 3 builds, or upgrade from RHOS 2.1? Packstack or no packstack?
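A quick way to confirm whether the schema is really missing before digging into the service logs. This is a hedged sketch, not taken from the report: the helper name and the mysql invocation are illustrative, and it assumes a local mysql client with access to the cinder database.

```shell
# Hedged sketch: check that the "volumes" table exists in the cinder
# schema. If it is missing, the ProgrammingError (1146) above is
# expected and the schema was never populated.

has_table() {
  # Reads "SHOW TABLES" output on stdin; succeeds if the named
  # table appears as a whole line.
  grep -qxF "$1"
}

# Example usage (illustrative; requires mysql client access):
#   if ! mysql -N -e 'SHOW TABLES' cinder | has_table volumes; then
#     echo "cinder.volumes is missing; try 'cinder-manage db sync'" >&2
#   fi
```

If the table is absent on a fresh install, re-running the schema migration (`cinder-manage db sync`) is the usual first step.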
(In reply to Eric Harney from comment #3)
> Something has gone really wrong here with the database:
>
> ProgrammingError: (ProgrammingError) (1146, "Table 'cinder.volumes' doesn't
> exist")
>
> What install/upgrade steps were performed to get the server to this state?
> Install RHOS 3 builds, upgrade from RHOS 2.1? Packstack/no packstack?

This was a clean install using packstack --allinone.
Created attachment 755755 [details]
api.log - gold-vdsd

Attaching api.log from another host with the same issue. That host had an empty volume.log, so I am not attaching volume.log. This host does, however, log many errors of the form "Maximum number of volumes allowed (10) exceeded". I tried "cinder backup-create" both before and after raising the maximum number of volumes, and it became stuck both times.
Chances are that swift (and cinder-backup) are not running. I've raised this upstream so that this scenario results in a less time-consuming failure mode: https://bugs.launchpad.net/cinder/+bug/1200040
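Since the hang comes from the backup backend not being up, a pre-flight check avoids the time-consuming failure mode. This is a hedged sketch, not from the report: the helper name is made up, and it assumes "cinder service-list"-style output where an up cinder-backup row contains the token "up".

```shell
# Hedged sketch: before running "cinder backup-create", verify that a
# cinder-backup service reports "up"; otherwise volumes will sit in
# "backing-up" indefinitely (see LP#1200040).

backup_service_up() {
  # Reads service-list output on stdin; succeeds only if a
  # cinder-backup row also contains the standalone word "up".
  grep 'cinder-backup' | grep -qw 'up'
}

# Example usage (illustrative; assumes a configured cinderclient):
#   if ! cinder service-list 2>/dev/null | backup_service_up; then
#     echo "cinder-backup is not up; aborting backup-create" >&2
#     exit 1
#   fi
```

The same idea applies to swift: if the backup driver's object store is down, fail fast rather than leaving the volume in a transitional state.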
Verified on:
openstack-cinder-2013.2-0.11.rc1.el6ost.noarch
python-cinder-2013.2-0.11.rc1.el6ost.noarch
python-cinderclient-1.0.6-1.el6ost.noarch

"cinder backup-create" now fails with an error; however, it no longer gets stuck:

~(keystone_admin)]# cinder backup-list
/usr/lib/python2.6/site-packages/babel/__init__.py:33: UserWarning: Module backports was already imported from /usr/lib64/python2.6/site-packages/backports/__init__.pyc, but /usr/lib/python2.6/site-packages is being added to sys.path
  from pkg_resources import get_distribution, ResolutionError
+--------------------------------------+--------------------------------------+--------+---------+------+--------------+---------------+
|                  ID                  |              Volume ID               | Status |   Name  | Size | Object Count |   Container   |
+--------------------------------------+--------------------------------------+--------+---------+------+--------------+---------------+
| 756ef62a-1fe2-4f1a-823a-07a9029ceae9 | f12a2575-1c48-40f4-8580-c06930af553e | error  | backup_ |  6   |     None     | volumebackups |
| ac238322-9ccd-4e52-a081-e76caa2bc323 | fc45046e-b278-4df3-a29f-242f883a8d01 | error  |  kaka1  |  1   |     None     | volumebackups |
+--------------------------------------+--------------------------------------+--------+---------+------+--------------+---------------+
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. http://rhn.redhat.com/errata/RHEA-2013-1859.html