Bug 1082736 - GlusterFS: Rebased snapshot volume file is not deleted
Summary: GlusterFS: Rebased snapshot volume file is not deleted
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 4.0
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: z4
Target Release: 4.0
Assignee: Eric Harney
QA Contact: Dafna Ron
URL:
Whiteboard:
Depends On:
Blocks: 1045196
 
Reported: 2014-03-31 17:52 UTC by Eric Harney
Modified: 2016-04-26 14:42 UTC
CC: 8 users

Fixed In Version: openstack-cinder-2013.2.3-1.el6ost
Doc Type: Bug Fix
Doc Text:
Previously, an incorrect filename was generated when the GlusterFS driver attempted to delete volume data during volume deletion operations. Consequently, the volume file might not be deleted, leaking storage space and unexpectedly leaving data on the GlusterFS volume after the Block Storage volume was removed. With this update, the Block Storage GlusterFS driver has been patched to delete volume data from the GlusterFS share when the Block Storage volume is deleted.
Clone Of:
Environment:
Last Closed: 2014-05-29 19:59:35 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
logs (1.45 MB, application/x-gzip)
2014-04-17 12:49 UTC, Dafna Ron
logs 1G (2.00 MB, application/x-gzip)
2014-04-17 17:29 UTC, Dafna Ron


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1300303 0 None None None Never
OpenStack gerrit 86158 0 None None None Never
Red Hat Bugzilla 1078975 0 high CLOSED Increase timeout for GlusterFS Snapshot delete operation 2021-02-22 00:41:40 UTC
Red Hat Bugzilla 1089070 0 medium CLOSED Depend on sufficiently new version of libvirt 2022-07-09 06:17:07 UTC
Red Hat Product Errata RHBA-2014:0577 0 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform 4 Bug Fix and Enhancement Advisory 2014-05-29 23:55:40 UTC

Internal Links: 1078975 1089070

Description Eric Harney 2014-03-31 17:52:41 UTC
Reported upstream:

Deleting a volume after an attached-snapshot delete operation that resulted in a blockRebase does not remove the volume file.

This appears to be because delete_volume removes the file determined by local_path(volume) rather than the active file.

Need to confirm, but this looks accurate.
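The failure mode can be sketched in a few lines of Python. This is a hypothetical, minimal model of the behavior described above; the info dict, the 'active' key, and the function names are illustrative assumptions, not the actual Cinder driver code:

```python
import os
import tempfile

# Hypothetical, simplified sketch of the deletion bug: after a blockRebase,
# the active image is a rebased file (volume-<id>.<snapshot-id>), but the
# buggy delete path only ever removes the original volume-<id> name.

def delete_volume_buggy(mnt, volume_file, info):
    # Old behavior: always delete local_path(volume), i.e. the original
    # volume file name, even if a blockRebase left a different active file.
    path = os.path.join(mnt, volume_file)
    if os.path.exists(path):
        os.remove(path)

def delete_volume_fixed(mnt, volume_file, info):
    # Fixed behavior: consult the recorded active file (e.g. the rebased
    # volume-<id>.<snapshot-id> file) and delete that instead.
    active = info.get('active', volume_file)
    path = os.path.join(mnt, active)
    if os.path.exists(path):
        os.remove(path)

if __name__ == '__main__':
    mnt = tempfile.mkdtemp()
    vol = 'volume-81c4827a'
    rebased = vol + '.56446771'  # the active file after a blockRebase
    open(os.path.join(mnt, rebased), 'w').close()
    info = {'active': rebased}

    delete_volume_buggy(mnt, vol, info)
    assert os.path.exists(os.path.join(mnt, rebased))  # rebased file leaked

    delete_volume_fixed(mnt, vol, info)
    assert not os.path.exists(os.path.join(mnt, rebased))
```

The buggy variant leaves the rebased file behind on the share, which is the space leak described in this bug.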

Comment 1 Jon Bernard 2014-04-08 16:12:18 UTC
Confirmed, the behaviour described in the LP bug is exactly what I see here.

Comment 8 Dafna Ron 2014-04-17 12:48:09 UTC
Tried running scenario on openstack-cinder-2013.2.3-1.el6ost.noarch

[root@puma31 ~(keystone_admin)]# cinder snapshot-create 81c4827a-7f9d-41e0-a2dd-76f7a829f3f9 --force True
+---------------------+--------------------------------------+
|       Property      |                Value                 |
+---------------------+--------------------------------------+
|      created_at     |      2014-04-17T12:28:11.388782      |
| display_description |                 None                 |
|     display_name    |                 None                 |
|          id         | 56446771-0c10-430a-8970-c126dfef7825 |
|       metadata      |                  {}                  |
|         size        |                  10                  |
|        status       |               creating               |
|      volume_id      | 81c4827a-7f9d-41e0-a2dd-76f7a829f3f9 |
+---------------------+--------------------------------------+
[root@puma31 ~(keystone_admin)]# cinder snapshot-list 
+--------------------------------------+--------------------------------------+-----------+--------------+------+
|                  ID                  |              Volume ID               |   Status  | Display Name | Size |
+--------------------------------------+--------------------------------------+-----------+--------------+------+
| 56446771-0c10-430a-8970-c126dfef7825 | 81c4827a-7f9d-41e0-a2dd-76f7a829f3f9 | available |     None     |  10  |
+--------------------------------------+--------------------------------------+-----------+--------------+------+

[root@puma31 ~(keystone_admin)]# cinder snapshot-list 
+--------------------------------------+--------------------------------------+----------------+--------------+------+
|                  ID                  |              Volume ID               |     Status     | Display Name | Size |
+--------------------------------------+--------------------------------------+----------------+--------------+------+
| 56446771-0c10-430a-8970-c126dfef7825 | 81c4827a-7f9d-41e0-a2dd-76f7a829f3f9 | error_deleting |     None     |  10  |
+--------------------------------------+--------------------------------------+----------------+--------------+------+


I am getting a timeout on the snapshot delete; moving this back to dev.

2014-04-17 15:38:34.621 9622 DEBUG cinder.openstack.common.processutils [-] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf mount -t glusterfs 10.35.102.17:/Dafna-openstack /var/lib/cinder/mnt/59ed8cb64fc8948968a29181234051a2 execute /usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py:142
2014-04-17 15:38:34.778 9622 ERROR cinder.openstack.common.rpc.amqp [req-b941eb53-d126-4ced-9f0d-f82065f3e586 97e5450b24624fd78ad6fa6d8a14ef3d c3178ebef2c24d1b9a045bd67483a83c] Exception during message handling
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp Traceback (most recent call last):
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 441, in _process_data
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp     **args)
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 148, in dispatch
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 435, in delete_snapshot
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp     {'status': 'error_deleting'})
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp     self.gen.next()
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 423, in delete_snapshot
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp     self.driver.delete_snapshot(snapshot_ref)
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/lockutils.py", line 247, in inner
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp     retval = f(*args, **kwargs)
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 557, in delete_snapshot
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp     self._delete_snapshot(snapshot)
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 652, in _delete_snapshot
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp     online_delete_info)
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 830, in _delete_snapshot_online
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp     raise exception.GlusterfsException(msg)
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp GlusterfsException: Timed out while waiting for Nova update for deletion of snapshot 56446771-0c10-430a-8970-c126dfef7825.
2014-04-17 15:38:34.778 9622 TRACE cinder.openstack.common.rpc.amqp 
2014-04-17 15:38:34.792 9622 DEBUG cinder.volume.drivers.glusterfs [-] Available shares: [u'10.35.102.17:/Dafna-openstack'] _ensure_shares_mounted /usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py:1033
2014-04-17 15:38:34.792 9622 DEBUG cinder.openstack.common.processutils [-] Running cmd (subprocess): sudo cinder-rootwrap /etc/cinder/rootwrap.conf df --portability --block-size 1 /var/lib/cinder/mnt/59ed8cb64fc8948968a29181234051a2 execute /usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py:142


2014-04-17 15:39:56.919 16245 ERROR nova.virt.libvirt.driver [req-bdc5c40f-c09c-4192-943c-fc0c31c9fd2f 97e5450b24624fd78ad6fa6d8a14ef3d c3178ebef2c24d1b9a045bd67483a83c] Failed to send updated snapshot status to volume service.
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver Traceback (most recent call last):
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1537, in _volume_snapshot_update_status
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     status)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 194, in wrapper
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 387, in update_snapshot_status
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     'progress': '90%'}
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/cinderclient/v1/volume_snapshots.py", line 172, in update_snapshot_status
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     base.getid(snapshot), update_dict)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/cinderclient/v1/volume_snapshots.py", line 168, in _action
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     return self.api.client.post(url, body=body)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 210, in post
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 174, in _cs_request
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     **kwargs)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 157, in request
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver BadRequest: Snapshot status error_deleting not allowed for update_snapshot_status (HTTP 400) (Request-ID: req-1177acda-6b9b-499e-aacd-f6cd1f497016)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver

Comment 9 Dafna Ron 2014-04-17 12:49:24 UTC
Created attachment 887167 [details]
logs

Comment 10 Dafna Ron 2014-04-17 12:55:31 UTC
Just for clarification: I am trying to delete the volume snapshot while the volume is still attached to the instance.

Also, once the failure happens, even after we terminate the instance, we are still unable to delete the snapshot:

2014-04-17 15:51:57.209 9622 DEBUG cinder.openstack.common.processutils [req-bf619f3a-1930-4b23-ad07-aca37053bd03 97e5450b24624fd78ad6fa6d8a14ef3d c3178ebef2c24d1b9a045bd67483a83c] Result was 1 execute /usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py:167
2014-04-17 15:51:57.270 9622 ERROR cinder.openstack.common.rpc.amqp [req-bf619f3a-1930-4b23-ad07-aca37053bd03 97e5450b24624fd78ad6fa6d8a14ef3d c3178ebef2c24d1b9a045bd67483a83c] Exception during message handling
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp Traceback (most recent call last):
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 441, in _process_data
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp     **args)
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 148, in dispatch
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 435, in delete_snapshot
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp     {'status': 'error_deleting'})
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp     self.gen.next()
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 423, in delete_snapshot
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp     self.driver.delete_snapshot(snapshot_ref)
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/lockutils.py", line 247, in inner
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp     retval = f(*args, **kwargs)
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 557, in delete_snapshot
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp     self._delete_snapshot(snapshot)
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 664, in _delete_snapshot
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp     self._qemu_img_commit(snapshot_path)
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 877, in _qemu_img_commit
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp     return self._execute('qemu-img', 'commit', path, run_as_root=True)
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 142, in execute
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp     return processutils.execute(*cmd, **kwargs)
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py", line 173, in execute
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp     cmd=' '.join(cmd))
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp ProcessExecutionError: Unexpected error while running command.
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img commit /var/lib/cinder/mnt/59ed8cb64fc8948968a29181234051a2/volume-81c4827a-7f9d-41e0-a2dd-76f7a829f3f9.56446771-0c10-430a-8970-c126dfef7825
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp Exit code: 1
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp Stdout: ''
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp Stderr: 'Image is already committed\n'
2014-04-17 15:51:57.270 9622 TRACE cinder.openstack.common.rpc.amqp

Comment 11 Eric Harney 2014-04-17 14:39:16 UTC
Comment #8 indicates that you hit bug 1056037. This will be alleviated in the (hopefully near) future with the change in bug 1078975.

Comment #10 makes sense as a repercussion of hitting that bug, since it tries to commit the same qcow2 image twice. This should probably be tracked as a separate new bug.

Since this bug is about a specific issue, volume deletion not removing a file, I would suggest attempting this test case with a smaller volume in the meantime and addressing that problem in a different bug.

Comment 12 Dafna Ron 2014-04-17 17:28:11 UTC
Deleting a 1 GB snapshot fails as well:

2014-04-17 20:21:05.784 9622 DEBUG qpid.messaging.io.raw [-] SENT[301bb00]: '\x0f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x01\n\x00\x00' writeable /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:480
2014-04-17 20:21:12.466 9622 ERROR cinder.openstack.common.rpc.amqp [req-a5332753-aa9b-49a7-88f4-1555a0d4e340 97e5450b24624fd78ad6fa6d8a14ef3d c3178ebef2c24d1b9a045bd67483a83c] Exception during message handling
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp Traceback (most recent call last):
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 441, in _process_data
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp     **args)
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 148, in dispatch
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 435, in delete_snapshot
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp     {'status': 'error_deleting'})
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp     self.gen.next()
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 423, in delete_snapshot
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp     self.driver.delete_snapshot(snapshot_ref)
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/lockutils.py", line 247, in inner
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp     retval = f(*args, **kwargs)
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 557, in delete_snapshot
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp     self._delete_snapshot(snapshot)
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 652, in _delete_snapshot
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp     online_delete_info)
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 817, in _delete_snapshot_online
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp     raise exception.GlusterfsException(msg)
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp GlusterfsException: Unable to delete snapshot 86eb5645-1c53-4197-b78e-b34093a23b71, status: error_deleting.
2014-04-17 20:21:12.466 9622 TRACE cinder.openstack.common.rpc.amqp 
2014-04-17 20:21:19.051 9622 DEBUG cinder.openstack.common.periodic_task [-] Running periodic task VolumeManager._publish_service_capabilities run_periodic_tasks /usr/lib/python2.6/site-packages/cinder/openstack/common/periodic_task.py:176
2014-04-17 20:21:19.052 9622 DEBUG cinder.manager [-] Notifying Schedulers of capabilities ... _publish_service_capabilities /usr/lib/python2.6/site-packages/cinder/manager.py:135
2014-04-17 20:21:19.052 9622 DEBUG cinder.openstack.common.rpc.amqp [-] Making asynchronous fanout cast... fanout_cast /usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py:640
2014-04-17 20:21:19.052 9622 DEBUG cinder.openstack.common.rpc.amqp [-] UNIQUE_ID is 639405a3abfc47c2b84889194550bf11. _add_unique_id /usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py:345

[root@puma31 ~(keystone_admin)]# cinder list 
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  | Display Name | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
| 0bad191a-4126-46f5-b224-e5215fc4d776 | available |   volume2    |  10  |     None    |  false   |                                      |
| 81c4827a-7f9d-41e0-a2dd-76f7a829f3f9 | available |   volume1    |  10  |     None    |  false   |                                      |
| b589e774-cc34-400d-a818-cff7e79d6517 |   in-use  |     giga     |  1   |     None    |  false   | 9a03e400-335e-4826-8fe2-b9c0dcb17351 |
+--------------------------------------+-----------+--------------+------+-------------+----------+--------------------------------------+
[root@puma31 ~(keystone_admin)]# cinder snapshot-list 
+--------------------------------------+--------------------------------------+----------------+--------------+------+
|                  ID                  |              Volume ID               |     Status     | Display Name | Size |
+--------------------------------------+--------------------------------------+----------------+--------------+------+
| 56446771-0c10-430a-8970-c126dfef7825 | 81c4827a-7f9d-41e0-a2dd-76f7a829f3f9 | error_deleting |     None     |  10  |
| 86eb5645-1c53-4197-b78e-b34093a23b71 | b589e774-cc34-400d-a818-cff7e79d6517 | error_deleting |     None     |  1   |
+--------------------------------------+--------------------------------------+----------------+--------------+------+
[root@puma31 ~(keystone_admin)]#

Comment 13 Dafna Ron 2014-04-17 17:29:57 UTC
Created attachment 887235 [details]
logs 1G

Comment 14 Eric Harney 2014-04-17 18:06:02 UTC
(In reply to Dafna Ron from comment #12)

Still looking at this, but can you grab Nova's and Cinder's api logs as well?

Comment 15 Eric Harney 2014-04-17 19:30:59 UTC
Comment #8 snipped out some more revealing information from the Nova compute log, which shows connection issues between Nova and Cinder:

2014-04-17 15:39:49.237 16245 DEBUG nova.virt.libvirt.driver [req-bdc5c40f-c09c-4192-943c-fc0c31c9fd2f 97e5450b24624fd78ad6fa6d8a14ef3d c3178ebef2c24d1b9a045bd67483a83c] waiting for blockRebase job completion _volume_snapshot_delete /usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py:1785
2014-04-17 15:39:49.740 16245 DEBUG nova.volume.cinder [req-bdc5c40f-c09c-4192-943c-fc0c31c9fd2f 97e5450b24624fd78ad6fa6d8a14ef3d c3178ebef2c24d1b9a045bd67483a83c] Cinderclient connection created using URL: http://10.35.104.17:8776/v1/c3178ebef2c24d1b9a045bd67483a83c cinderclient /usr/lib/python2.6/site-packages/nova/volume/cinder.py:96
2014-04-17 15:39:49.741 16245 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 10.35.104.17
2014-04-17 15:39:49.804 16245 DEBUG urllib3.connectionpool [-] "POST /v1/c3178ebef2c24d1b9a045bd67483a83c/snapshots/56446771-0c10-430a-8970-c126dfef7825/action HTTP/1.1" 400 113 _make_request /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
2014-04-17 15:39:49.805 16245 DEBUG cinderclient.client [-] Failed attempt(1 of 3), retrying in 1 seconds _cs_request /usr/lib/python2.6/site-packages/cinderclient/client.py:202
2014-04-17 15:39:50.806 16245 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 10.35.104.17
2014-04-17 15:39:50.847 16245 DEBUG urllib3.connectionpool [-] "POST /v1/c3178ebef2c24d1b9a045bd67483a83c/snapshots/56446771-0c10-430a-8970-c126dfef7825/action HTTP/1.1" 400 113 _make_request /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
2014-04-17 15:39:50.847 16245 DEBUG cinderclient.client [-] Failed attempt(2 of 3), retrying in 2 seconds _cs_request /usr/lib/python2.6/site-packages/cinderclient/client.py:202
2014-04-17 15:39:52.848 16245 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 10.35.104.17
2014-04-17 15:39:52.882 16245 DEBUG urllib3.connectionpool [-] "POST /v1/c3178ebef2c24d1b9a045bd67483a83c/snapshots/56446771-0c10-430a-8970-c126dfef7825/action HTTP/1.1" 400 113 _make_request /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
2014-04-17 15:39:52.883 16245 DEBUG cinderclient.client [-] Failed attempt(3 of 3), retrying in 4 seconds _cs_request /usr/lib/python2.6/site-packages/cinderclient/client.py:202
2014-04-17 15:39:56.884 16245 INFO urllib3.connectionpool [-] Starting new HTTP connection (1): 10.35.104.17
2014-04-17 15:39:56.919 16245 DEBUG urllib3.connectionpool [-] "POST /v1/c3178ebef2c24d1b9a045bd67483a83c/snapshots/56446771-0c10-430a-8970-c126dfef7825/action HTTP/1.1" 400 113 _make_request /usr/lib/python2.6/site-packages/urllib3/connectionpool.py:295
2014-04-17 15:39:56.919 16245 ERROR nova.virt.libvirt.driver [req-bdc5c40f-c09c-4192-943c-fc0c31c9fd2f 97e5450b24624fd78ad6fa6d8a14ef3d c3178ebef2c24d1b9a045bd67483a83c] Failed to send updated snapshot status to volume service.
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver Traceback (most recent call last):
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1537, in _volume_snapshot_update_status
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     status)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 194, in wrapper
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     res = method(self, ctx, snapshot_id, *args, **kwargs)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/nova/volume/cinder.py", line 387, in update_snapshot_status
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     'progress': '90%'}
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/cinderclient/v1/volume_snapshots.py", line 172, in update_snapshot_status
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     base.getid(snapshot), update_dict)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/cinderclient/v1/volume_snapshots.py", line 168, in _action
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     return self.api.client.post(url, body=body)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 210, in post
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     return self._cs_request(url, 'POST', **kwargs)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 174, in _cs_request
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     **kwargs)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver   File "/usr/lib/python2.6/site-packages/cinderclient/client.py", line 157, in request
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver     raise exceptions.from_response(resp, body)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver BadRequest: Snapshot status error_deleting not allowed for update_snapshot_status (HTTP 400) (Request-ID: req-1177acda-6b9b-499e-aacd-f6cd1f497016)
2014-04-17 15:39:56.919 16245 TRACE nova.virt.libvirt.driver

Comment 19 Dafna Ron 2014-04-22 11:55:27 UTC
Verified on the following packages, while disregarding bug 1078975:

libvirt-0.10.2-29.el6_5.2.x86_64
openstack-cinder-2013.2.3-1.el6ost.noarch

Comment 20 Lon Hohberger 2014-04-22 13:50:45 UTC
libvirt-0.10.2-29.el6_5.7 is what's in 6.5.z.

We can respin openstack-cinder to include:

  Requires: libvirt >= 0.10.2-29.el6_5.2

That said, we expect the underlying RHEL installation to have the latest updates installed, so this shouldn't affect many people.

Comment 21 Eric Harney 2014-04-22 13:53:32 UTC
(In reply to Lon Hohberger from comment #20)
> libvirt-0.10.2-29.el6_5.7 is what's in 6.5.z.
> 
> We can respin openstack-cinder to include:
> 
>   Requires: libvirt >= 0.10.2-29.el6_5.2
> 

It has to be a Nova dep and not a Cinder one (bug 1089070).

Comment 24 errata-xmlrpc 2014-05-29 19:59:35 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

http://rhn.redhat.com/errata/RHBA-2014-0577.html

