Bug 1033652 - GlusterFS: can't create a second snapshot (online) after deleting the first one (online)
Summary: GlusterFS: can't create a second snapshot (online) after deleting the first one (online)
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-cinder
Version: 4.0
Hardware: x86_64
OS: Linux
Priority: high
Severity: urgent
Target Milestone: z1
Target Release: 4.0
Assignee: Eric Harney
QA Contact: Yogev Rabl
URL:
Whiteboard: storage
Duplicates: 1056469
Depends On: 1016798 1040711 1056037
Blocks: 1033714 1045196
 
Reported: 2013-11-22 14:31 UTC by Dafna Ron
Modified: 2016-04-26 19:54 UTC (History)
9 users

Fixed In Version: openstack-cinder-2013.2.1-2.el6ost
Doc Type: Bug Fix
Doc Text:
Previously, deleting the oldest Block Storage GlusterFS snapshot of an attached volume left the volume and snapshot data in an inconsistent state. With this fix, deleting the oldest Block Storage GlusterFS snapshot of an attached volume completes correctly.
Clone Of:
Environment:
Last Closed: 2014-01-23 14:22:59 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
logs (15.17 KB, application/x-gzip)
2013-11-22 14:31 UTC, Dafna Ron
no flags Details
logs (3.53 MB, application/x-gzip)
2013-12-11 13:03 UTC, Dafna Ron
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Launchpad 1254050 0 None None None Never
OpenStack gerrit 63223 0 None None None Never
Red Hat Bugzilla 1040711 0 high CLOSED GlusterFS online snap deletion may loop endlessly 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHBA-2014:0046 0 normal SHIPPED_LIVE Red Hat Enterprise Linux OpenStack Platform 4 Bug Fix and Enhancement Advisory 2014-01-23 00:51:59 UTC

Internal Links: 1040711

Description Dafna Ron 2013-11-22 14:31:19 UTC
Created attachment 827821 [details]
logs

Description of problem:

we get an error when we try to create a second snapshot of a volume that is in use as a bootable disk, after deleting the first snapshot that was created

Version-Release number of selected component (if applicable):

openstack-cinder-2013.2-2.el6ost.noarch

How reproducible:

100%

scenario #1:

1. create a volume from an image
2. boot an instance from the volume
3. create a snapshot from the instance
4. delete the volume snapshot
5. create a new snapshot from the instance

scenario#2:

1. create a volume from an image
2. boot an instance from the volume
3. create a snapshot through the volume using --force=True
4. delete the volume
5. create a second snapshot for the volume using --force=True

Actual results:

we fail to create the snapshot, with a "No such file or directory\n" error

Expected results:

we should be able to create the snapshot 

Additional info:

**we also cannot delete the snapshots; the delete fails with the same error**

2013-11-22 16:11:36.955 14458 WARNING cinder.quota [req-66ebb212-b114-48c4-9f0c-19aafeb38918 24b77982be8049ee9cd5ad7bed913565 7eb59aa89e8944d098554ff6f5a4cf88] Deprecated: Default quota for resource: gigabytes is set by the default quota flag: quota_gigabytes, it is now deprecated. Please use the the default quota class for default quota.
2013-11-22 16:11:36.956 14458 WARNING cinder.quota [req-66ebb212-b114-48c4-9f0c-19aafeb38918 24b77982be8049ee9cd5ad7bed913565 7eb59aa89e8944d098554ff6f5a4cf88] Deprecated: Default quota for resource: snapshots is set by the default quota flag: quota_snapshots, it is now deprecated. Please use the the default quota class for default quota.
2013-11-22 16:11:52.879 14458 ERROR cinder.openstack.common.rpc.amqp [req-b7df5de9-3c2c-4a6f-9ca4-c2d442fafe48 24b77982be8049ee9cd5ad7bed913565 7eb59aa89e8944d098554ff6f5a4cf88] Exception during message handling
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp Traceback (most recent call last):
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 441, in _process_data
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp     **args)
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 148, in dispatch
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 808, in wrapper
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp     return func(self, *args, **kwargs)
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 371, in create_snapshot
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp     {'status': 'error'})
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp     self.gen.next()
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 362, in create_snapshot
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp     model_update = self.driver.create_snapshot(snapshot_ref)
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 375, in create_snapshot
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp     new_snap_path)
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 464, in _create_qcow2_snap_file
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp     self._execute(*command, run_as_root=True)
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 142, in execute
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp     return processutils.execute(*cmd, **kwargs)
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py", line 173, in execute
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp     cmd=' '.join(cmd))
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp ProcessExecutionError: Unexpected error while running command.
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img create -f qcow2 -o backing_file=/var/lib/cinder/mnt/92ce777f40909398918e29a4128dfce0/volume-7e3df44e-14e1-4a92-b612-b0dd7731a4e2.013312d5-e64b-477b-a3ee-6a0b764a5f46 /var/lib/cinder/mnt/92ce777f40909398918e29a4128dfce0/volume-7e3df44e-14e1-4a92-b612-b0dd7731a4e2.aa750eab-31dc-4446-b0f0-2ed9ebd77ee2
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp Exit code: 1
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp Stdout: ''
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp Stderr: "Could not open '/var/lib/cinder/mnt/92ce777f40909398918e29a4128dfce0/volume-7e3df44e-14e1-4a92-b612-b0dd7731a4e2.013312d5-e64b-477b-a3ee-6a0b764a5f46': No such file or directory\n"
2013-11-22 16:11:52.879 14458 TRACE cinder.openstack.common.rpc.amqp 
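The failure mechanism behind this trace can be illustrated with a toy model (hypothetical names; this sketches only the bookkeeping, not the actual Cinder/Nova code): each online snapshot creates a new qcow2 overlay backed by the current active file, and the buggy delete removes a file while the driver's record of the active file still names it, so the next `qemu-img create -o backing_file=...` hits "No such file or directory".

```python
class ChainModel:
    """Toy model of the GlusterFS driver's qcow2 snapshot bookkeeping.

    This only illustrates the reported failure mode; it is not the
    driver's real data structure. `on_disk` stands for files on the
    Gluster mount, `active` for the file the driver would hand to
    qemu-img as backing_file for the next snapshot.
    """

    def __init__(self, volume_file):
        self.on_disk = {volume_file}
        self.active = volume_file

    def create_snapshot(self, snap_file):
        # The new overlay is backed by the current active file, which
        # qemu-img must be able to open.
        if self.active not in self.on_disk:
            raise FileNotFoundError(self.active)  # "Could not open ..."
        self.on_disk.add(snap_file)
        self.active = snap_file

    def delete_snapshot_buggy(self):
        # Mimics the bug: the snapshot's file is removed from disk, but
        # the record of the active file is not updated.
        self.on_disk.discard(self.active)


chain = ChainModel("volume-7e3df44e")
chain.create_snapshot("volume-7e3df44e.snap-1")
chain.delete_snapshot_buggy()
try:
    chain.create_snapshot("volume-7e3df44e.snap-2")
except FileNotFoundError as err:
    # Mirrors the qemu-img failure in the trace above.
    print("Could not open %s: No such file or directory" % err)
```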

Comment 1 Eric Harney 2013-11-27 13:40:53 UTC
It is likely that this bug is being / will be fixed via some of the other bugs in this area related to permissions.

Did any other errors appear in the log at snapshot creation/deletion time for the first snapshot?

Comment 2 Dafna Ron 2013-11-29 13:49:09 UTC
(In reply to Eric Harney from comment #1)
> It is likely that this bug is being / will be fixed via some of the other
> bugs in this area related to permissions.
> 
> Did any other errors appear in the log at snapshot creation/deletion time
> for the first snapshot?

I attached everything.
There was nothing more beyond what I added in this bug.

Comment 3 Ayal Baron 2013-12-01 09:50:30 UTC
(In reply to Eric Harney from comment #1)
> It is likely that this bug is being / will be fixed via some of the other
> bugs in this area related to permissions.
> 
> Did any other errors appear in the log at snapshot creation/deletion time
> for the first snapshot?

Please block on relevant bugs

Comment 4 Dafna Ron 2013-12-02 09:56:04 UTC
(In reply to Ayal Baron from comment #3)
> (In reply to Eric Harney from comment #1)
> > It is likely that this bug is being / will be fixed via some of the other
> > bugs in this area related to permissions.
> > 
> > Did any other errors appear in the log at snapshot creation/deletion time
> > for the first snapshot?
> 
> Please block on relevant bugs

I attached relevant logs when I opened the bug.

Comment 5 Eric Harney 2013-12-04 20:50:52 UTC
This will no longer happen with 1016798 fixed.  (Same issue.)

Comment 7 Dafna Ron 2013-12-11 13:00:41 UTC
the create succeeds, but deleting the second snapshot fails with what seems to be the same error.
moving back to dev.

reproduce: 

1. create a volume
2. boot instance from the volume
3. create a snapshot (cinder snapshot-create <vol-id> --force True --display-name test)
4. delete the snapshot (cinder snapshot-delete <snapshot>)
5. create a second snapshot (cinder snapshot-create <vol-id> --force True --display-name test1)
6. delete the second snapshot (cinder snapshot-delete <snapshot>)

2013-12-11 14:48:45.636 3860 WARNING cinder.quota [req-aae0a49b-bd70-4074-be3e-50b5ddee2a3a 674776fc1eea47718301aeacbab072b3 5cea8d9e58c841dfb03b1cda755b539d] Deprecated: Default quota for resource: gigabytes is set by the default quota flag: quota_gigabytes, it is now deprecated. Please use the the default quota class for default quota.
2013-12-11 14:48:45.636 3860 WARNING cinder.quota [req-aae0a49b-bd70-4074-be3e-50b5ddee2a3a 674776fc1eea47718301aeacbab072b3 5cea8d9e58c841dfb03b1cda755b539d] Deprecated: Default quota for resource: snapshots is set by the default quota flag: quota_snapshots, it is now deprecated. Please use the the default quota class for default quota.
2013-12-11 14:48:56.176 3860 ERROR cinder.openstack.common.rpc.amqp [req-c2a0a7ed-45bc-4065-b013-a43b9b37dc3d 674776fc1eea47718301aeacbab072b3 5cea8d9e58c841dfb03b1cda755b539d] Exception during message handling
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp Traceback (most recent call last):
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 441, in _process_data
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp     **args)
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 148, in dispatch
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 809, in wrapper
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp     return func(self, *args, **kwargs)
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 371, in create_snapshot
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp     {'status': 'error'})
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp     self.gen.next()
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 362, in create_snapshot
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp     model_update = self.driver.create_snapshot(snapshot_ref)
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/lockutils.py", line 247, in inner
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp     retval = f(*args, **kwargs)
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 389, in create_snapshot
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp     new_snap_path)
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 478, in _create_qcow2_snap_file
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp     self._execute(*command, run_as_root=True)
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 143, in execute
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp     return processutils.execute(*cmd, **kwargs)
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/processutils.py", line 173, in execute
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp     cmd=' '.join(cmd))
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp ProcessExecutionError: Unexpected error while running command.
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img create -f qcow2 -o backing_file=/var/lib/cinder/mnt/249458a2755cd0a9f302b9d81eb3f35d/volume-8e113c6f-f30c-4f6a-b511-2af507d7e758.8820d941-904e-49ba-8582-453b533cd033 /var/lib/cinder/mnt/249458a2755cd0a9f302b9d81eb3f35d/volume-8e113c6f-f30c-4f6a-b511-2af507d7e758.1264bdc0-fc0a-4817-976c-ce061a787d92
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp Exit code: 1
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp Stdout: ''
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp Stderr: "Could not open '/var/lib/cinder/mnt/249458a2755cd0a9f302b9d81eb3f35d/volume-8e113c6f-f30c-4f6a-b511-2af507d7e758.8820d941-904e-49ba-8582-453b533cd033': No such file or directory\n"
2013-12-11 14:48:56.176 3860 TRACE cinder.openstack.common.rpc.amqp

Comment 8 Dafna Ron 2013-12-11 13:03:52 UTC
Created attachment 835273 [details]
logs

Comment 12 Stephen Gordon 2013-12-12 14:24:41 UTC
Please provide "Doc Text" for this bug describing the issue and suggested workaround.

Comment 14 Eric Harney 2013-12-13 19:48:34 UTC
Problem appears to be an issue with how we call libvirt from Nova to perform a blockRebase, used when deleting the last snapshot.

We pass base=<filename> but should pass base=None.  The current call appears to rebase, but it leaves the qcow2 file with a backing-file pointer to a file that no longer exists.
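In libvirt-python terms, this can be sketched as follows (a hypothetical helper, not the actual Nova patch): `virDomainBlockRebase` with `base=None` pulls the entire backing chain into the active file, while `base=<filename>` only rebases down to that file, so when the deleted snapshot has nothing below it the only correct argument is `None`.

```python
def rebase_base_for_delete(chain, doomed):
    """Choose the `base` argument for dom.blockRebase() when deleting
    `doomed` from a backing chain ordered oldest -> newest.

    Hypothetical helper illustrating the fix described above: if the
    doomed file is the oldest in the chain, nothing remains below it,
    so the rebase must flatten everything into the active file
    (base=None).  Passing the doomed file's own name instead leaves the
    active image with a backing pointer to a deleted file.
    """
    idx = chain.index(doomed)
    if idx == 0:
        return None            # flatten: no backing file should remain
    return chain[idx - 1]      # rebase onto the file just below doomed


# Usage sketch (dom and disk_path assumed from the surrounding Nova code):
#   base = rebase_base_for_delete(backing_chain, snap_file)
#   dom.blockRebase(disk_path, base, 0, 0)
```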

Comment 16 Eric Harney 2014-01-09 22:49:50 UTC
Note: Bug 1040711 still exists and affects GlusterFS snapshot deletion for attached volumes, but should be less severe.  I think this fix is still worth verifying independently.

Comment 18 Yogev Rabl 2014-01-14 14:16:50 UTC
At the moment I'm not able to verify the bug, because it is blocked by Bug 1052969

Comment 19 Yogev Rabl 2014-01-15 13:51:00 UTC
the bug that blocks this one has a target release of A2. Please change one of the milestones.

Comment 20 Eric Harney 2014-01-15 15:25:36 UTC
(In reply to Yogev Rabl from comment #19)
Done

Comment 21 Dafna Ron 2014-01-17 15:22:28 UTC
I tested this bug with 1052969

we either have a new bug or this is not solved: 

We fail to delete the snapshot:

2014-01-17 17:12:30.036 9009 ERROR cinder.openstack.common.rpc.amqp [req-2772b058-94f1-4d15-a7d1-661b0682dfed e6ee6034307247b78807be047bd10e76 d4aaa7c237054d408a65f40bb4ee74d0] Exception during message handling
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp Traceback (most recent call last):
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/amqp.py", line 441, in _process_data
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp     **args)
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/rpc/dispatcher.py", line 148, in dispatch
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp     return getattr(proxyobj, method)(ctxt, **kwargs)
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/utils.py", line 820, in wrapper
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp     return func(self, *args, **kwargs)
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 424, in delete_snapshot
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp     {'status': 'error_deleting'})
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/contextlib.py", line 23, in __exit__
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp     self.gen.next()
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/manager.py", line 412, in delete_snapshot
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp self.driver.delete_snapshot(snapshot_ref)
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/openstack/common/lockutils.py", line 247, in inner
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp     retval = f(*args, **kwargs)
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 636, in delete_snapshot
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp     online_delete_info)
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/cinder/volume/drivers/glusterfs.py", line 801, in _delete_snapshot_online
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp     raise exception.GlusterfsException(msg)
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp GlusterfsException: Unable to delete snapshot ade7e0e8-e8f0-44da-9308-b41aaed609ac, status: error_deleting.
2014-01-17 17:12:30.036 9009 TRACE cinder.openstack.common.rpc.amqp
/var/log/cinder/volume.log




2014-01-17 17:12:29.138 15658 ERROR nova.openstack.common.rpc.amqp [req-1fae1101-f20a-424e-9ea8-74367d0faf88 e6ee6034307247b78807be047bd10e76 d4aaa7c237054d408a65f40bb4ee74d0] Exception during message handling
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp     **args)
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp     result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py", line 439, in inner
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp     return catch_client_exception(exceptions, func, *args, **kwargs)
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/common.py", line 420, in catch_client_exception
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp     return func(*args, **kwargs)
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2365, in volume_snapshot_delete
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp     snapshot_id, delete_info)
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1808, in volume_snapshot_delete
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp     context, snapshot_id, 'error_deleting')
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1801, in volume_snapshot_delete
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp     snapshot_id, delete_info=delete_info)
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1773, in _volume_snapshot_delete
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp     abort_on_error=True):
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1442, in _wait_for_block_job
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp     status = domain.blockJobInfo(disk_path, 0)
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 187, in doit
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp     result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 147, in proxy_call
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp     rv = execute(f,*args,**kwargs)
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 76, in tworker
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp     rv = meth(*args,**kwargs)
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1826, in blockJobInfo
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp     if ret is None: raise libvirtError ('virDomainGetBlockJobInfo() failed', dom=self)
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp libvirtError: virDomainGetBlockJobInfo() failed
2014-01-17 17:12:29.138 15658 TRACE nova.openstack.common.rpc.amqp
2014-01-17 17:12:32.411 15658 DEBUG qpid.messaging.io.ops [-] SENT[3aea908]: ConnectionHeartbeat() write_op /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:685
2014-01-17 17:12:32.413 15658 DEBUG qpid.messaging.io.raw [-] SENT[3aea908]: '\x0f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x01\n\x00\x00' writeable /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:480
2014-01-17 17:12:32.417 15658 DEBUG qpid.messaging.io.raw [-] READ[3aea908]: '\x0f\x00\x00\x10\x00\x00\x00\x00\x00\x00\x00\x00\x01\n\x00\x00' readable /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:416
2014-01-17 17:12:32.417 15658 DEBUG qpid.messaging.io.ops [-] RCVD[3aea908]: ConnectionHeartbeat() write /usr/lib/python2.6/site-packages/qpid/messaging/driver.py:653
2014-01-17 17:12:32.774 15658 DEBUG nova.openstack.common.rpc.amqp [-] Making synchronous call on conductor ... multicall /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:553
2014-01-17 17:12:32.775 15658 DEBUG nova.openstack.common.rpc.amqp [-] MSG_ID is b4947f51e65848ce8b3c57b26865a1f4 multicall /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:556
2014-01-17 17:12:32.775 15658 DEBUG nova.openstack.common.rpc.amqp [-] UNIQUE_ID is 5c4a6cf79917497aae384ea8e2f95c8d. _add_unique_id /usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py:341
2014-01-17 17:12:32.777 15658 DEBUG qpid.messaging.io.ops [-] SENT[37bc638]: MessageTransfer(destination='amq.topic', id=serial(0), sync=True, headers=(DeliveryProperties(routing_key='topic/nova/conductor'



2014-01-17 14:56:10.783+0000: 14991: info : libvirt version: 0.10.2, package: 29.el6 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 2013-10-09-06:25:35, x86-026.build.eng.bos.redhat.com)
2014-01-17 14:56:10.783+0000: 14991: error : virNWFilterDHCPSnoopEnd:2131 : internal error ifname "vnet1" not in key map
2014-01-17 14:56:10.806+0000: 14991: error : virNetDevGetIndex:653 : Unable to get index for interface vnet1: No such device
2014-01-17 14:56:11.842+0000: 14983: error : virNWFilterDHCPSnoopEnd:2131 : internal error ifname "vnet5" not in key map
2014-01-17 14:56:11.854+0000: 14983: error : virNetDevGetIndex:653 : Unable to get index for interface vnet5: No such device
2014-01-17 14:56:13.260+0000: 14986: error : virNWFilterDHCPSnoopEnd:2131 : internal error ifname "vnet3" not in key map
2014-01-17 14:56:13.287+0000: 14986: error : virNetDevGetIndex:653 : Unable to get index for interface vnet3: No such device
2014-01-17 14:56:15.909+0000: 14991: error : virNWFilterDHCPSnoopEnd:2131 : internal error ifname "vnet4" not in key map
2014-01-17 14:56:15.926+0000: 14991: error : virNetDevGetIndex:653 : Unable to get index for interface vnet4: No such device
2014-01-17 14:56:18.128+0000: 14991: error : virNWFilterDHCPSnoopEnd:2131 : internal error ifname "vnet0" not in key map
2014-01-17 14:56:18.141+0000: 14991: error : virNetDevGetIndex:653 : Unable to get index for interface vnet0: No such device
2014-01-17 14:56:20.379+0000: 14986: error : virNWFilterDHCPSnoopEnd:2131 : internal error ifname "vnet2" not in key map
2014-01-17 14:56:20.381+0000: 14986: error : virNetDevGetIndex:653 : Unable to get index for interface vnet2: No such device
2014-01-17 15:02:42.376+0000: 14983: error : qemuDomainSnapshotFSFreeze:11112 : argument unsupported: QEMU guest agent is not configured

Comment 22 Eric Harney 2014-01-17 16:12:16 UTC
libvirtError: virDomainGetBlockJobInfo() failed

This comes from not having a sufficient version of libvirt for bug 1038815, need libvirt-0.10.2-29.el6_5.2.

Your box has libvirt-0.10.2-29.el6.x86_64.
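The two NVRs differ only in the dist tag, which is easy to overlook. A rough way to compare them (a simplified numeric-segment ordering that works for these two strings; real RPM comparison uses rpmvercmp):

```python
import re

def nvr_key(nvr):
    """Turn e.g. '0.10.2-29.el6_5.2' into a sortable key.

    Simplified sketch: splits on '.', '_' and '-' and compares numeric
    segments as integers.  Not full rpmvercmp semantics.
    """
    parts = re.split(r"[._-]", nvr)
    return tuple(int(p) if p.isdigit() else p for p in parts)

installed = "0.10.2-29.el6"
required = "0.10.2-29.el6_5.2"
assert nvr_key(installed) < nvr_key(required)  # the box needs the update
```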

Comment 26 Dafna Ron 2014-01-21 18:03:55 UTC
deleting a snapshot still fails when it is the only snapshot for a volume.

after discussion with devel, this bug was verified using a volume with more than one snapshot:

1. create a volume
2. boot instance from the volume
3. create two snapshots
4. delete one. 

the delete succeeded. 

moving to verified on openstack-cinder-2013.2.1-5.el6ost.noarch

Comment 29 Lon Hohberger 2014-02-04 17:20:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2014-0046.html

Comment 30 Eric Harney 2014-05-20 13:48:04 UTC
*** Bug 1056469 has been marked as a duplicate of this bug. ***

