Bug 1087576 - snapshot (create image) from vm fails
Summary: snapshot (create image) from vm fails
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-nova
Version: 4.0
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 6.0 (Juno)
Assignee: Pádraig Brady
QA Contact: Ami Jeain
URL:
Whiteboard:
Depends On:
Blocks: 1040649 1104759
 
Reported: 2014-04-14 16:58 UTC by satya routray
Modified: 2023-09-18 09:58 UTC
CC: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-08-14 10:24:24 UTC
Target Upstream Version:
Embargoed:



Description satya routray 2014-04-14 16:58:52 UTC
Description of problem:
Not able to snapshot (create an image of) a VM.

Version-Release number of selected component (if applicable):
Red Hat OpenStack 4.0

How reproducible:


Steps to Reproduce:
1. Install Red Hat OpenStack 4.0 (Havana) with Ceph as the storage backend.
2. Create a VM.
3. Try to create a snapshot of the VM (see the sketch after this list).
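
For context, the same operation driven through the Python API looks roughly like this. A minimal sketch against the Havana-era python-novaclient; the credentials, endpoint, and names are placeholders, not values from this deployment:

    from novaclient.v1_1 import client

    # Placeholder Keystone credentials; substitute real values.
    nova = client.Client('admin', 'SECRET', 'admin',
                         'http://keystone.example.com:5000/v2.0')

    server = nova.servers.find(name='test-vm')

    # Asks nova-compute to snapshot the instance and upload it to Glance;
    # this is the call that ends in the traceback below.
    image_id = nova.servers.create_image(server, 'test-vm-snap')
    print(nova.images.get(image_id).status)  # expected: SAVING, then ACTIVE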

Actual results:

The snapshot (VM image) vanished.

Checked the compute log:
<snip>
2014-04-09 17:31:27.986 4657 AUDIT nova.compute.manager [req-4c0c83c3-5ce7-48b0-b587-070ef3090644 27e1d0e4e3d64d2d82d77c3531247e8d a82a410d10764767896ca0dac97f6fbf] [instance: 69ce68ef-ab83-4c42-bd71-e39fcf27dc02] instance snapshotting
2014-04-09 17:31:28.236 4657 INFO nova.virt.libvirt.driver [req-4c0c83c3-5ce7-48b0-b587-070ef3090644 27e1d0e4e3d64d2d82d77c3531247e8d a82a410d10764767896ca0dac97f6fbf] [instance: 69ce68ef-ab83-4c42-bd71-e39fcf27dc02] Beginning live snapshot process
2014-04-09 17:31:28.649 4657 INFO nova.virt.libvirt.driver [req-4c0c83c3-5ce7-48b0-b587-070ef3090644 27e1d0e4e3d64d2d82d77c3531247e8d a82a410d10764767896ca0dac97f6fbf] [instance: 69ce68ef-ab83-4c42-bd71-e39fcf27dc02] Snapshot extracted, beginning image upload
2014-04-09 17:31:29.145 4657 ERROR nova.openstack.common.rpc.amqp [req-4c0c83c3-5ce7-48b0-b587-070ef3090644 27e1d0e4e3d64d2d82d77c3531247e8d a82a410d10764767896ca0dac97f6fbf] Exception during message handling
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp Traceback (most recent call last):
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/amqp.py", line 461, in _process_data
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     **args)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/openstack/common/rpc/dispatcher.py", line 172, in dispatch
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     result = getattr(proxyobj, method)(ctxt, **kwargs)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 353, in decorated_function
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 90, in wrapped
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     payload)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/exception.py", line 73, in wrapped
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     return f(self, context, *args, **kw)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 243, in decorated_function
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     pass
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 229, in decorated_function
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 271, in decorated_function
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     e, sys.exc_info())
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 258, in decorated_function
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     return function(self, context, *args, **kwargs)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 319, in decorated_function
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     % image_id, instance=instance)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 309, in decorated_function
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     *args, **kwargs)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2308, in snapshot_instance
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     task_states.IMAGE_SNAPSHOT)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/compute/manager.py", line 2339, in _snapshot_instance
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     update_task_state)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1408, in snapshot
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     image_format)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/nova/virt/libvirt/driver.py", line 1504, in _live_snapshot
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     libvirt.VIR_DOMAIN_BLOCK_REBASE_SHALLOW)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 179, in doit
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     result = proxy_call(self._autowrap, f, *args, **kwargs)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 139, in proxy_call
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     rv = execute(f,*args,**kwargs)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib/python2.6/site-packages/eventlet/tpool.py", line 77, in tworker
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     rv = meth(*args,**kwargs)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp   File "/usr/lib64/python2.6/site-packages/libvirt.py", line 626, in blockRebase
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp     if ret == -1: raise libvirtError ('virDomainBlockRebase() failed', dom=self)
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp libvirtError: unsupported configuration: block copy is not supported with this QEMU binary    <<--------- the actual issue
2014-04-09 17:31:29.145 4657 TRACE nova.openstack.common.rpc.amqp
2014-04-09 17:31:32.761 4657 AUDIT nova.compute.resource_tracker [-] Auditing locally available compute resources
2014-04-09 17:31:33.483 4657 AUDIT nova.compute.resource_tracker [-] Free ram (MB): 193045
2014-04-09 17:31:33.484 4657 AUDIT nova.compute.resource_tracker [-] Free disk (GB): 49
2014-04-09 17:31:33.484 4657 AUDIT nova.compute.resource_tracker [-] Free VCPUS: 31

</snip>
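
The failing call is virDomainBlockRebase(), and whether the underlying QEMU supports block copy depends on the hypervisor build. For anyone reproducing this, the versions in play can be checked directly on the compute node with standard libvirt-python calls; a quick probe, for reference only:

    import libvirt

    # Run on the affected compute node.
    conn = libvirt.openReadOnly('qemu:///system')

    # Both return versions encoded as major*1000000 + minor*1000 + release.
    print('libvirt:', conn.getLibVersion())
    print('hypervisor (qemu):', conn.getVersion())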



Expected results:
The snapshot should be created without any issues.

Comment 4 Nikola Dipanov 2014-04-29 13:22:56 UTC
Looking at the code and the trace posted here, it seems that the snapshot code in the libvirt driver fails to recognize that we are running with rbd disks, since it should never reach the _live_snapshot method.

The code that checks for this is

        if self.has_min_version(MIN_LIBVIRT_LIVESNAPSHOT_VERSION,
                                MIN_QEMU_LIVESNAPSHOT_VERSION,
                                REQ_HYPERVISOR_LIVESNAPSHOT) \
                and not source_format == "lvm" and not source_format == 'rbd':
            live_snapshot = True
            # Abort is an idempotent operation, so make sure any block
            # jobs which may have failed are ended. This operation also
            # confirms the running instance, as opposed to the system as a
            # whole, has a new enough version of the hypervisor (bug 1193146).
            try:
                virt_dom.blockJobAbort(disk_path, 0)
            except libvirt.libvirtError as ex:
                error_code = ex.get_error_code()
                if error_code == libvirt.VIR_ERR_CONFIG_UNSUPPORTED:
                    live_snapshot = False
                else:
                    pass

Several things could have gone wrong here - either we don't deduce the source format correctly, or the version of libvirt we ship does not raise libvirt.VIR_ERR_CONFIG_UNSUPPORTED.

@satya
It would be good to get the libvirt XML of the instance that causes this, so that we can rule out the source_format issue.
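
For reference, this is the kind of check we would do against the dumped XML. A minimal sketch; the sample XML and pool/volume names are illustrative, not taken from the affected instance:

    import xml.etree.ElementTree as ET

    # An rbd-backed disk shows up as type='network' with
    # <source protocol='rbd' .../>; names below are placeholders.
    SAMPLE_XML = """
    <domain type='kvm'>
      <devices>
        <disk type='network' device='disk'>
          <driver name='qemu' type='raw'/>
          <source protocol='rbd' name='vms/69ce68ef-ab83-4c42-bd71-e39fcf27dc02_disk'/>
          <target dev='vda' bus='virtio'/>
        </disk>
      </devices>
    </domain>
    """

    def disk_source_formats(domain_xml):
        """Return (disk_type, protocol) pairs for each disk in the domain XML."""
        root = ET.fromstring(domain_xml)
        results = []
        for disk in root.findall("./devices/disk"):
            source = disk.find("source")
            protocol = source.get("protocol") if source is not None else None
            results.append((disk.get("type"), protocol))
        return results

    # If the deduction were working, a ('network', 'rbd') disk should have
    # steered Nova away from the _live_snapshot path.
    print(disk_source_formats(SAMPLE_XML))  # -> [('network', 'rbd')]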

If it's the error raised by blockJobAbort that is the problem, we can possibly work around it in the Nova code, but we'll need confirmation from someone on the libvirt team that this is not an actual libvirt issue, so adding Dan to the bug.
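
Such a workaround would be along these lines: tolerate more error codes from the blockJobAbort probe and fall back to a cold snapshot. A rough sketch only, not a tested patch; which additional codes (if any) older libvirt builds actually raise is exactly what we need libvirt's input on:

    import libvirt

    # VIR_ERR_CONFIG_UNSUPPORTED is what the current Nova code expects;
    # VIR_ERR_OPERATION_INVALID is a guess at what other builds might
    # return instead (to be confirmed by the libvirt team).
    _LIVE_SNAPSHOT_UNSUPPORTED = (
        libvirt.VIR_ERR_CONFIG_UNSUPPORTED,
        libvirt.VIR_ERR_OPERATION_INVALID,
    )

    def can_live_snapshot(virt_dom, disk_path):
        """Probe whether the running instance supports block jobs."""
        try:
            # Idempotent no-op if no block job is active (the same probe
            # the current code uses).
            virt_dom.blockJobAbort(disk_path, 0)
        except libvirt.libvirtError as ex:
            if ex.get_error_code() in _LIVE_SNAPSHOT_UNSUPPORTED:
                return False  # fall back to a cold snapshot
            # Anything else keeps the current behaviour of pressing on.
        return True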

Comment 6 Steve Reichard 2014-04-29 14:03:53 UTC
The error occurred when running community bits. Still waiting for Inktank and the team to get together and redeploy the Enterprise versions in place of the Ceph community bits.

Comment 10 Stephen Gordon 2014-05-23 18:03:28 UTC
Steve, I notice you have marked this as a blocker for the reference architecture team. Is there any chance you could assist us with reproducing it so we can nail this down?

Comment 11 Stephen Gordon 2014-05-23 18:04:50 UTC
Satya, can you attach the output of this command to the bug for us:

# rpm -qa

We need to nail down exactly which versions of some key packages are installed to isolate the issue and determine whether or not this has already been resolved by other activities.

Comment 12 Russell Bryant 2014-07-02 15:53:50 UTC
Re-assigning to pbrady.  From the Nova side, he's taking a look at ceph integration now.

Comment 13 Stephen Gordon 2014-07-30 15:51:54 UTC
We need a response to my query in comment #11 and an indication that this is still relevant.

Comment 14 satya routray 2014-08-14 06:33:08 UTC
Works fine with Inktank Ceph.

Comment 15 Stephen Gordon 2014-08-14 10:24:24 UTC
Thanks for the update.

