Bug 1190571 - [RHEV 3.4.5] Storage Live Migration fails
Keywords:
Status: CLOSED DUPLICATE of bug 1190742
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 3.4.4
Hardware: x86_64
OS: Linux
Severity: urgent
Priority: urgent
Target Milestone: ---
Target Release: 3.4.6
Assignee: Adam Litke
QA Contact: Aharon Canan
URL:
Whiteboard: storage
Depends On:
Blocks:
Reported: 2015-02-09 08:02 UTC by Martin Tessun
Modified: 2019-05-20 11:32 UTC (History)
CC List: 16 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-02-12 15:45:22 UTC
oVirt Team: Storage


Attachments


Links:
Red Hat Knowledge Base (Solution) 1345043

Description Martin Tessun 2015-02-09 08:02:28 UTC
Description of problem:
After upgrading to RHEV 3.4.5 and RHEL 6.6 on the hypervisors, live storage migration fails at the first step (creating the snapshot).

Version-Release number of selected component (if applicable):
RHEV-M 3.4.5 / vdsm vdsm-4.14.18-6.el6ev.x86_64

How reproducible:
always

Steps to Reproduce:
0. (Upgrade RHEV-M to 3.4.5 and the hypervisors to RHEL 6.6.)
1. Select a running VM
2. Select a disk and click "Move" in the Admin Web UI
3. Start the storage live migration

Actual results:
Live migration fails with the following errors (although the snapshot seems to be created correctly):
2015-Feb-09, 08:52 Failed to complete snapshot 'Auto-generated for Live Storage Migration' creation for VM 'mtessun-ipa'.

2015-Feb-09, 08:52 Failed to create live snapshot 'Auto-generated for Live Storage Migration' for VM 'mtessun-ipa'. VM restart is recommended.

2015-Feb-09, 08:52 Snapshot 'Auto-generated for Live Storage Migration' creation for VM 'mtessun-ipa' was initiated by admin.

Expected results:
Live storage migration should succeed.


Additional info:

Comment 4 Liron Aravot 2015-02-10 09:15:06 UTC
This seems to be a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1115126
and appears related to the libvirt version in use.
This issue was observed by ebenahar as well.

Adam, can you take a look? (as you looked into the bug I've linked)

thanks,
Liron

Comment 5 Liron Aravot 2015-02-10 09:17:40 UTC
Sorry, not a duplicate, but a clone.

Comment 6 Allon Mureinik 2015-02-11 16:04:51 UTC
(In reply to Liron Aravot from comment #4)
> This seems to be a duplicate of
> https://bugzilla.redhat.com/show_bug.cgi?id=1115126
> and appears related to the libvirt version in use.
> This issue was observed by ebenahar as well.
> 
> Adam, can you take a look? (as you looked into the bug I've linked)
> 
> thanks,
> Liron

Adam, can you please confirm/refute?

Comment 7 Adam Litke 2015-02-11 21:41:02 UTC
That seems like the most likely explanation, but without a full vdsm log from a failure I can't confirm it. I need to see the error messages from the libvirt calls, which aren't captured by this grep of the vdsm logs.

Martin Tessun, could you attach a full vdsm log from when the problem happened?

Comment 9 Martin Tessun 2015-02-12 09:56:58 UTC
Hi Adam,

log is attached.

The relevant part seems to be this snippet:

Thread-153::INFO::2015-02-06 16:09:16,701::clientIF::321::vds::(prepareVolumePath) prepared volume path: /rhev/data-center/84ee5743-d9e6-40f7-bc84-3ee68455acc6/f6589b8e-4eed-4e3a-91e2-5726ba39a4dd/images/a6b6478b-a1a7-44a8-9669-7d6ba236e774/3a436041-a566-4bca-83b2-7cc999339d3a
Thread-153::DEBUG::2015-02-06 16:09:16,701::vm::4075::vm.Vm::(snapshot) vmId=`11e5229d-d751-4f48-ba1f-d7f0b8f207f9`::<domainsnapshot>
        <disks>
                <disk name="vda" snapshot="external" type="block">
                        <source dev="/rhev/data-center/84ee5743-d9e6-40f7-bc84-3ee68455acc6/f6589b8e-4eed-4e3a-91e2-5726ba39a4dd/images/a6b6478b-a1a7-44a8-9669-7d6ba236e774/3a436041-a566-4bca-83b2-7cc999339d3a" type="block"/>
                </disk>
        </disks>
</domainsnapshot>

Thread-153::DEBUG::2015-02-06 16:09:16,708::libvirtconnection::124::root::(wrapper) Unknown libvirterror: ecode: 67 edom: 35 level: 2 message: unsupported configuration: source for disk 'vda' is not a regular file; refusing to generate external snapshot name
Thread-153::DEBUG::2015-02-06 16:09:16,708::vm::4096::vm.Vm::(snapshot) vmId=`11e5229d-d751-4f48-ba1f-d7f0b8f207f9`::Snapshot failed using the quiesce flag, trying again without it (unsupported configuration: source for disk 'vda' is not a regular file; refusing to generate external snapshot name)
Thread-153::DEBUG::2015-02-06 16:09:16,714::libvirtconnection::124::root::(wrapper) Unknown libvirterror: ecode: 67 edom: 35 level: 2 message: unsupported configuration: source for disk 'vda' is not a regular file; refusing to generate external snapshot name
Thread-153::ERROR::2015-02-06 16:09:16,714::vm::4100::vm.Vm::(snapshot) vmId=`11e5229d-d751-4f48-ba1f-d7f0b8f207f9`::Unable to take snapshot
Traceback (most recent call last):
  File "/usr/share/vdsm/vm.py", line 4098, in snapshot
    self._dom.snapshotCreateXML(snapxml, snapFlags)
  File "/usr/share/vdsm/vm.py", line 928, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/vdsm/libvirtconnection.py", line 92, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 1679, in snapshotCreateXML
    if ret is None:raise libvirtError('virDomainSnapshotCreateXML() failed', dom=self)
libvirtError: unsupported configuration: source for disk 'vda' is not a regular file; refusing to generate external snapshot name
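The refusal in the traceback comes from libvirt's external snapshot handling: when libvirt cannot use the supplied <source> for a disk in the snapshot request, it falls back to deriving the new image name from the disk's original source path, and it refuses to do so when that source is a block device rather than a regular file. Purely as an illustration of the domainsnapshot schema (the path below is made up), a file-backed request of the same shape gives libvirt an explicit destination, so it never needs to generate a name itself:

```xml
<domainsnapshot>
  <disks>
    <disk name="vda" snapshot="external">
      <!-- hypothetical path: with a regular file named as the new
           external image, libvirt does not have to generate one -->
      <source file="/var/lib/libvirt/images/vda.snap.qcow2"/>
    </disk>
  </disks>
</domainsnapshot>
```

In the failing request above, vdsm did supply a block-device <source dev="..."/>; the libvirt build on the RHEL 6.6 hypervisor apparently does not honor it, which is why this is being treated as a clone of the libvirt-related bug.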

Cheers,
Martin
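The two "Unknown libvirterror" lines in the log reflect vdsm's fallback behavior: it first attempts the snapshot with the quiesce flag and, if libvirt raises, retries once without it. A minimal sketch of that retry pattern, with `create_snapshot`, `SnapshotError`, and the `QUIESCE` constant as hypothetical stand-ins for the real vdsm/libvirt API:

```python
QUIESCE = 1 << 0  # illustrative flag value, not libvirt's actual constant


class SnapshotError(Exception):
    """Stand-in for libvirtError raised by a failed snapshot call."""


def snapshot_with_quiesce_fallback(create_snapshot, snap_xml, flags=0):
    """Try the snapshot with the quiesce flag; on failure retry without it.

    Mirrors the log line "Snapshot failed using the quiesce flag,
    trying again without it". If the second attempt also fails (as in
    this bug, where the error is unrelated to quiescing), the exception
    propagates to the caller.
    """
    try:
        return create_snapshot(snap_xml, flags | QUIESCE)
    except SnapshotError:
        return create_snapshot(snap_xml, flags)
```

In this bug both attempts fail with the same "not a regular file" error, which is how the traceback in comment 9 is ultimately raised.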

Comment 10 Adam Litke 2015-02-12 14:56:24 UTC
Thanks Martin,

This is indeed a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1115126

Comment 11 Allon Mureinik 2015-02-12 15:45:22 UTC
(In reply to Adam Litke from comment #10)
> Thanks Martin,
> 
> This is indeed a duplicate of
> https://bugzilla.redhat.com/show_bug.cgi?id=1115126

Thanks, guys.
Closing as a duplicate of bug 1190742 (the 3.4.z clone of bug 1115126).

*** This bug has been marked as a duplicate of bug 1190742 ***

