Bug 1523614 - Copy image to a block storage destination does not work after disk extension in a snapshot in DC pre-4.0
Status: CLOSED ERRATA
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.1.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ovirt-4.2.1
Target Release: ---
Assigned To: Benny Zlotnik
QA Contact: Kevin Alon Goldblatt
Depends On: 1527898
Blocks:
Reported: 2017-12-08 06:51 EST by Roman Hodain
Modified: 2018-05-15 13:47 EDT
CC: 17 users

See Also:
Fixed In Version: vdsm v4.20.13
Doc Type: Known Issue
Doc Text:
Previously, when a user attempted to move a disk with a snapshot that had been created before the disk was extended, the operation failed in storage domains whose data center was 4.0 or earlier. This occurred because "qemu-img convert" with compat=0.10 images interprets the space after the backing file as zeroes, sometimes causing the output disk to be larger than the logical volume created for it. In the current release, an attempt to move such a disk is blocked with an error message stating that the disk's snapshot must be deleted before moving the disk.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-05-15 13:46:12 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Storage
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---




External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 3294721 None None None 2017-12-19 19:05 EST
oVirt gerrit 85783 master MERGED core: block disk copy operations for qcow compat 0.10 2018-01-08 07:01 EST
Red Hat Product Errata RHEA-2018:1488 None None None 2018-05-15 13:47 EDT

Description Roman Hodain 2017-12-08 06:51:38 EST
Description of problem:
When a snapshot is created on top of a thin-provisioned disk and the disk is then extended, live storage migration fails while syncing the images between the storage domains.

Version-Release number of selected component (if applicable):
vdsm-4.19.36-1.el7ev
vdsm-4.17.43-1.el7ev

How reproducible:
100%

Steps to Reproduce:
1. Create a VM with a thin-provisioned disk on a block storage domain.
2. Create a snapshot.
3. Extend the disk.
4. Live migrate the disk to another block storage domain.

Actual results:
46524dd8-8079-4b8e-9dc4-377414e86480::ERROR::2017-12-04 01:24:15,005::image::464::Storage.Image::(_interImagesCopy) Copy image error: image=eb2241b7-a2d3-4047-b865-570176de709c, src domain=b30263e4-49d8-468b-9c71-d68d5f8d79e2, dst domain=061d3f67-c803-4a02-8a4b-06d6bc1802be
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/image.py", line 455, in _interImagesCopy
    backingFormat=backingFormat)
  File "/usr/lib/python2.7/site-packages/vdsm/qemuimg.py", line 207, in convert
    raise QImgError(rc, out, err)
QImgError: ecode=1, stdout=[], stderr=['qemu-img: error while writing sector 276866432: No space left on device', 'qemu-img: Failed to flush the refcount block cache: No space left on device'], message=None

Expected results:
The clone operation works

Additional info:
The command triggering this issue is:
/usr/bin/qemu-img convert ... -f qcow2 /rhev/data-center/mnt/blockSD/b30263e4-49d8-468b-9c71-d68d5f8d79e2/images/eb2241b7-a2d3-4047-b865-570176de709c/0a795045-6a48-41f9-8a3f-b6fa25682f98 -O qcow2 -o compat=0.10,backing_file=9dd0fb65-1881-4fea-bc7c-498ef367f1f8,backing_fmt=qcow2 /rhev/data-center/mnt/blockSD/061d3f67-c803-4a02-8a4b-06d6bc1802be/images/eb2241b7-a2d3-4047-b865-570176de709c/0a795045-6a48-41f9-8a3f-b6fa25682f98 (cwd /rhev/data-center/mnt/blockSD/b30263e4-49d8-468b-9c71-d68d5f8d79e2/images/eb2241b7-a2d3-4047-b865-570176de709c)

The volume and its parent are:
0a795045-6a48-41f9-8a3f-b6fa25682f98	b30263e4-49d8-468b-9c71-d68d5f8d79e2	106.00g	IU_eb2241b7-a2d3-4047-b865-570176de709c,MD_5,PU_9dd0fb65-1881-4fea-bc7c-498ef367f1f8
9dd0fb65-1881-4fea-bc7c-498ef367f1f8	b30263e4-49d8-468b-9c71-d68d5f8d79e2	65.00g	IU_eb2241b7-a2d3-4047-b865-570176de709c,MD_4,PU_00000000-0000-0000-0000-000000000000

The volumes on the destination are created in the following way:
... lvcreate ... --size 1024m --addtag OVIRT_VOL_INITIALIZING --name 9dd0fb65-1881-4fea-bc7c-498ef367f1f8 061d3f67-c803-4a02-8a4b-06d6bc1802be (cwd None)
.. lvcreate ... --size 1024m --addtag OVIRT_VOL_INITIALIZING --name 0a795045-6a48-41f9-8a3f-b6fa25682f98 061d3f67-c803-4a02-8a4b-06d6bc1802be (cwd None)
...
... lvextend ... --autobackup n --size 66560m 061d3f67-c803-4a02-8a4b-06d6bc1802be/9dd0fb65-1881-4fea-bc7c-498ef367f1f8 (cwd None)
... lvextend ... --autobackup n --size 108544m 061d3f67-c803-4a02-8a4b-06d6bc1802be/0a795045-6a48-41f9-8a3f-b6fa25682f98 (cwd None)

The sizes match.

The qcow2 structure is created in this way:

... /usr/bin/qemu-img create -f qcow2 -o compat=0.10 /rhev/data-center/c36e24b4-16e5-45ca-838d-dd054563401e/061d3f67-c803-4a02-8a4b-06d6bc1802be/images/eb2241b7-a2d3-4047-b865-570176de709c/9dd0fb65-1881-4fea-bc7c-498ef367f1f8 68719476736 (cwd None)
... /usr/bin/qemu-img create -f qcow2 -o compat=0.10 -b 9dd0fb65-1881-4fea-bc7c-498ef367f1f8 -F qcow2 /rhev/data-center/c36e24b4-16e5-45ca-838d-dd054563401e/061d3f67-c803-4a02-8a4b-06d6bc1802be/images/eb2241b7-a2d3-4047-b865-570176de709c/0a795045-6a48-41f9-8a3f-b6fa25682f98 (cwd /rhev/data-center/c36e24b4-16e5-45ca-838d-dd054563401e/061d3f67-c803-4a02-8a4b-06d6bc1802be/images/eb2241b7-a2d3-4047-b865-570176de709c)

Then we convert the parent (successfully):
... /usr/bin/qemu-img convert -t none -T none -f qcow2 /rhev/data-center/mnt/blockSD/b30263e4-49d8-468b-9c71-d68d5f8d79e2/images/eb2241b7-a2d3-4047-b865-570176de709c/9dd0fb65-1881-4fea-bc7c-498ef367f1f8 -O qcow2 -o compat=0.10 /rhev/data-center/mnt/blockSD/061d3f67-c803-4a02-8a4b-06d6bc1802be/images/eb2241b7-a2d3-4047-b865-570176de709c/9dd0fb65-1881-4fea-bc7c-498ef367f1f8 (cwd None)

and then converting its leaf fails with the error mentioned above:
... /usr/bin/qemu-img convert ... -f qcow2 /rhev/data-center/mnt/blockSD/b30263e4-49d8-468b-9c71-d68d5f8d79e2/images/eb2241b7-a2d3-4047-b865-570176de709c/0a795045-6a48-41f9-8a3f-b6fa25682f98 -O qcow2 -o compat=0.10,backing_file=9dd0fb65-1881-4fea-bc7c-498ef367f1f8,backing_fmt=qcow2 /rhev/data-center/mnt/blockSD/061d3f67-c803-4a02-8a4b-06d6bc1802be/images/eb2241b7-a2d3-4047-b865-570176de709c/0a795045-6a48-41f9-8a3f-b6fa25682f98 (cwd /rhev/data-center/mnt/blockSD/b30263e4-49d8-468b-9c71-d68d5f8d79e2/images/eb2241b7-a2d3-4047-b865-570176de709c)
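The numbers line up with the error. A rough arithmetic sketch (using only the failing sector from the traceback and the leaf's `lvextend` size quoted above) shows the write lands well past the end of the destination logical volume:

```python
# Sketch: where does the failing qemu-img write land relative to the leaf LV?
# All figures are taken from the log excerpts in this report.

SECTOR_SIZE = 512             # qemu-img reports offsets in 512-byte sectors
failing_sector = 276866432    # "error while writing sector 276866432"
leaf_lv_mib = 108544          # "lvextend ... --size 108544m"

write_offset_gib = failing_sector * SECTOR_SIZE / 2**30
leaf_lv_gib = leaf_lv_mib / 1024

print(f"failing write offset: {write_offset_gib:.1f} GiB")  # 132.0 GiB
print(f"leaf LV size:         {leaf_lv_gib:.1f} GiB")       # 106.0 GiB
print(f"overshoot:            {write_offset_gib - leaf_lv_gib:.1f} GiB")
```

This matches the Doc Text's explanation: with compat=0.10 images, `qemu-img convert` treats the region past the backing file's end as zeroes to be written, so the output can grow beyond the LV that was sized from the source image's allocation.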
Comment 3 Allon Mureinik 2017-12-12 10:53:45 EST
Roman, can you share the qemu-kvm-rhev versions you're using?
Comment 4 Allon Mureinik 2017-12-12 10:54:46 EST
(In reply to Allon Mureinik from comment #3)
> Roman, can you share the qemu-kvm-rhev versions you're using?
qemu-img-rhev, of course.
Comment 5 Germano Veit Michel 2017-12-14 21:11:00 EST
One more, with ELS:

vdsm-4.17.35-1.el7ev.noarch
qemu-kvm-rhev-2.3.0-31.el7_2.21.x86_64
Comment 8 Allon Mureinik 2017-12-20 07:41:35 EST
Just to clarify our action plan:
There is a real gap in qemu-img[-[rh]ev]. Fixing it is tracked by bug 1527898.

From RHV's side, we need two BZs:
1. A dependency bump to consume the fix for bug 1527898 once it's ready.
2. Actual handling in RHV that recognizes a situation where we know the copy will fail, and blocks it with a validation message instructing the user which snapshot(s) to merge so the copy can succeed.
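Point 2 could take roughly this shape (a sketch only, with hypothetical names and a plain-dict disk model; the real implementation is the ovirt-engine patch linked under External Trackers):

```python
# Sketch of the validation described in point 2 above. All names here are
# hypothetical; the actual fix lives in the linked ovirt-engine patch.

QCOW_COMPAT_0_10 = "0.10"

def validate_disk_copy(disk):
    """Return (allowed, message).

    Block the copy when the disk is a qcow2 compat=0.10 image that has
    snapshots, since qemu-img convert may then write past the end of the
    destination LV (see the traceback in the description)."""
    if disk.get("qcow_compat") == QCOW_COMPAT_0_10 and disk.get("snapshots"):
        return (False,
                "Cannot copy disk '%s': delete its snapshot(s) before "
                "moving the disk." % disk["name"])
    return (True, "")

# A disk resembling the one in this report is blocked...
blocked, msg = validate_disk_copy(
    {"name": "vm_Disk1", "qcow_compat": "0.10", "snapshots": ["snap1"]})
# ...while a compat=1.1 disk with snapshots passes validation.
ok, _ = validate_disk_copy(
    {"name": "vm_Disk2", "qcow_compat": "1.1", "snapshots": ["snap1"]})
```

The error message mirrors the one described in the Doc Text: the snapshot must be deleted (merged) before the disk can be moved.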
Comment 9 Allon Mureinik 2018-01-08 07:04:41 EST
Benny - the included patch clearly handles live and cold move flows. Does it also cover importing [without collapse] to a V3 block storage domain?
(if it doesn't, please open another BZ on it and link back here).

Regardless, this is a non-obvious patch - please add some doctext to explain it.
Comment 10 Benny Zlotnik 2018-01-09 03:42:29 EST
As discussed offline, this issue does not apply to the import scenario because the LV is created large enough.
Comment 11 RHV Bugzilla Automation and Verification Bot 2018-01-12 09:41:13 EST
INFO: Bug status (ON_QA) wasn't changed but the following should be fixed:

[Project 'ovirt-engine'/Component 'vdsm' mismatch]

For more info please contact: rhv-devops@redhat.com
Comment 12 Allon Mureinik 2018-01-15 08:36:53 EST
(In reply to RHV Bugzilla Automation and Verification Bot from comment #11)
> INFO: Bug status (ON_QA) wasn't changed but the following should be fixed:
> 
> [Project 'ovirt-engine'/Component 'vdsm' mismatch]
> 
> For more info please contact: rhv-devops@redhat.com
Moving to engine.
Comment 13 Kevin Alon Goldblatt 2018-01-23 06:53:26 EST
Verified with the following code:
--------------------------------------
ovirt-engine-4.2.1.2-0.1.el7.noarch
vdsm-4.20.14-1.el7ev.x86_64


Verified with the following scenario:
--------------------------------------
Steps to Reproduce:
1. Create a VM with a thin-provisioned disk on a block storage domain.
2. Create a snapshot.
3. Live migrate the disk to another block storage domain.

Actual results:
Live migration is successful. No errors reported.


Moving to VERIFIED
Comment 19 errata-xmlrpc 2018-05-15 13:46:12 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2018:1488
