Bug 1707707 - Ovirt: Can't upload Disk Snapshots with size >1G to iSCSI storage using Java/Python SDK
Summary: Ovirt: Can't upload Disk Snapshots with size >1G to iSCSI storage using Java/Python SDK
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.4.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high (1 vote)
Target Milestone: ovirt-4.4.1
Target Release: 4.4.1.1
Assignee: Daniel Erez
QA Contact: Evelina Shames
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-05-08 07:19 UTC by francisco.garcia
Modified: 2020-07-08 08:27 UTC
CC List: 7 users

Fixed In Version: ovirt-engine-4.4.1.1
Clone Of:
Environment:
Last Closed: 2020-07-08 08:27:46 UTC
oVirt Team: Storage
Embargoed:
pm-rhel: ovirt-4.4+


Attachments
host's-imatgeio.log (19.17 KB, text/plain) - 2019-05-08 07:19 UTC, francisco.garcia
qemu-imgs-NFS.txt (1.69 KB, text/plain) - 2019-05-08 07:20 UTC, francisco.garcia
qemu-imgs-iSCSI.txt (1.82 KB, text/plain) - 2019-05-08 07:22 UTC, francisco.garcia
Logs Ovirt (139.17 KB, application/gzip) - 2019-05-20 09:10 UTC, francisco.garcia
Used python script for upload (12.38 KB, text/x-python) - 2020-05-10 08:34 UTC, shani
Log for the python script used (12.38 KB, text/plain) - 2020-05-10 08:35 UTC, shani
engine log (301.26 KB, text/plain) - 2020-05-10 08:35 UTC, shani
deamon log (231.11 KB, text/plain) - 2020-05-10 08:36 UTC, shani
proxy log (90.32 KB, text/plain) - 2020-05-10 08:36 UTC, shani
cdsm log (346.74 KB, text/plain) - 2020-05-10 08:37 UTC, shani


Links
oVirt gerrit 108991 (master, MERGED): core: set image initial size on CreateSnapshot - Last Updated 2020-09-16 16:39:42 UTC

Description francisco.garcia 2019-05-08 07:19:33 UTC
Created attachment 1565497 [details]
host's-imatgeio.log

Description of the problem:
I am not able to restore (upload) a VM when its disk (either thin or thick provisioned) is located on iSCSI storage and the VM has at least one snapshot larger than 1 GiB. I am using the REST API through the Java SDK.
I am following the same upload procedure described here: https://ovirt.org/develop/release-management/features/storage/backup-restore-disk-snapshots.html, with small adjustments to the disk format (COW, sparse, etc., depending on the type of VM being restored).
When I try to restore any VM under these conditions, all snapshot disks smaller than 1 GiB are uploaded correctly. But when I upload a snapshot disk larger than 1 GiB, the following error occurs:

The error triggered from Java SDK is: "The server response was 403 in the range request {bytes 1073741824-1207959551/4831838208}"
In the host's imageio log: "2019-05-07 14:26:30,253 WARNING (Thread-71960) [web] ERROR [172.19.33.146] PUT /images/8d353735-0a29-463c-b772-ec37f451e2e9 [403] Requested range out of allowed range [request=0.000488]"
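
For reference, the per-snapshot upload step I follow is roughly the sketch below (a minimal Python version of the flow from the page linked above, equivalent to what I do with the Java SDK; the engine URL, credentials, disk snapshot ID and image path are placeholders):

# Minimal sketch, assuming placeholder connection details, of uploading one
# disk snapshot image through ovirt-imageio, following the flow from
# backup-restore-disk-snapshots.html. This is not the exact Java code I run.
import os
import ssl
import time
from http import client as http_client
from urllib.parse import urlparse

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://ENGINE_FQDN/ovirt-engine/api',
    username='admin@internal',
    password='PASSWORD',
    ca_file='ca.pem',
)

# Start an upload transfer for one disk snapshot.
transfers_service = connection.system_service().image_transfers_service()
transfer = transfers_service.add(
    types.ImageTransfer(
        snapshot=types.DiskSnapshot(id='DISK_SNAPSHOT_ID'),
        direction=types.ImageTransferDirection.UPLOAD,
    )
)
transfer_service = transfers_service.image_transfer_service(transfer.id)

# Wait for the transfer to leave the INITIALIZING phase.
while transfer.phase == types.ImageTransferPhase.INITIALIZING:
    time.sleep(1)
    transfer = transfer_service.get()

# PUT the image content in chunks with a Content-Range header; this is the
# request that receives the 403 "Requested range out of allowed range" reply.
url = urlparse(transfer.transfer_url or transfer.proxy_url)
context = ssl.create_default_context(cafile='ca.pem')
conn = http_client.HTTPSConnection(url.hostname, url.port, context=context)

path = 'IMAGE_PATH'
size = os.path.getsize(path)
chunk_size = 128 * 1024 * 1024
with open(path, 'rb') as f:
    offset = 0
    while offset < size:
        chunk = f.read(chunk_size)
        headers = {
            'Content-Range': 'bytes %d-%d/%d' % (offset, offset + len(chunk) - 1, size),
        }
        conn.request('PUT', url.path, body=chunk, headers=headers)
        response = conn.getresponse()
        response.read()
        if response.status not in (200, 204):
            raise RuntimeError('Upload failed: %d %s' % (response.status, response.reason))
        offset += len(chunk)

conn.close()
transfer_service.finalize()
connection.close()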


For example (a real scenario), I have these snapshot disks from a VM:
2.1G	DiskSnap_47c4e157-edd8-4bfc-b838-4d57ac8396bd_33b3e74a-cc6b-474a-97cc-960891716110_0.img -> First snap chain -> OK upload
1.1G	DiskSnap_47c4e157-edd8-4bfc-b838-4d57ac8396bd_9349e4dd-85b3-4145-8674-bb3f39546020_1.img -> Ok upload
1.1G	DiskSnap_47c4e157-edd8-4bfc-b838-4d57ac8396bd_4d10cb58-9f52-4d64-b6e9-ead4bed4c6a6_2.img -> Ok upload
4.6G	DiskSnap_47c4e157-edd8-4bfc-b838-4d57ac8396bd_429cbe6d-1148-4a84-861a-56ad6902859e_3.img -> Last snap chain -> Fail upload with previous error
[Qemu-img info in attached file: qemu-imgs-iSCSI.txt]

In the restore process, when I create the disk (in this case), I pass the following values (a rough Python SDK equivalent is sketched after the block):
Disk: {
	Name:T_iSCSI_Thin_Disk1_restore,
	Id:null,
	Interface:VIRTIO_SCSI,
	Format:COW,
	WipeAfterDelete: false,
	Shareable: false,
	Sparse: true,
	Boot:true,
	Active:true,
	Sizes:{
		Initial:4294967296,
		Actual:null,
		Total:null,
		Provisioned: 4294967296
	}
}
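
For comparison, a rough Python SDK equivalent of this disk creation (I do it through the Java SDK; the connection details and the storage domain name 'MY_ISCSI_DOMAIN' are placeholders):

# Rough sketch, with placeholder names, of creating the restore target disk
# using the values listed above (initial and provisioned size of 4 GiB).
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://ENGINE_FQDN/ovirt-engine/api',
    username='admin@internal',
    password='PASSWORD',
    ca_file='ca.pem',
)

disks_service = connection.system_service().disks_service()
disk = disks_service.add(
    types.Disk(
        name='T_iSCSI_Thin_Disk1_restore',
        format=types.DiskFormat.COW,
        sparse=True,
        wipe_after_delete=False,
        shareable=False,
        provisioned_size=4294967296,
        initial_size=4294967296,
        storage_domains=[types.StorageDomain(name='MY_ISCSI_DOMAIN')],
    )
)
connection.close()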

And the disk created is:
Disk: {
	Name:T_iSCSI_Thin_Disk1_restore,
	Id:aac28ff3-abdb-4bf9-beed-c08e7e19b0ba,
	Image:c8654d05-a18c-47bd-ab0b-f4d746e23efb,
	Format:COW,
	WipeAfterDelete: false,
	Shareable: false,
	Sparse: true,
	Sizes:{
		Initial:null,
		Actual:0,
		Total:0,
		Provisioned: 4294967296
	}
}

Therefore, the first upload can finish correctly. However, when I create snapshots with this disk, the snapshot disks have these parameters:
DiskSnapshot: {
	Id: 29dd6a18-c17c-4938-be67-d7af6de713ec,
	Disk:aac28ff3-abdb-4bf9-beed-c08e7e19b0ba,
	Snapshot:ceb2f29e-1ae8-441c-a5a7-836308cfeb8d,
	Sizes:{
		Actual:1073741824
		Provisioned:4294967296
		Total: 0
		Initial: null
	}
}
DiskSnapshot: {
	Id: 69620911-d490-4872-b353-ba29a762ea3e,
	Disk:aac28ff3-abdb-4bf9-beed-c08e7e19b0ba,
	Snapshot:c35423d5-ddd6-46f2-a5d7-0080941c3f30,
	Sizes:{
		Actual:1073741824
		Provisioned:4294967296
		Total: 0
		Initial: null
	}
}
DiskSnapshot: {
	Id: c8654d05-a18c-47bd-ab0b-f4d746e23efb,
	Disk:aac28ff3-abdb-4bf9-beed-c08e7e19b0ba,
	Snapshot:fc135efd-a67b-4eb3-bfdb-9090aa3b267e,
	Sizes:{
		Actual:4831838208
		Provisioned:4294967296
		Total: 0
		Initial: null
	}
}
DiskSnapshot: {
	Id: e3389916-4e80-4593-a116-3484f295ff7f,
	Disk:aac28ff3-abdb-4bf9-beed-c08e7e19b0ba,
	Snapshot:74aff5e7-e1c1-4f51-ae31-ea44caa180ef,
	Sizes:{
		Actual:1073741824
		Provisioned:4294967296
		Total: 0
		Initial: null
	}
}

As these values show, the actual size of the snapshot disks is reported as 1 GiB, and when I try to upload a snapshot disk larger than 1.1 GiB, the system does not allow the upload to complete and raises the error mentioned above.

However, with NFS storage the previous error does not occur. The system lets me upload disks larger than 1 GiB, and when the upload finishes, it refreshes actualSize to the real value.

For example (a real scenario), I have these snapshot disks from a VM:
3.1G	DiskSnap_985a02a0-0752-420a-a2fb-a3e89d1ac3bc_3dffb5f4-3f9c-410c-8b97-133922721f9a_0.img -> First snap chain -> OK upload
22M	  DiskSnap_985a02a0-0752-420a-a2fb-a3e89d1ac3bc_d3307b3c-f930-4faa-90cb-6aedd38bea93_1.img -> Ok upload
31M	  DiskSnap_985a02a0-0752-420a-a2fb-a3e89d1ac3bc_aa8648dd-d6c0-4d9d-8373-6764e806359f_2.img -> Ok upload
1.3G	DiskSnap_985a02a0-0752-420a-a2fb-a3e89d1ac3bc_2cf31d95-3b86-4d97-b39b-672d0066f503_3.img -> Last snap chain -> OK upload
[Qemu-img info in attached file: qemu-imgs-NFS.txt]

The restore with NFS storage follows the same process, but I change the disk format and the sparse flag, using the following values:

Disk: {
	Name:985a02a0-0752-420a-a2fb-a3e89d1ac3bc_restore,
	Id:null,
	Interface:VIRTIO_SCSI,
	Format:RAW,
	WipeAfterDelete: false,
	Shareable: false,
	Sparse: true,
	Boot:true,
	Active:true,
	Sizes:{
		Initial:3221225472,
		Actual:null,
		Total:null,
		Provisioned: 3221225472
	}
}

And the disk created was:
Disk: {
	Name:985a02a0-0752-420a-a2fb-a3e89d1ac3bc_restore,
	Id:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
	Image:9c80aead-88de-48f9-be7e-0fde917e7f55,
	Format:RAW,
	WipeAfterDelete: false,
	Shareable: false,
	Sparse: true,
	Sizes:{
		Initial:null,
		Actual:0,
		Total:0,
		Provisioned: 3221225472
	}
}


When I create snapshots with this disk, the snapshot disks have these parameters:

DiskSnapshot: {
	Id: 87c4cb08-502a-4937-bbba-6d455395e990,
	Disk:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
	Snapshot:dd257190-ef50-4990-a61f-40d84e2b08e9,
	Sizes:{
		Actual:200704
		Provisioned:3221225472
		Total: 0
		Initial: null
	}
}
DiskSnapshot: {
	Id: e7d6f1fb-ab37-45f8-8e23-5964ed194581,
	Disk:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
	Snapshot:b9f3f3d3-710e-4ce2-b05f-563227f5ec04,
	Sizes:{
		Actual:200704
		Provisioned:3221225472
		Total: 0
		Initial: null
	}
}
DiskSnapshot: {
	Id: 8ef6f502-d00f-4fa5-b407-2cdc0e876045,
	Disk:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
	Snapshot:274994de-5c2b-4b38-965f-4cadea3e0db3,
	Sizes:{
		Actual:200704
		Provisioned:3221225472
		Total: 0
		Initial: null
	}
}
DiskSnapshot: {
	Id: 9c80aead-88de-48f9-be7e-0fde917e7f55,
	Disk:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
	Snapshot:f1d7382d-4427-4e21-9544-1b3cd85f23ae,
	Sizes:{
		Actual:0
		Provisioned:3221225472
		Total: 0
		Initial: null
	}
}


As these values show, the actual size of the snapshot disks is reported as 196 KiB, and when I try to upload any snapshot disk (even those larger than 1.1 GiB),
the system allows the full disk content to be uploaded and, when the upload finishes, updates the actual and total sizes. When the restore ends, I get these values:

Disk: {
	Name:985a02a0-0752-420a-a2fb-a3e89d1ac3bc_restoreBacula,
	Id:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
	Image:8ef6f502-d00f-4fa5-b407-2cdc0e876045,
	Format:COW,
	WipeAfterDelete: false,
	Shareable: false,
	Status: OK,
	Sparse: true,
	Sizes:{
		Initial:null,
		Actual:1318850560,
		Total:4593303552,
		Provisioned: 3221225472
	}
}

DiskSnapshot: {
	Id: 87c4cb08-502a-4937-bbba-6d455395e990,
	Disk:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
	Snapshot:dd257190-ef50-4990-a61f-40d84e2b08e9,
	Sizes:{
		Actual:22282240
		Provisioned:3221225472
		Total: 0
		Initial: null
	}
}
DiskSnapshot: {
	Id: e7d6f1fb-ab37-45f8-8e23-5964ed194581,
	Disk:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
	Snapshot:b9f3f3d3-710e-4ce2-b05f-563227f5ec04,
	Sizes:{
		Actual:32374784
		Provisioned:3221225472
		Total: 0
		Initial: null
	}
}
DiskSnapshot: {
	Id: 9c80aead-88de-48f9-be7e-0fde917e7f55,
	Disk:f2a89f5a-d8ac-4ea4-b17c-591cd86855e9,
	Snapshot:f1d7382d-4427-4e21-9544-1b3cd85f23ae,
	Sizes:{
		Actual:3219795968
		Provisioned:3221225472
		Total: 0
		Initial: null
	}
}

There is one disk snapshot fewer because I merge the last snapshot at the end of the restoration, but the values show that the actual size of each DiskSnapshot has changed.

Could you help me, please? Is there a procedure different from the one described in https://ovirt.org/develop/release-management/features/storage/backup-restore-disk-snapshots.html for working with iSCSI storage, or is this simply a bug?



How reproducible:
Back up and restore a VM with snapshots on iSCSI storage, using the Java SDK.

Steps to Reproduce:
1. Follow the steps described in https://ovirt.org/develop/release-management/features/storage/backup-restore-disk-snapshots.html to back up a VM located on iSCSI storage that contains snapshots larger than 1.1 GiB.
2. Try to restore it using the instructions in that document.

Actual results:
Uploads of disk snapshot images larger than 1.1 GiB do not succeed and the restore process fails.

Expected results:
Restored VM.

Comment 1 francisco.garcia 2019-05-08 07:20:27 UTC
Created attachment 1565498 [details]
qemu-imgs-NFS.txt

qemu-img info for the disk snapshots in the NFS restore

Comment 2 francisco.garcia 2019-05-08 07:22:38 UTC
Created attachment 1565499 [details]
qemu-imgs-iSCSI.txt

qemu-img info for the disk snapshots in the iSCSI restore

Comment 3 francisco.garcia 2019-05-20 09:03:48 UTC
This bug has been discussed on the oVirt mailing lists:
https://lists.ovirt.org/archives/list/devel@ovirt.org/thread/OZWADBWM2RWJ2DXKPEM2PIQNR2OBIVBJ/

I also filed another bug with the same problem, using the Python scripts from oVirt's repository:
https://bugzilla.redhat.com/show_bug.cgi?id=1707372

Finally, it seems that it is indeed a bug.

I am available to provide any additional information that may be needed.

Comment 4 francisco.garcia 2019-05-20 09:10:20 UTC
Created attachment 1571246 [details]
Logs Ovirt

I am attaching the ovirt-engine log, vdsm log, and imageio log.

Comment 6 shani 2020-05-10 08:32:42 UTC
Summing up the conclusions from bug https://bugzilla.redhat.com/1707372:
Using the Python SDK:

ovirt-engine-4.3.9.4

In the snapshot creation log [1], the size is properly updated (imageSizeInBytes='3221225472'),
so the upload_disk_snapshots.py script was modified to update the snapshot size [2].
The script used was also uploaded.

Need to reproduce and check whether the size of the disk snapshots is indeed updated correctly in the DB.

[1]
2020-05-04 16:04:24,169+02 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.CreateVolumeVDSCommand] (default task-15) [746965a4-4f32-4481-ba81-b20a8d23d8f4] START, CreateVolumeVDSCommand( CreateVolumeVDSCommandParameters:{storagePoolId='7971ebcc-78b1-11ea-b17d-0800271130bc', ignoreFailoverLimit='false', storageDomainId='ecf8e071-59f4-4ff8-acfa-8fabee62e8d8', imageGroupId='5a214325-d8eb-4066-bef3-521484ba49db', imageSizeInBytes='3221225472', volumeFormat='COW', newImageId='5a214325-d8eb-4066-bef3-521484ba49db', imageType='Sparse', newImageDescription='', imageInitialSizeInBytes='0', imageId='4ca9464f-0aaf-471b-a1a4-d29de319a661', sourceImageGroupId='5a214325-d8eb-4066-bef3-521484ba49db'}), log id: 71815b92

[2]
    # Add the new snapshot:
    snapshot = snapshots_service.add(
        types.Snapshot(
            description=description,
            disk_attachments=[
                types.DiskAttachment(
                    disk=types.Disk(
                        id=disk_id,
                        image_id=image_id

                        # sizes for every snapshot should work here

                    )   
                )   
            ]   
        ),  
    )
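
For illustration, the placeholder comment above could be filled in roughly as follows. This is a hypothetical sketch only: provisioned_size and initial_size are standard types.Disk attributes, but whether CreateSnapshot honors them per snapshot is exactly what needed to be verified; provisioned_size and actual_size here stand for values read from the backed-up DiskSnapshot.

    # Hypothetical sketch only - not a confirmed fix. The sizes would come
    # from the original DiskSnapshot being restored.
    snapshot = snapshots_service.add(
        types.Snapshot(
            description=description,
            disk_attachments=[
                types.DiskAttachment(
                    disk=types.Disk(
                        id=disk_id,
                        image_id=image_id,
                        provisioned_size=provisioned_size,
                        initial_size=actual_size,
                    )
                )
            ]
        ),
    )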

Comment 7 shani 2020-05-10 08:34:32 UTC
Created attachment 1686965 [details]
Used python script for upload

Comment 8 shani 2020-05-10 08:35:17 UTC
Created attachment 1686966 [details]
Log for the python script used

Comment 9 shani 2020-05-10 08:35:51 UTC
Created attachment 1686967 [details]
engine log

Comment 10 shani 2020-05-10 08:36:14 UTC
Created attachment 1686968 [details]
deamon log

Comment 11 shani 2020-05-10 08:36:39 UTC
Created attachment 1686969 [details]
proxy log

Comment 12 shani 2020-05-10 08:37:07 UTC
Created attachment 1686970 [details]
cdsm log

Comment 13 Michal Skrivanek 2020-05-20 10:22:29 UTC
done, right?

Comment 14 shani 2020-05-20 12:24:50 UTC
A patch was merged: https://gerrit.ovirt.org/#/c/108991/

Comment 15 Evelina Shames 2020-06-01 18:00:36 UTC
After working with Daniel, verified with the following flow:
1. Create two images in the 'disks/<VM's disk_id>' directory:
     qemu-img create -f raw a8af296c-6444-40ae-8bb5-8ce3fc5015fe 2G
     qemu-img info a8af296c-6444-40ae-8bb5-8ce3fc5015fe
     qemu-img create -f raw a3fe93c4-3fc0-476b-87db-8f165a347154.raw 2G
     dd if=/dev/urandom of=a3fe93c4-3fc0-476b-87db-8f165a347154.raw bs=1M count=2000
     qemu-img convert -O qcow2 a3fe93c4-3fc0-476b-87db-8f165a347154.raw a3fe93c4-3fc0-476b-87db-8f165a347154
     qemu-img rebase -f qcow2 -b a8af296c-6444-40ae-8bb5-8ce3fc5015fe -F raw a3fe93c4-3fc0-476b-87db-8f165a347154 -u
     qemu-img info a3fe93c4-3fc0-476b-87db-8f165a347154

2. Create a VM with a disk and download its OVF using the Python SDK:
    python download_vm_ovf.py

3. Upload the images from step (1) using the Python SDK:
     python upload_disk_snapshots.py

Verified on ovirt-engine-4.4.1.1-0.5.el8ev.noarch

Comment 16 francisco.garcia 2020-06-02 09:14:56 UTC
Hello Evelina,

That is very good news!

I will try to test it this week. :)

Thanks for the effort, both you and Daniel, Nir, Shani & comp. =D



Regards,
Fran

Comment 17 Sandro Bonazzola 2020-07-08 08:27:46 UTC
This bugzilla is included in oVirt 4.4.1 release, published on July 8th 2020.

Since the problem described in this bug report should be resolved in oVirt 4.4.1 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

