Created attachment 1226500 [details]
script used, engine.log, vdsm logs

Description of problem:
Unable to upload qcow2 images using the python sdk to thin-provision disks.

Version-Release number of selected component (if applicable):
ovirt-engine-4.0.6-0.1.el7ev.noarch
vdsm-4.18.17-1.el7ev.x86_64

How reproducible:
100%

Steps to Reproduce:

Performed this test twice, for two storage domain types: iscsi and nfs
1. Create a *thin-provision* disk on a storage domain.
2. Upload a qcow2 (0.10) disk using the python sdk (script attached)

Actual results:
* Disks uploaded to the iscsi storage domain pause when reaching 33% of the
  uploaded disk, which is approximately 1GB.
* Disks uploaded to nfs fail when reaching 100% transfer, with the following
  error displayed by webadmin:
  VDSM green-vdsb.qa.lab.tlv.redhat.com command failed: Image verification
  failed: u"reason=Volume's format specified by QEMU is qcow2, while the
  format specified in VDSM metadata is raw"

Expected results:
The upload should succeed.

Additional info:
raw uploads succeed
(In reply to Natalie Gavrielov from comment #0)
> Actual results:
> * Disks upload to iscsi storage domain - paused when reaching 33% of the
> uploaded disk - which is approximately 1GB

The created disk's *actual* size should be at least the size of the file you
are uploading, so you need to create one using REST, specify that size, and
use this disk as the target.

> * Disks uploaded to nfs fail when reaching 100% transfer, with the following
> error displayed by webadmin:
> VDSM green-vdsb.qa.lab.tlv.redhat.com command failed: Image verification
> failed: u"reason=Volume's format specified by QEMU is qcow2, while the
> format specified in VDSM metadata is raw"

Is the target disk raw or qcow? Is the file you are uploading raw or qcow?
(In reply to Amit Aviram from comment #1)
> The created disk's *actual* size should be at least the size of the file you
> are uploading, so you need to create one using REST and specify it, and use
> this disk as a target.

Currently that might not actually be an option; we will check.
Please disregard the above comments. I tested the env a bit, and there are a
couple of issues that made your tests fail:

(In reply to Natalie Gavrielov from comment #0)
> Actual results:
> * Disks upload to iscsi storage domain - paused when reaching 33% of the
> uploaded disk - which is approximately 1GB

For uploading to a block storage domain, the full size of the target disk
should be allocated. This cannot be accomplished via the webadmin, and doing
it from REST doesn't work, which is the real bug here.
To be precise, adding a new disk via REST, stating its domain is a block
storage domain, and adding an "actual_size" attribute, e.g.:

<disk>
  <name>rest_disk</name>
  <actual_size>2147483648</actual_size>
  <storage_domain> ... </storage_domain>
</disk>

will return:

<disk>
  <name>rest_disk</name>
  <actual_size>0</actual_size>
  <storage_domain> ... </storage_domain>
</disk>

This limits the target disk to 1GB, preventing the upload of bigger disks.

> * Disks uploaded to nfs fail when reaching 100% transfer, with the following
> error displayed by webadmin:
> VDSM green-vdsb.qa.lab.tlv.redhat.com command failed: Image verification
> failed: u"reason=Volume's format specified by QEMU is qcow2, while the
> format specified in VDSM metadata is raw"

Via the webadmin, selecting "thin provision" on file storage won't make your
disk a QCOW disk, as file storage is thin anyway. So from the webadmin you
can't create a COW disk on NFS. From REST, however, you are able to do that,
so in this case I don't think we have a real issue; even if it is something
to consider, it belongs in another bug.

To conclude: the resolution for this bug should be fixing the REST add-disk
request to actually be effective.
(In reply to Amit Aviram from comment #3)
> To be precise, adding a new disk via REST, stating its domain is a block
> storage, and adding an "actual_size" attributes, E.g:
>
> <disk>
> <name>rest_disk</name>
> <actual_size>2147483648</actual_size>
> <storage_domain> ... </storage_domain>
> </disk>

I think the only attribute we can actually set is provisioned_size; setting
sparse to false as well gives a preallocated disk with a size equal to
provisioned_size.

> This makes the target disk to be limited to 1GB. preventing an upload of
> bigger disks.

Why can't we ask for lvextend on a thin-provisioned disk?
(In reply to Raz Tamir from comment #4)
> I think that the only attribute that we can actually set is the
> provisioned_size and set the sparse to false and by that combination we will
> get preallocated disk with size equal to provisioned_zise

That will work in case you want a preallocated disk, but we also need to
support thin: if a user has a QCOW disk of 1.3 GB, he should be able to add
a thin disk with an actual size of 1.3 GB, then use the disk in a VM as a
thin disk, which can be extended later on as the VM grows.

> Why can't we ask for lvextend in a thin-provisioned disk?

It is much simpler to have the user specify the size he needs for the disk
he's uploading. We can have an RFE to do that automatically, calling
lvextend, but that will take time and is generally just an improvement, not
an urgent issue IMO.
(In reply to Amit Aviram from comment #5)
> That will work in case you want a preallocated disk, but we also need to
> support thin: if a user has a QCOW disk of 1.3 GB, he is supposed to be able
> to add a thin disk with an actual size of 1.3 GB, then use the disk in a VM
> as a thin disk- which can be extended later on when the VM grows.

After testing it again, provisioned_size with sparse set to true seems to
allocate the size, so this can solve the problem, and it makes sense in the
API.

Natalie, can you try uploading after adding a disk via REST with a
provisioned size? (It is in bytes, e.g. for 4GB you need to specify
<provisioned_size>4294967296</provisioned_size>.)

Thanks Raz
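The byte values quoted in these REST bodies are plain GiB-to-byte
conversions. A minimal sketch of that arithmetic (helper name is
illustrative, not part of the attached script):

```python
# Convert a size in GiB to the byte value used by the REST API's size
# attributes (e.g. provisioned_size). 1 GiB = 1024**3 bytes.
def gib_to_bytes(gib):
    return gib * 1024 ** 3

# 4 GiB matches the <provisioned_size> value quoted above.
print(gib_to_bytes(4))  # 4294967296
```

The other sizes appearing in this bug line up the same way: 2147483648 is
2 GiB and 5368709120 is 5 GiB.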
I performed the same tests as comment #0 (storage types nfs and iscsi), only
this time I created the disks using REST, with the following config:

<disk>
  <alias>...</alias>
  <format>cow</format>
  <sparse>true</sparse>
  <actual_size>5368709120</actual_size>
  <provisioned_size>5368709120</provisioned_size>
  <storage_domains>
    <storage_domain>
      <name>...</name>
    </storage_domain>
  </storage_domains>
</disk>

For nfs storage the upload finished successfully - disk status in the UI
shows OK.
For the iscsi storage type, the upload gets paused when reaching 21% (in
this case a 5GB disk is uploaded), meaning it's stuck again at 1GB.
Created attachment 1227946 [details] logs: engine, vdsm
The issue is that currently the REST API doesn't support the actual_size
attribute for added disks (so when creating the disk as sparse, it will
always be created with the default initial size), which causes the upload to
fail on a block domain when the uploaded size is bigger than the default
initial size. This bug is about adding support for initial size on disk
creation using the API.
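The failure condition described here can be sketched as a simple predicate.
This is a hypothetical helper, with the 1 GiB default an assumed value
matching the pauses reported in this bug, not a constant taken from the
code:

```python
# Sketch of the failure mode: on a block domain a sparse volume gets a
# default initial allocation (assumed 1 GiB here, matching the reported
# pauses at ~1GB), and an upload stalls once the transferred bytes exceed
# that allocation.
DEFAULT_INITIAL_SIZE = 1 * 1024 ** 3  # assumed default allocation, bytes

def upload_would_stall(upload_size, initial_size=DEFAULT_INITIAL_SIZE):
    """Return True if an upload of upload_size bytes exceeds the volume's
    allocated initial size and would therefore pause mid-transfer."""
    return upload_size > initial_size

# A ~1.3 GiB qcow2 exceeds the 1 GiB default, so the upload pauses.
print(upload_would_stall(int(1.3 * 1024 ** 3)))  # True
```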
Aside from adding the initial size support, cloning this to two other issues:
a. The engine needs to verify that there is sufficient space (or make a best
   effort to do so) before the upload takes place, in order to fail the
   operation before data is uploaded.
b. RFE: to ease uploading to an existing disk, the engine could extend the
   allocated image size of sparse disks on block domains by itself before
   the upload.
4.0.6 has been the last oVirt 4.0 release, please re-target this bug.
Created attachment 1246754 [details]
logs, script

I tried the same scenario for an iscsi storage domain:
1. Create a thin-provision image on a storage domain (tried this once using
   the python sdk and again using the REST API, which is basically the same).
2. Upload a qcow2 (1.1) disk using the python sdk (script attached).

This is the body for the REST API disk creation:

<disk>
  <alias>upload-qcow</alias>
  <format>cow</format>
  <sparse>true</sparse>
  <actual_size>8589934592</actual_size>
  <provisioned_size>8589934592</provisioned_size>
  <storage_domains>
    <storage_domain>
      <name>iscsi-2</name>
    </storage_domain>
  </storage_domains>
</disk>

Result: failure when reaching a 1GB upload.

Used: (rhv-4.1.0-11)
rhevm-4.1.0.3-0.1.el7.noarch
ovirt-imageio-common-1.0.0-0.el7ev.noarch
ovirt-imageio-proxy-1.0.0-0.el7ev.noarch
vdsm-4.19.4-7.gitc2f748c.el7.centos.x86_64
ovirt-imageio-daemon-1.0.0-1.el7.noarch
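The "qcow2 (0.10)" and "qcow2 (1.1)" labels used in this thread are QEMU
compat levels, which correspond to header version 2 and 3 in the qcow2
format. A minimal sketch of reading those fields from the fixed part of a
qcow2 header (per the public qcow2 layout: big-endian magic at offset 0,
version at offset 4, guest-visible size at offset 24); the demo header is
fabricated for illustration, not one of the attached images:

```python
import struct

QCOW2_MAGIC = b"QFI\xfb"  # magic bytes at the start of every qcow2 image

def parse_qcow2_header(data):
    """Parse magic, version and virtual size from a qcow2 header.
    Version 2 corresponds to compat 0.10, version 3 to compat 1.1."""
    if data[:4] != QCOW2_MAGIC:
        raise ValueError("not a qcow2 image")
    version = struct.unpack_from(">I", data, 4)[0]
    virtual_size = struct.unpack_from(">Q", data, 24)[0]  # bytes
    return version, virtual_size

# Fabricate a minimal 32-byte header for an 8 GiB, version-3 (compat 1.1)
# image; a real header has many more fields after the size.
header = QCOW2_MAGIC + struct.pack(">I", 3) + b"\x00" * 16 \
    + struct.pack(">Q", 8589934592)
print(parse_qcow2_header(header))  # (3, 8589934592)
```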
The added attribute is "initial_size", not "actual_size" - sorry for not
adding a comment specifying that here. Natalie, please try to verify with
that instead of "actual_size".
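For clarity, the disk-creation body from the earlier test with initial_size
substituted for actual_size can be assembled like this. The element names
are taken from the bodies quoted in this bug; the helper itself and its
values are illustrative, not part of the attached script:

```python
import xml.etree.ElementTree as ET

def build_disk_body(alias, size_bytes, domain_name):
    """Assemble the REST add-disk body used in this bug, with the
    initial_size attribute that replaces the ineffective actual_size."""
    disk = ET.Element("disk")
    ET.SubElement(disk, "alias").text = alias
    ET.SubElement(disk, "format").text = "cow"
    ET.SubElement(disk, "sparse").text = "true"
    ET.SubElement(disk, "initial_size").text = str(size_bytes)
    ET.SubElement(disk, "provisioned_size").text = str(size_bytes)
    sds = ET.SubElement(disk, "storage_domains")
    sd = ET.SubElement(sds, "storage_domain")
    ET.SubElement(sd, "name").text = domain_name
    return ET.tostring(disk, encoding="unicode")

body = build_disk_body("upload-qcow", 8589934592, "iscsi-2")
print(body)
```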
Performed the same scenario described in comment 12, replacing "actual_size"
with "initial_size", and it works now (uploads more than 1GB).

Verified, using builds:
ovirt-engine-4.1.0.3-0.1.el7.noarch
ovirt-imageio-proxy-1.0.0-0.el7ev.noarch
vdsm-4.19.4-15.git5b39b63.el7.centos.x86_64
ovirt-imageio-common-1.0.0-1.el7.noarch
ovirt-imageio-daemon-1.0.0-1.el7.noarch