Bug 1400296 - REST: add disk request ignores "actual_size" attribute, causing disk uploads using the API to be limited to 1GB
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.0.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.1.0-beta
Assignee: Liron Aravot
QA Contact: Natalie Gavrielov
Blocks: 1337077 1416346
 
Reported: 2016-11-30 20:27 UTC by Natalie Gavrielov
Modified: 2017-02-15 14:47 UTC
CC List: 5 users

Clones: 1416346
Last Closed: 2017-02-15 14:47:06 UTC
oVirt Team: Storage
rule-engine: ovirt-4.1+


Attachments
script used, engine.log, vdsm logs (11.54 MB, application/x-gzip)
2016-11-30 20:27 UTC, Natalie Gavrielov
logs: engine, vdsm (1.90 MB, application/x-gzip)
2016-12-04 21:48 UTC, Natalie Gavrielov
logs, script (1.63 MB, application/x-gzip)
2017-02-01 17:27 UTC, Natalie Gavrielov


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 69509 0 master MERGED adding initialSize to Disk 2017-01-04 11:30:29 UTC
oVirt gerrit 69510 0 master MERGED changing Disk.initialSize to a long 2017-01-04 11:33:59 UTC
oVirt gerrit 69519 0 master MERGED api: DisksResource - initial size support on add disk 2017-01-04 14:58:47 UTC
oVirt gerrit 69564 0 model_4.1 MERGED adding initialSize to Disk 2017-01-04 11:30:46 UTC
oVirt gerrit 69565 0 model_4.0 MERGED adding initialSize to Disk 2017-01-04 11:33:20 UTC
oVirt gerrit 69566 0 metamodel_1.1 MERGED changing Disk.initialSize to a long 2017-01-04 11:34:22 UTC
oVirt gerrit 69567 0 metamodel_1.0 MERGED changing Disk.initialSize to a long 2017-01-04 11:36:29 UTC
oVirt gerrit 69598 0 master MERGED restapi: Update to model 4.2.1 and metamodel 1.2.0 2017-01-04 14:22:57 UTC
oVirt gerrit 69605 0 ovirt-engine-4.1 MERGED api: DisksResource - initial size support on add disk 2017-01-05 09:49:53 UTC
oVirt gerrit 69617 0 ovirt-engine-4.1 MERGED restapi: Update to model 4.1.26 and metamodel 1.1.10 2017-01-04 18:00:13 UTC
oVirt gerrit 69640 0 ovirt-engine-4.0 MERGED restapi: Update to model 4.0.40 and metamodel 1.0.23 2017-01-05 13:34:19 UTC
oVirt gerrit 69647 0 ovirt-engine-4.0 MERGED api: DisksResource - initial size support on add disk 2017-01-08 19:03:18 UTC

Description Natalie Gavrielov 2016-11-30 20:27:13 UTC
Created attachment 1226500 [details]
script used, engine.log, vdsm logs

Description of problem:
Unable to upload qcow2 images to thin-provisioned disks using the Python SDK.

Version-Release number of selected component (if applicable):
ovirt-engine-4.0.6-0.1.el7ev.noarch
vdsm-4.18.17-1.el7ev.x86_64

How reproducible:
100%

Steps to Reproduce:

Performed this test twice, for two storage domain types: iscsi and nfs.
1. Create a *thin-provision* disk on a storage domain.
2. Upload a qcow2 (0.10) disk using the Python SDK (script attached).

Actual results:
* Disk uploads to the iscsi storage domain pause when reaching 33% of the
  uploaded disk, which is approximately 1GB.
* Disk uploads to nfs fail when reaching 100% transfer, with the following
  error displayed by the webadmin:
VDSM green-vdsb.qa.lab.tlv.redhat.com command failed: Image verification failed: u"reason=Volume's format specified by QEMU is qcow2, while the format specified in VDSM metadata is raw"

Expected results:
The uploads should succeed.

Additional info:
Raw uploads succeed.
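
For orientation, here is a minimal sketch of the kind of SDK flow involved. This is not the attached script; it assumes the oVirt Python SDK v4 (ovirtsdk4), and the URL, credentials, and names are illustrative placeholders:

# Illustrative sketch only, not the attached script. Assumes ovirtsdk4 (the
# oVirt Python SDK v4); URL, credentials, and names are placeholders.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,  # lab setup; pass ca_file=... in production
)

# 1. Create a thin-provisioned (sparse, COW) disk on the target domain.
disks_service = connection.system_service().disks_service()
disk = disks_service.add(
    types.Disk(
        name='upload-target',
        format=types.DiskFormat.COW,
        sparse=True,
        provisioned_size=5 * 1024**3,  # 5GB, in bytes
        storage_domains=[types.StorageDomain(name='my-domain')],
    )
)

# 2. Start an image transfer for the disk; the qcow2 bytes are then PUT to
# the transfer's proxy URL (that part is elided here; see the attached script).
transfers_service = connection.system_service().image_transfers_service()
transfer = transfers_service.add(
    types.ImageTransfer(image=types.Image(id=disk.id))
)

connection.close()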

Comment 1 Amit Aviram 2016-12-01 09:15:39 UTC
(In reply to Natalie Gavrielov from comment #0)
> * Disk uploads to the iscsi storage domain pause when reaching 33% of the
>   uploaded disk, which is approximately 1GB.

The created disk's *actual* size should be at least the size of the file you are uploading, so you need to create one using REST, specify it there, and use that disk as the target.

> * Disk uploads to nfs fail when reaching 100% transfer, with the following
>   error displayed by the webadmin:
> VDSM green-vdsb.qa.lab.tlv.redhat.com command failed: Image verification
> failed: u"reason=Volume's format specified by QEMU is qcow2, while the
> format specified in VDSM metadata is raw"

Is the target disk raw or qcow? Is the file you are uploading raw or qcow?

Comment 2 Amit Aviram 2016-12-01 09:39:34 UTC
(In reply to Amit Aviram from comment #1)
> The created disk's *actual* size should be at least the size of the file you
> are uploading, so you need to create one using REST, specify it there, and
> use that disk as the target.

Actually, that might not currently be an option; we will check.

Comment 3 Amit Aviram 2016-12-01 12:40:15 UTC
Please disregard the above comments. I tested the environment a bit, and there are a couple of issues that made your tests fail:

(In reply to Natalie Gavrielov from comment #0)
> * Disk uploads to the iscsi storage domain pause when reaching 33% of the
>   uploaded disk, which is approximately 1GB.

For uploading to block storage, the full size of the target disk must be allocated up front. This cannot be accomplished via the webadmin, and doing it through REST doesn't work, which is the real bug here.
To be precise, adding a new disk via REST on a block storage domain with an "actual_size" attribute, e.g.:

<disk>
   <name>rest_disk</name>
   <actual_size>2147483648</actual_size>
   <storage_domain> ... </storage_domain>
</disk>

will return:

<disk>
   <name>rest_disk</name>
   <actual_size>0</actual_size>
   <storage_domain> ... </storage_domain>
</disk>

This leaves the target disk limited to its default initial size of about 1GB, preventing uploads of bigger disks.
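
For completeness, the same ignored-attribute behavior can be expressed through the Python SDK v4 mapping of this request; a sketch, with connection details and names as illustrative placeholders:

# Sketch reproducing the report via ovirtsdk4; placeholders throughout.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)
disks_service = connection.system_service().disks_service()

# format/sparse/provisioned_size added so the request is otherwise valid;
# the point is that actual_size is silently dropped by the engine.
created = disks_service.add(
    types.Disk(
        name='rest_disk',
        format=types.DiskFormat.COW,
        sparse=True,
        actual_size=2 * 1024**3,  # 2147483648 bytes, as in the XML above
        provisioned_size=2 * 1024**3,
        storage_domains=[types.StorageDomain(name='my-block-domain')],
    )
)
print(created.actual_size)  # 0 - the requested value was not honored
connection.close()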

> * Disk uploads to nfs fail when reaching 100% transfer, with the following
>   error displayed by the webadmin:
> VDSM green-vdsb.qa.lab.tlv.redhat.com command failed: Image verification
> failed: u"reason=Volume's format specified by QEMU is qcow2, while the
> format specified in VDSM metadata is raw"

Via the webadmin, selecting "thin provision" on file storage won't make your disk a QCOW disk, as file storage is thin anyway; so from the webadmin you can't create a COW disk on NFS. From REST, however, you are able to do that, so in this case I don't think we have a real issue. Even if it is something to consider, it belongs in another bug.

To conclude, the resolution for this bug should be fixing the REST add disk request so the attribute is actually effective.

Comment 4 Raz Tamir 2016-12-01 13:02:17 UTC
(In reply to Amit Aviram from comment #3)
> To be precise, adding a new disk via REST on a block storage domain with an
> "actual_size" attribute, e.g.:
> 
> <disk>
>    <name>rest_disk</name>
>    <actual_size>2147483648</actual_size>
>    <storage_domain> ... </storage_domain>
> </disk>

I think the only attribute we can actually set is provisioned_size; setting sparse to false along with it will get us a preallocated disk with a size equal to provisioned_size.

> This leaves the target disk limited to its default initial size of about
> 1GB, preventing uploads of bigger disks.

Why can't we ask for lvextend on a thin-provisioned disk?

Comment 5 Amit Aviram 2016-12-01 13:49:40 UTC
(In reply to Raz Tamir from comment #4)
> I think the only attribute we can actually set is provisioned_size; setting
> sparse to false along with it will get us a preallocated disk with a size
> equal to provisioned_size.

That will work if you want a preallocated disk, but we also need to support thin: if a user has a QCOW disk of 1.3 GB, he should be able to add a thin disk with an actual size of 1.3 GB, then use the disk in a VM as a thin disk, which can be extended later on as the VM grows.

> Why can't we ask for lvextend on a thin-provisioned disk?

It is much simpler to have the user specify the size he needs for the disk he's uploading. We can file an RFE to do that automatically by calling lvextend, but it will take time, and it is generally just an improvement, not an urgent issue IMO.

Comment 6 Amit Aviram 2016-12-01 14:09:56 UTC
(In reply to Amit Aviram from comment #5)
> That will work if you want a preallocated disk, but we also need to support
> thin: if a user has a QCOW disk of 1.3 GB, he should be able to add a thin
> disk with an actual size of 1.3 GB, then use the disk in a VM as a thin
> disk, which can be extended later on as the VM grows.

After testing it again, provisioned_size with sparse set to true seems to allocate the size, so this can solve the problem and makes sense in the API. Natalie, can you try uploading after adding a disk via REST with a provisioned size? (It is in bytes; e.g. for 4GB you need to specify <provisioned_size>4294967296</provisioned_size>.)

Thanks Raz
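
As an aside, the byte values quoted in this thread are plain GiB-to-bytes arithmetic; a quick sanity check in Python:

# Sanity check of the byte arithmetic quoted in this thread (GiB -> bytes).
def gib_to_bytes(gib):
    return gib * 1024**3

assert gib_to_bytes(4) == 4294967296  # value quoted above for a 4GB disk
assert gib_to_bytes(5) == 5368709120  # value used in comment 7 below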


Comment 7 Natalie Gavrielov 2016-12-04 21:44:09 UTC
I performed the same tests as in comment #0 (storage types nfs and iscsi), only this time I created the disks using REST, with the following config:
<disk>
	<alias>...</alias>
	<format>cow</format>
	<sparse>true</sparse>  
	<actual_size>5368709120</actual_size>
	<provisioned_size>5368709120</provisioned_size>
	<storage_domains>
		<storage_domain>
			<name>...</name>
		</storage_domain>
	</storage_domains>
</disk>

For nfs storage the upload finished successfully; the disk status in the UI shows OK.
For the iscsi storage type, the upload gets paused when reaching 21% (in this case a 5GB disk is being uploaded), meaning it is stuck again at 1GB.

Comment 8 Natalie Gavrielov 2016-12-04 21:48:44 UTC
Created attachment 1227946 [details]
logs: engine, vdsm

Comment 9 Liron Aravot 2017-01-02 14:54:06 UTC
The issue is that the REST API currently doesn't support the actual_size attribute for added disks, so a sparse disk is always created with the default initial size. This causes the upload to a block domain to fail when the uploaded size is bigger than the default initial size.

This bug is about adding support for an initial size on disk creation through the API.

Comment 10 Liron Aravot 2017-01-03 17:46:16 UTC
Aside from adding the initial size support, cloning this into two other issues:

a. The engine needs to verify that there is sufficient space (or make a best effort to do so) before the upload takes place, in order to fail the operation before data is uploaded.

b. RFE: to ease the process of uploading to an existing disk, the engine could extend the allocated image size of sparse disks on block domains by itself before the upload.

Comment 11 Sandro Bonazzola 2017-01-25 07:54:09 UTC
4.0.6 has been the last oVirt 4.0 release, please re-target this bug.

Comment 12 Natalie Gavrielov 2017-02-01 17:27:48 UTC
Created attachment 1246754 [details]
logs, script

I tried the same scenario for an iscsi storage domain:
1. Create a thin-provisioned image on a storage domain (tried once using the Python SDK and once using the REST API, which is basically the same).
2. Upload a qcow2 (1.1) disk using the Python SDK (script attached).

This is the body used for the REST API disk creation:
<disk>
    <alias>upload-qcow</alias>
    <format>cow</format>
    <sparse>true</sparse>
    <actual_size>8589934592</actual_size>
    <provisioned_size>8589934592</provisioned_size>
    <storage_domains>
        <storage_domain>
            <name>iscsi-2</name>
        </storage_domain>
    </storage_domains>
</disk>

Result: failure when reaching 1GB uploaded.
Builds used:
(rhv-4.1.0-11)
rhevm-4.1.0.3-0.1.el7.noarch
ovirt-imageio-common-1.0.0-0.el7ev.noarch
ovirt-imageio-proxy-1.0.0-0.el7ev.noarch
vdsm-4.19.4-7.gitc2f748c.el7.centos.x86_64
ovirt-imageio-daemon-1.0.0-1.el7.noarch

Comment 13 Liron Aravot 2017-02-02 07:12:44 UTC
The added attribute is "initial_size", not "actual_size"; sorry for not adding a comment specifying that here.
Natalie, please try verifying with that instead of "actual_size".
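
For reference, a minimal sketch of what the fixed request looks like through the Python SDK v4, assuming an SDK build generated from the updated model (the initial_size mapping); URL, credentials, and sizes are illustrative, with the names taken from comment 12:

# Sketch of the fixed flow via ovirtsdk4, assuming an SDK generated from the
# updated model (initial_size support); placeholders throughout.
import ovirtsdk4 as sdk
import ovirtsdk4.types as types

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

# Create a thin COW disk whose initial allocation is large enough to hold
# the qcow2 file being uploaded (both sizes are in bytes).
disk = connection.system_service().disks_service().add(
    types.Disk(
        name='upload-qcow',
        format=types.DiskFormat.COW,
        sparse=True,
        initial_size=8 * 1024**3,      # the attribute added by this fix
        provisioned_size=8 * 1024**3,
        storage_domains=[types.StorageDomain(name='iscsi-2')],
    )
)
connection.close()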

Comment 14 Natalie Gavrielov 2017-02-06 16:33:08 UTC
Performed the same scenario described in comment 12, replacing "actual_size" with "initial_size", and it works now (uploads of more than 1GB succeed).

Verified, using builds:
ovirt-engine-4.1.0.3-0.1.el7.noarch
ovirt-imageio-proxy-1.0.0-0.el7ev.noarch
vdsm-4.19.4-15.git5b39b63.el7.centos.x86_64
ovirt-imageio-common-1.0.0-1.el7.noarch
ovirt-imageio-daemon-1.0.0-1.el7.noarch

