Bug 1416346 - REST: add disk request doesn't support initial size, causing disk uploads using the API to be limited to 1GB
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.0.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ovirt-4.0.7
Assignee: Liron Aravot
QA Contact: Natalie Gavrielov
URL:
Whiteboard:
Depends On: 1400296
Blocks: 1337077
 
Reported: 2017-01-25 10:38 UTC by Tal Nisan
Modified: 2017-03-16 15:32 UTC
13 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
This release adds support for specifying the initial size through the API when creating a thin provisioned disk on block storage.
Clone Of: 1400296
Environment:
Last Closed: 2017-03-16 15:32:23 UTC
oVirt Team: Storage
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:0542 0 normal SHIPPED_LIVE Red Hat Virtualization Manager 4.0.7 2017-03-16 19:25:04 UTC
oVirt gerrit 69509 0 None None None 2017-01-25 10:38:17 UTC
oVirt gerrit 69510 0 None None None 2017-01-25 10:38:17 UTC
oVirt gerrit 69519 0 None None None 2017-01-25 10:38:17 UTC
oVirt gerrit 69564 0 None None None 2017-01-25 10:38:17 UTC
oVirt gerrit 69565 0 None None None 2017-01-25 10:38:17 UTC
oVirt gerrit 69566 0 None None None 2017-01-25 10:38:17 UTC
oVirt gerrit 69567 0 None None None 2017-01-25 10:38:17 UTC
oVirt gerrit 69598 0 None None None 2017-01-25 10:38:17 UTC
oVirt gerrit 69605 0 None None None 2017-01-25 10:38:17 UTC
oVirt gerrit 69617 0 None None None 2017-01-25 10:38:17 UTC
oVirt gerrit 69640 0 None None None 2017-01-25 10:38:17 UTC
oVirt gerrit 69647 0 None None None 2017-01-25 10:38:17 UTC

Description Tal Nisan 2017-01-25 10:38:18 UTC
+++ This bug was initially created as a clone of Bug #1400296 +++

Description of problem:
Unable to upload qcow2 images using python sdk to thin-provision disks.

Version-Release number of selected component (if applicable):
ovirt-engine-4.0.6-0.1.el7ev.noarch
vdsm-4.18.17-1.el7ev.x86_64

How reproducible:
100%

Steps to Reproduce:

Performed this test twice, for two storage domain types: iscsi and nfs
1. Create a *thin-provision* disk on a storage domain.
2. Upload a qcow2 (0.10) disk using python sdk (script attached)

Actual results:
* Disks upload to iscsi storage domain - paused when reaching 33% of the   
  uploaded disk - which is approximately 1GB
* Disks uploaded to nfs fail when reaching 100% transfer, with the following error displayed by webadmin:
VDSM green-vdsb.qa.lab.tlv.redhat.com command failed: Image verification failed: u"reason=Volume's format specified by QEMU is qcow2, while the format specified in VDSM metadata is raw"

Expected results:
For the upload to succeed

Additional info:
raw uploads succeed

--- Additional comment from Amit Aviram on 2016-12-01 11:15:39 IST ---

(In reply to Natalie Gavrielov from comment #0)
> Created attachment 1226500 [details]
> script used, engine.log, vdsm logs
> 
> Description of problem:
> Unable to upload qcow2 images using python sdk to thin-provision disks.
> 
> Version-Release number of selected component (if applicable):
> ovirt-engine-4.0.6-0.1.el7ev.noarch
> vdsm-4.18.17-1.el7ev.x86_64
> 
> How reproducible:
> 100%
> 
> Steps to Reproduce:
> 
> Performed this test twice, for two storage domain types: iscsi and nfs
> 1. Create a *thin-provision* disk on a storage domain.
> 2. Upload a qcow2 (0.10) disk using python sdk (script attached)
> 
> Actual results:
> * Disks upload to iscsi storage domain - paused when reaching 33% of the   
>   uploaded disk - which is approximately 1GB

The created disk's *actual* size should be at least the size of the file you are uploading, so you need to create one using REST and specify it, and use this disk as a target.

> * Disks uploaded to nfs fail when reaching 100% transfer, with the following
> error displayed by webadmin:
> VDSM green-vdsb.qa.lab.tlv.redhat.com command failed: Image verification
> failed: u"reason=Volume's format specified by QEMU is qcow2, while the
> format specified in VDSM metadata is raw"

Is the target disk raw or qcow? Is the file you are uploading raw or qcow?

> 
> Expected results:
> For the upload to succeed
> 
> Additional info:
> raw uploads succeed

--- Additional comment from Amit Aviram on 2016-12-01 11:39:34 IST ---

(In reply to Amit Aviram from comment #1)
> (In reply to Natalie Gavrielov from comment #0)
> > Created attachment 1226500 [details]
> > script used, engine.log, vdsm logs
> > 
> > Description of problem:
> > Unable to upload qcow2 images using python sdk to thin-provision disks.
> > 
> > Version-Release number of selected component (if applicable):
> > ovirt-engine-4.0.6-0.1.el7ev.noarch
> > vdsm-4.18.17-1.el7ev.x86_64
> > 
> > How reproducible:
> > 100%
> > 
> > Steps to Reproduce:
> > 
> > Performed this test twice, for two storage domain types: iscsi and nfs
> > 1. Create a *thin-provision* disk on a storage domain.
> > 2. Upload a qcow2 (0.10) disk using python sdk (script attached)
> > 
> > Actual results:
> > * Disks upload to iscsi storage domain - paused when reaching 33% of the   
> >   uploaded disk - which is approximately 1GB
> 
> The created disk's *actual* size should be at least the size of the file you
> are uploading, so you need to create one using REST and specify it, and use
> this disk as a target.

Actually, that might not currently be an option; we will check.

> 
> > * Disks uploaded to nfs fail when reaching 100% transfer, with the following
> > error displayed by webadmin:
> > VDSM green-vdsb.qa.lab.tlv.redhat.com command failed: Image verification
> > failed: u"reason=Volume's format specified by QEMU is qcow2, while the
> > format specified in VDSM metadata is raw"
> 
> Is the target disk is raw or qcow? is the file you are uploading is raw or
> qcow?
> 
> > 
> > Expected results:
> > For the upload to succeed
> > 
> > Additional info:
> > raw uploads succeed

--- Additional comment from Amit Aviram on 2016-12-01 14:40:15 IST ---

Please disregard the above comments. I tested the env a little bit, and there are a couple of issues that made your tests fail:

(In reply to Natalie Gavrielov from comment #0)
> Created attachment 1226500 [details]
> script used, engine.log, vdsm logs
> 
> Description of problem:
> Unable to upload qcow2 images using python sdk to thin-provision disks.
> 
> Version-Release number of selected component (if applicable):
> ovirt-engine-4.0.6-0.1.el7ev.noarch
> vdsm-4.18.17-1.el7ev.x86_64
> 
> How reproducible:
> 100%
> 
> Steps to Reproduce:
> 
> Performed this test twice, for two storage domain types: iscsi and nfs
> 1. Create a *thin-provision* disk on a storage domain.
> 2. Upload a qcow2 (0.10) disk using python sdk (script attached)
> 
> Actual results:
> * Disks upload to iscsi storage domain - paused when reaching 33% of the   
>   uploaded disk - which is approximately 1GB

For uploading to block storage, the full size of the target disk should be allocated. This cannot be accomplished via the webadmin, and doing it through REST doesn't work, which is the real bug here.
To be precise, adding a new disk via REST, stating a block storage domain, and adding an "actual_size" attribute, e.g.:

<disk>
   <name>rest_disk</name>
   <actual_size>2147483648</actual_size>
   <storage_domain> ... </storage_domain>
</disk>

will return:

<disk>
   <name>rest_disk</name>
   <actual_size>0</actual_size>
   <storage_domain> ... </storage_domain>
</disk>

This limits the target disk to 1GB, preventing the upload of bigger disks.
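Assuming the two XML payloads above are representative (trimmed here to the relevant elements, since the storage_domain content is elided in the original), a client can detect the silently ignored attribute by comparing the actual_size it sent with the one the engine returns. A minimal stdlib sketch:

```python
import xml.etree.ElementTree as ET

# Request and response bodies taken from the examples above,
# trimmed to the relevant elements.
REQUEST = """
<disk>
   <name>rest_disk</name>
   <actual_size>2147483648</actual_size>
</disk>
"""

RESPONSE = """
<disk>
   <name>rest_disk</name>
   <actual_size>0</actual_size>
</disk>
"""

def actual_size(xml_text):
    # Pull the <actual_size> element out of a <disk> document.
    return int(ET.fromstring(xml_text).findtext("actual_size"))

requested = actual_size(REQUEST)
returned = actual_size(RESPONSE)
# On an affected engine the attribute is silently dropped:
assert requested == 2147483648 and returned == 0
print("actual_size honored:", returned == requested)  # → False
```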

> * Disks uploaded to nfs fail when reaching 100% transfer, with the following
> error displayed by webadmin:
> VDSM green-vdsb.qa.lab.tlv.redhat.com command failed: Image verification
> failed: u"reason=Volume's format specified by QEMU is qcow2, while the
> format specified in VDSM metadata is raw"

Via the webadmin, selecting "thin provision" on file storage won't make your disk a QCOW disk, as file storage is thin anyway, so from the webadmin you can't create a COW disk on NFS. From REST, however, you are able to do that, so in this case I don't think we have a real issue; even if it is something to consider, it belongs in another bug.

To conclude, the resolution for this bug should be fixing the REST add-disk request so that the attribute is actually effective.
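For reference, the add-disk call in question is a plain POST of the XML body shown above. A minimal stdlib sketch that only constructs the request (the host name is hypothetical, the /ovirt-engine/api/disks path follows the standard oVirt v4 REST layout; nothing is sent and authentication is omitted):

```python
import urllib.request

# Hypothetical engine host; the request is only built here, not sent.
ENGINE = "https://engine.example.com/ovirt-engine/api"
BODY = b"""<disk>
   <name>rest_disk</name>
   <actual_size>2147483648</actual_size>
</disk>"""

req = urllib.request.Request(
    ENGINE + "/disks",
    data=BODY,
    headers={"Content-Type": "application/xml",
             "Accept": "application/xml"},
    method="POST",
)

# Sanity-check the request shape without touching the network.
assert req.get_method() == "POST"
assert req.full_url.endswith("/disks")
```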

--- Additional comment from Raz Tamir on 2016-12-01 15:02:17 IST ---

(In reply to Amit Aviram from comment #3)
> Please disregard the above comments. I tested the env a little bit, and
> there are a couple of issues that made your tests fail:
> 
> (In reply to Natalie Gavrielov from comment #0)
> > Created attachment 1226500 [details]
> > script used, engine.log, vdsm logs
> > 
> > Description of problem:
> > Unable to upload qcow2 images using python sdk to thin-provision disks.
> > 
> > Version-Release number of selected component (if applicable):
> > ovirt-engine-4.0.6-0.1.el7ev.noarch
> > vdsm-4.18.17-1.el7ev.x86_64
> > 
> > How reproducible:
> > 100%
> > 
> > Steps to Reproduce:
> > 
> > Performed this test twice, for two storage domain types: iscsi and nfs
> > 1. Create a *thin-provision* disk on a storage domain.
> > 2. Upload a qcow2 (0.10) disk using python sdk (script attached)
> > 
> > Actual results:
> > * Disks upload to iscsi storage domain - paused when reaching 33% of the   
> >   uploaded disk - which is approximately 1GB
> 
> For uploading to a block storage, the full size of the target disk should be
> allocated. This cannot be accomplished via the webadmin, and doing it from
> the REST doesn't work- which is the real bug here. 
> To be precise, adding a new disk via REST, stating its domain is a block
> storage, and adding an "actual_size" attributes, E.g:
> 
> <disk>
>    <name>rest_disk</name>
>    <actual_size>2147483648</actual_size>
>    <storage_domain> ... </storage_domain>
> </disk>
I think that the only attribute we can actually set is provisioned_size; setting sparse to false in combination gives a preallocated disk with a size equal to provisioned_size.

> 
> will return:
> 
> <disk>
>    <name>rest_disk</name>
>    <actual_size>0</actual_size>
>    <storage_domain> ... </storage_domain>
> </disk>
> 
> This makes the target disk to be limited to 1GB. preventing an upload of
> bigger disks.

Why can't we ask for lvextend in a thin-provisioned disk?

> 
> > * Disks uploaded to nfs fail when reaching 100% transfer, with the following
> > error displayed by webadmin:
> > VDSM green-vdsb.qa.lab.tlv.redhat.com command failed: Image verification
> > failed: u"reason=Volume's format specified by QEMU is qcow2, while the
> > format specified in VDSM metadata is raw"
> 
> Via the webadmin, selecting a "thin provision" on file storage won't make
> your disk a QCOW disk, as file storages are thin anyway. so from the
> webadmin you can't create a COW on NFS. From the REST however, you are able
> to do that. so in this case I don't think we have a real issue. even if it
> is something to consider, it belongs in another bug.
> 
> To conclude, Resolution for this bug should be just fixing the REST's add
> disk request to actually be effective.

--- Additional comment from Amit Aviram on 2016-12-01 15:49:40 IST ---

(In reply to Raz Tamir from comment #4)
> (In reply to Amit Aviram from comment #3)
> > Please disregard the above comments. I tested the env a little bit, and
> > there are a couple of issues that made your tests fail:
> > 
> > (In reply to Natalie Gavrielov from comment #0)
> > > Created attachment 1226500 [details]
> > > script used, engine.log, vdsm logs
> > > 
> > > Description of problem:
> > > Unable to upload qcow2 images using python sdk to thin-provision disks.
> > > 
> > > Version-Release number of selected component (if applicable):
> > > ovirt-engine-4.0.6-0.1.el7ev.noarch
> > > vdsm-4.18.17-1.el7ev.x86_64
> > > 
> > > How reproducible:
> > > 100%
> > > 
> > > Steps to Reproduce:
> > > 
> > > Performed this test twice, for two storage domain types: iscsi and nfs
> > > 1. Create a *thin-provision* disk on a storage domain.
> > > 2. Upload a qcow2 (0.10) disk using python sdk (script attached)
> > > 
> > > Actual results:
> > > * Disks upload to iscsi storage domain - paused when reaching 33% of the   
> > >   uploaded disk - which is approximately 1GB
> > 
> > For uploading to a block storage, the full size of the target disk should be
> > allocated. This cannot be accomplished via the webadmin, and doing it from
> > the REST doesn't work- which is the real bug here. 
> > To be precise, adding a new disk via REST, stating its domain is a block
> > storage, and adding an "actual_size" attributes, E.g:
> > 
> > <disk>
> >    <name>rest_disk</name>
> >    <actual_size>2147483648</actual_size>
> >    <storage_domain> ... </storage_domain>
> > </disk>
> I think that the only attribute that we can actually set is the
> provisioned_size and set the sparse to false and by that combination we will
> get preallocated disk with size equal to provisioned_zise

That will work in case you want a preallocated disk, but we also need to support thin: if a user has a QCOW disk of 1.3 GB, he should be able to add a thin disk with an actual size of 1.3 GB, then use the disk in a VM as a thin disk, which can be extended later on as the VM grows.

> 
> > 
> > will return:
> > 
> > <disk>
> >    <name>rest_disk</name>
> >    <actual_size>0</actual_size>
> >    <storage_domain> ... </storage_domain>
> > </disk>
> > 
> > This makes the target disk to be limited to 1GB. preventing an upload of
> > bigger disks.
> 
> Why can't we ask for lvextend in a thin-provisioned disk?

It is much simpler to have the user specify the size he needs for the disk he's uploading. We can have an RFE to do that automatically by calling lvextend, but it will take time and is generally just an improvement, not an urgent issue IMO.

> 
> > 
> > > * Disks uploaded to nfs fail when reaching 100% transfer, with the following
> > > error displayed by webadmin:
> > > VDSM green-vdsb.qa.lab.tlv.redhat.com command failed: Image verification
> > > failed: u"reason=Volume's format specified by QEMU is qcow2, while the
> > > format specified in VDSM metadata is raw"
> > 
> > Via the webadmin, selecting a "thin provision" on file storage won't make
> > your disk a QCOW disk, as file storages are thin anyway. so from the
> > webadmin you can't create a COW on NFS. From the REST however, you are able
> > to do that. so in this case I don't think we have a real issue. even if it
> > is something to consider, it belongs in another bug.
> > 
> > To conclude, Resolution for this bug should be just fixing the REST's add
> > disk request to actually be effective.

--- Additional comment from Amit Aviram on 2016-12-01 16:09:56 IST ---

(In reply to Amit Aviram from comment #5)
> (In reply to Raz Tamir from comment #4)
> > (In reply to Amit Aviram from comment #3)
> > > Please disregard the above comments. I tested the env a little bit, and
> > > there are a couple of issues that made your tests fail:
> > > 
> > > (In reply to Natalie Gavrielov from comment #0)
> > > > Created attachment 1226500 [details]
> > > > script used, engine.log, vdsm logs
> > > > 
> > > > Description of problem:
> > > > Unable to upload qcow2 images using python sdk to thin-provision disks.
> > > > 
> > > > Version-Release number of selected component (if applicable):
> > > > ovirt-engine-4.0.6-0.1.el7ev.noarch
> > > > vdsm-4.18.17-1.el7ev.x86_64
> > > > 
> > > > How reproducible:
> > > > 100%
> > > > 
> > > > Steps to Reproduce:
> > > > 
> > > > Performed this test twice, for two storage domain types: iscsi and nfs
> > > > 1. Create a *thin-provision* disk on a storage domain.
> > > > 2. Upload a qcow2 (0.10) disk using python sdk (script attached)
> > > > 
> > > > Actual results:
> > > > * Disks upload to iscsi storage domain - paused when reaching 33% of the   
> > > >   uploaded disk - which is approximately 1GB
> > > 
> > > For uploading to a block storage, the full size of the target disk should be
> > > allocated. This cannot be accomplished via the webadmin, and doing it from
> > > the REST doesn't work- which is the real bug here. 
> > > To be precise, adding a new disk via REST, stating its domain is a block
> > > storage, and adding an "actual_size" attributes, E.g:
> > > 
> > > <disk>
> > >    <name>rest_disk</name>
> > >    <actual_size>2147483648</actual_size>
> > >    <storage_domain> ... </storage_domain>
> > > </disk>
> > I think that the only attribute that we can actually set is the
> > provisioned_size and set the sparse to false and by that combination we will
> > get preallocated disk with size equal to provisioned_zise
> 
> That will work in case you want a preallocated disk, but we also need to
> support thin: if a user has a QCOW disk of 1.3 GB, he is supposed to be able
> to add a thin disk with an actual size of 1.3 GB, then use the disk in a VM
> as a thin disk- which can be extended later on when the VM grows.

After testing it again, provisioned_size with sparse set to true seems to allocate the size, so this can solve the problem and makes sense in the API. Natalie, can you try uploading after adding a disk via REST with a provisioned size? (It is in bytes; e.g., for a 4GB disk you need to specify <provisioned_size>4294967296</provisioned_size>.)
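As a quick sanity check for the byte arithmetic above (provisioned_size is expressed in bytes, and 4GB here means 4 GiB):

```python
def gib_to_bytes(gib):
    # provisioned_size in the oVirt REST API is specified in bytes.
    return gib * 1024 ** 3

# 4 GiB matches the value quoted in the comment above.
assert gib_to_bytes(4) == 4294967296
# 2 GiB matches the actual_size used in the earlier request example.
assert gib_to_bytes(2) == 2147483648
print(gib_to_bytes(4))  # → 4294967296
```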

Thanks Raz

> 
> > 
> > > 
> > > will return:
> > > 
> > > <disk>
> > >    <name>rest_disk</name>
> > >    <actual_size>0</actual_size>
> > >    <storage_domain> ... </storage_domain>
> > > </disk>
> > > 
> > > This makes the target disk to be limited to 1GB. preventing an upload of
> > > bigger disks.
> > 
> > Why can't we ask for lvextend in a thin-provisioned disk?
> 
> It is much simple to have the user specify the size he needs for the disk
> he's uploading. we can have an RFE to do that automatically, calling
> lvextend but it will take time and it is generally just an improvement, not
> an urgent issue IMO.
> 
> > 
> > > 
> > > > * Disks uploaded to nfs fail when reaching 100% transfer, with the following
> > > > error displayed by webadmin:
> > > > VDSM green-vdsb.qa.lab.tlv.redhat.com command failed: Image verification
> > > > failed: u"reason=Volume's format specified by QEMU is qcow2, while the
> > > > format specified in VDSM metadata is raw"
> > > 
> > > Via the webadmin, selecting a "thin provision" on file storage won't make
> > > your disk a QCOW disk, as file storages are thin anyway. so from the
> > > webadmin you can't create a COW on NFS. From the REST however, you are able
> > > to do that. so in this case I don't think we have a real issue. even if it
> > > is something to consider, it belongs in another bug.
> > > 
> > > To conclude, Resolution for this bug should be just fixing the REST's add
> > > disk request to actually be effective.

--- Additional comment from Natalie Gavrielov on 2016-12-04 23:44:09 IST ---

I performed the same tests as comment #0 (storage types: nfs and iscsi).
Only this time, I created the disks using REST, with the following config:
<disk>
	<alias>...</alias>
	<format>cow</format>
	<sparse>true</sparse>  
	<actual_size>5368709120</actual_size>
	<provisioned_size>5368709120</provisioned_size>
	<storage_domains>
		<storage_domain>
			<name>...</name>
		</storage_domain>
	</storage_domains>
</disk>

For the nfs storage domain, the upload finished successfully - disk status in the UI shows OK.
For the iscsi storage domain, the upload gets paused when reaching 21% (in this case a 5GB disk is uploaded), meaning it's stuck again at 1GB.
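The 21% figure is consistent with the ~1GB cap. A quick arithmetic check, assuming the 5GB disk is the 5368709120-byte disk from the request above:

```python
DISK_BYTES = 5368709120          # 5 GiB, as specified in the request body
transferred = int(DISK_BYTES * 0.21)
# 21% of 5 GiB is roughly 1.05 GiB, i.e. the same ~1GB
# default-initial-size cap seen at 33% of the smaller disk earlier.
assert abs(transferred - 1024 ** 3) < 0.1 * 1024 ** 3
print(transferred)  # → 1127428915 bytes, about 1.05 GiB
```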

--- Additional comment from Natalie Gavrielov on 2016-12-04 23:48 IST ---



--- Additional comment from Liron Aravot on 2017-01-02 16:54:06 IST ---

The issue is that the REST API currently doesn't support the actual_size attribute for added disks (so when creating a sparse disk, it is always created with the default initial size), which causes uploads to a block domain to fail when the uploaded size is bigger than the default initial size.

This bug is about adding support for initial size on disk creation using the api.

--- Additional comment from Liron Aravot on 2017-01-03 19:46:16 IST ---

Aside from adding the initial size support, cloning this into two other issues:

a. The engine needs to verify that there is sufficient space (or make a best effort to do so) before the upload takes place, in order to fail the operation before data is uploaded.

b. RFE - To ease the process of uploading to an existing disk, the engine could extend the allocated image size of sparse disks on block domains by itself before the upload.

--- Additional comment from Sandro Bonazzola on 2017-01-25 09:54:09 IST ---

4.0.6 has been the last oVirt 4.0 release, please re-target this bug.

Comment 1 Natalie Gavrielov 2017-02-07 17:54:00 UTC
Scenario performed:
1. Create a thin-provisioned image on a storage domain, but with initial_size = disk size in bytes.
2. Upload a qcow2 (0.10) disk using python sdk.

This is the body used for the REST API disk creation:
<disk>
	<alias>upload-qcow</alias>
	<format>cow</format>
	<sparse>true</sparse>
	<initial_size>1923940352</initial_size>
	<provisioned_size>8589934592</provisioned_size>
	<storage_domains>
		<storage_domain>
			<name>iscsi_0</name>
		</storage_domain>
	</storage_domains>
</disk>
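The body above can also be generated programmatically. A minimal stdlib sketch that reproduces the same element structure (the values and names are the ones used in this verification; the helper function name is illustrative):

```python
import xml.etree.ElementTree as ET

def build_disk_xml(alias, fmt, sparse, initial_size, provisioned_size, domain):
    # Assemble the <disk> request body for the add-disk REST call.
    # Sizes are in bytes, matching the API's expectations.
    disk = ET.Element("disk")
    ET.SubElement(disk, "alias").text = alias
    ET.SubElement(disk, "format").text = fmt
    ET.SubElement(disk, "sparse").text = "true" if sparse else "false"
    ET.SubElement(disk, "initial_size").text = str(initial_size)
    ET.SubElement(disk, "provisioned_size").text = str(provisioned_size)
    sds = ET.SubElement(disk, "storage_domains")
    sd = ET.SubElement(sds, "storage_domain")
    ET.SubElement(sd, "name").text = domain
    return ET.tostring(disk, encoding="unicode")

xml_body = build_disk_xml("upload-qcow", "cow", True,
                          1923940352, 8589934592, "iscsi_0")
assert "<initial_size>1923940352</initial_size>" in xml_body
```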

Verified using:
ovirt-engine-4.0.7-0.1.el7ev.noarch
ovirt-imageio-common-0.3.0-0.el7ev.noarch
ovirt-imageio-proxy-0.4.0-0.el7ev.noarch
vdsm-4.18.22-1.el7ev.x86_64
ovirt-imageio-daemon-0.4.0-0.el7ev.noarch

Comment 3 errata-xmlrpc 2017-03-16 15:32:23 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHBA-2017-0542.html

