Bug 1120768 - [python-sdk] Creating a new disk as first attached directly to vm's disk collection fails
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: oVirt
Classification: Retired
Component: ovirt-engine-api
Version: 3.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.5.0
Assignee: Daniel Erez
QA Contact: Raz Tamir
URL:
Whiteboard: storage
Depends On:
Blocks:
 
Reported: 2014-07-17 15:35 UTC by Gadi Ickowicz
Modified: 2016-02-10 19:44 UTC
CC: 11 users

Fixed In Version: ovirt-3.5.0_rc1
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2014-10-17 12:30:35 UTC
oVirt Team: Storage


Attachments (Terms of Use)
engine logs and python-sdk log (170.00 KB, application/gzip)
2014-07-17 15:35 UTC, Gadi Ickowicz


Links
System ID Priority Status Summary Last Updated
oVirt gerrit 30633 master MERGED core: add disk - storage domain not specified error message Never
oVirt gerrit 30785 ovirt-engine-3.5 MERGED core: add disk - storage domain not specified error message Never

Description Gadi Ickowicz 2014-07-17 15:35:29 UTC
Created attachment 918776 [details]
engine logs and python-sdk log

Description of problem:
When attempting to create a new disk by adding it directly through a specific VM's disks subcollection, if it is the first disk in the VM's disks collection the request fails with this error message:

Traceback (most recent call last):
  File "<input>", line 1, in <module>
  File "/usr/lib/python2.6/site-packages/ovirtsdk/infrastructure/brokers.py", line 20226, in add
    headers={"Expect":expect, "Correlation-Id":correlation_id}
  File "/usr/lib/python2.6/site-packages/ovirtsdk/infrastructure/proxy.py", line 88, in add
    return self.request('POST', url, body, headers)
  File "/usr/lib/python2.6/site-packages/ovirtsdk/infrastructure/proxy.py", line 118, in request
    persistent_auth=self._persistent_auth)
  File "/usr/lib/python2.6/site-packages/ovirtsdk/infrastructure/proxy.py", line 146, in __doRequest
    persistent_auth=persistent_auth
  File "/usr/lib/python2.6/site-packages/ovirtsdk/web/connection.py", line 134, in doRequest
    raise RequestError, response
RequestError: 
status: 400
reason: Bad Request
detail: Cannot add Virtual Machine Disk. Storage Domain doesn't exist.



Version-Release number of selected component (if applicable):
ovirt-engine-restapi-3.5.0-0.0.master.20140715172116.git4687dc1.el6.noarch
ovirt-engine-3.5.0-0.0.master.20140715172116.git4687dc1.el6.noarch

How reproducible:
100%

Steps to Reproduce:
1. Create new vm
2. Add new disk from vm's disks collection (see attached python sdk log file for exact operation)


Actual results:
Fails to create disk

Expected results:
Should succeed

Additional info:

Comment 1 Juan Hernández 2014-07-17 16:38:22 UTC
The operation to add a disk to a VM doesn't receive a single storage domain parameter, but a list of storage domain objects. Currently you are calling it as follows:

  vm = api.vms.get("myvm")
  sd = api.storagedomains.get("mysd")
  disk = ovirtsdk.xml.params.Disk(storage_domain=sd, ...)
  vm.disks.add(disk)

But the storage_domain parameter will just be ignored by the server. The correct way to invoke the operation is to use the storage_domains parameter containing a list of the storage domains to use:

  vm = api.vms.get("myvm")
  sd = api.storagedomains.get("mysd")
  sds = ovirtsdk.xml.params.StorageDomains(storage_domain=[sd])
  disk = ovirtsdk.xml.params.Disk(storage_domains=sds, ...)
  vm.disks.add(disk)

This does work correctly.
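The real serialization is handled inside the SDK, but the structural difference between the two parameter shapes can be sketched with the standard library (the element names follow the XML shown later in this bug; the helper functions are illustrative, not SDK API):

```python
import xml.etree.ElementTree as ET

SD_ID = "117264d9-ff43-4fa8-9bd3-94a16fee7f2b"  # example storage domain id

def disk_xml_with_collection(sd_id):
    """Body the working call produces: the storage domain wrapped in a
    <storage_domains> collection element, which the backend reads."""
    disk = ET.Element("disk")
    sds = ET.SubElement(disk, "storage_domains")
    ET.SubElement(sds, "storage_domain", id=sd_id)
    ET.SubElement(disk, "alias").text = "disk1"
    return disk

def disk_xml_with_single_element(sd_id):
    """Body the failing call produces: a bare <storage_domain> element,
    which the backend ignores when adding a disk."""
    disk = ET.Element("disk")
    ET.SubElement(disk, "storage_domain", id=sd_id)
    ET.SubElement(disk, "alias").text = "disk1"
    return disk

# Only the collection form carries the element the add-disk command reads.
working = disk_xml_with_collection(SD_ID)
broken = disk_xml_with_single_element(SD_ID)
print(working.find("storage_domains/storage_domain") is not None)  # True
print(broken.find("storage_domains/storage_domain") is not None)   # False
```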

In addition, the error message isn't accurate. This is probably because the backend is using the empty GUID as the default value for the storage domain id, and thus it isn't finding it. It may be worth checking that.

Allon, can you confirm/reject this analysis?

Comment 2 Allon Mureinik 2014-07-17 19:18:15 UTC
Daniel, can you please answer Juan?

Comment 3 Gadi Ickowicz 2014-07-20 07:35:23 UTC
(In reply to Juan Hernández from comment #1)
> The operation to add a disk to the a VM doesn't receive a single storage
> domain parameter, but a list of storage domain objects. Currently you are
> calling it as follows:
> 
>   vm = api.vms.get("myvm")
>   sd = api.storagedomains.get("mysd")
>   disk = ovirtsdk.xml.params.Disk(storage_domain=sd, ...)
>   vm.disks.add(disk)
> 
> But the storage_domain parameter will just be ignored by the server. The
> correct way to invoke the operation is to use the storage_domains parameter
> containing a list of the storage domains to use:
> 
>   vm = api.vms.get("myvm")
>   sd = api.storagedomains.get("mysd")
>   sds = ovirtsdk.xml.params.StorageDomains(storage_domain=[sd])
>   disk = ovirtsdk.xml.params.Disk(storage_domains=sds, ...)
>   vm.disks.add(disk)
> 
> This does work correctly.
This may be true, yet invoking the command with the "wrong" syntax on a VM that already has a disk works. The same request fails in the REST API directly as well:
Request:
<disk>
<storage_domain id="117264d9-ff43-4fa8-9bd3-94a16fee7f2b"/>
<name>d1</name>
<alias>d1</alias>
<size>2147483648</size>
<interface>virtio</interface>
<format>cow</format>
<sparse>True</sparse>
</disk>

Response:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<fault>
    <reason>Operation Failed</reason>
    <detail>[Cannot add Virtual Machine Disk. Storage Domain doesn't exist.]</detail>
</fault>


After adding a disk to the VM and retrying the same request, it succeeds:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<disk href="/api/vms/8a6fd7bc-3113-4888-aa7b-f0204d99a5b7/disks/a80890d4-1244-4ad3-9d43-886ab2f44aeb" id="a80890d4-1244-4ad3-9d43-886ab2f44aeb">
    <actions>
        <link href="/api/vms/8a6fd7bc-3113-4888-aa7b-f0204d99a5b7/disks/a80890d4-1244-4ad3-9d43-886ab2f44aeb/deactivate" rel="deactivate"/>
        <link href="/api/vms/8a6fd7bc-3113-4888-aa7b-f0204d99a5b7/disks/a80890d4-1244-4ad3-9d43-886ab2f44aeb/activate" rel="activate"/>
        <link href="/api/vms/8a6fd7bc-3113-4888-aa7b-f0204d99a5b7/disks/a80890d4-1244-4ad3-9d43-886ab2f44aeb/export" rel="export"/>
        <link href="/api/vms/8a6fd7bc-3113-4888-aa7b-f0204d99a5b7/disks/a80890d4-1244-4ad3-9d43-886ab2f44aeb/move" rel="move"/>
    </actions>
    <name>d1</name>
    <creation_status>
        <state>pending</state>
    </creation_status>
    <link href="/api/vms/8a6fd7bc-3113-4888-aa7b-f0204d99a5b7/disks/a80890d4-1244-4ad3-9d43-886ab2f44aeb/permissions" rel="permissions"/>
    <link href="/api/vms/8a6fd7bc-3113-4888-aa7b-f0204d99a5b7/disks/a80890d4-1244-4ad3-9d43-886ab2f44aeb/statistics" rel="statistics"/>
    <link href="/api/vms/8a6fd7bc-3113-4888-aa7b-f0204d99a5b7/disks/a80890d4-1244-4ad3-9d43-886ab2f44aeb/creation_status/b1d2a498-82fe-4ae6-8b07-32d10c95f396" rel="creation_status"/>
    <vm href="/api/vms/8a6fd7bc-3113-4888-aa7b-f0204d99a5b7" id="8a6fd7bc-3113-4888-aa7b-f0204d99a5b7"/>
    <alias>d1</alias>
    <image_id>821661fe-0132-420a-bbb8-7a0b05c12cdd</image_id>
    <storage_domains>
        <storage_domain id="117264d9-ff43-4fa8-9bd3-94a16fee7f2b"/>
    </storage_domains>
    <size>2147483648</size>
    <provisioned_size>2147483648</provisioned_size>
    <actual_size>0</actual_size>
    <status>
        <state>locked</state>
    </status>
    <interface>virtio</interface>
    <format>cow</format>
    <sparse>true</sparse>
    <bootable>false</bootable>
    <shareable>false</shareable>
    <wipe_after_delete>false</wipe_after_delete>
    <propagate_errors>false</propagate_errors>
    <active>true</active>
    <read_only>false</read_only>
</disk>

> 
> In addition, the error message isn't accurate. This is probably because the
> backend is using the empty GUID as the default value for the storage domain
> id, and thus it isn't finding it. It may be worth checking that.
> 
> Allon, can you confirm/reject this analysis?

Comment 4 Daniel Erez 2014-07-20 11:02:14 UTC
It seems that the issue here is in the request syntax, i.e. the add-disk request should include a storage domains collection (e.g. [1]). Since the collection is not a mandatory argument (to support attaching a disk in the same action) and a 'storage_domain' element exists in the Disk schema (for back-linking), defining it in the request doesn't trigger a 'Request syntactically incorrect' error.

@Juan - is there any other means of preventing this incorrect structure? Or perhaps we should support both, i.e. a single element or a collection (though in that case we would need to handle a conflict between them)?

[1]
<disk>
  <storage_domains>
    <storage_domain id="{id}"/>
  </storage_domains>
  <alias>disk1</alias>
  <size>2147483648</size>
  <interface>virtio</interface>
  <format>cow</format>
  <sparse>True</sparse>
</disk>
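Since the server cannot reject the ignored element without breaking compatibility, a caller could pre-validate the request body before sending it. A minimal stdlib sketch (`has_storage_domain` is a hypothetical helper, not part of the SDK or the API):

```python
import xml.etree.ElementTree as ET

def has_storage_domain(disk_xml):
    """Pre-flight check: does the add-disk body name a storage domain
    inside the <storage_domains> collection (the form the backend reads)?"""
    root = ET.fromstring(disk_xml)
    return root.find("storage_domains/storage_domain[@id]") is not None

# The correct form from [1], with a concrete id filled in for illustration:
good = ('<disk><storage_domains>'
        '<storage_domain id="117264d9-ff43-4fa8-9bd3-94a16fee7f2b"/>'
        '</storage_domains><alias>disk1</alias></disk>')
# The failing form from comment 3: a bare <storage_domain> element.
bad = ('<disk><storage_domain id="117264d9-ff43-4fa8-9bd3-94a16fee7f2b"/>'
       '<alias>d1</alias></disk>')

print(has_storage_domain(good), has_storage_domain(bad))  # True False
```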

Comment 5 Juan Hernández 2014-07-21 10:02:07 UTC
Thanks Daniel. It is clear now that the request sent is incorrect, so we shouldn't change anything there. We should probably close as NOTABUG. However, the error message isn't clear: it should say that the storage domain hasn't been specified, not that it doesn't exist. Can we fix that?

Comment 6 Daniel Erez 2014-07-21 11:52:28 UTC
(In reply to Juan Hernández from comment #5)
> Thanks Daniel. It is clear now that the request sent is incorrect, so we
> shouldn't change anything there. We should probably close as NOTABUG.
> However, the error message isn't clear: it should say that the storage
> domain hasn't been specified, not that it doesn't exist. Can we fix that?

It's either that or validating the syntax on the REST API side. I've noticed that the python sdk uses a specific entity for vm disks (called 'VMDisk'), but I couldn't find it in the API; do we have this notion in the API, or is it specific to the SDK? If we do, then perhaps we could reject the storage_domain tag when creating a VMDisk at the REST API level?

Comment 7 Juan Hernández 2014-07-21 12:14:18 UTC
In general we can't validate things that are ignored, as that would end up rejecting many requests that are otherwise correct, thus breaking backwards compatibility.

We do have the VMDisk notion in the RESTAPI: it is the BackendVmDiskResource class. But the correspondence doesn't hold in all cases.

We could add a validation in the RESTAPI, but I'm not sure if this is possible, as the decision whether the storage domain is needed or not is made deep in the backend. I'd suggest just being a bit more explicit in the resulting error message. Instead of returning this:

  detail: Cannot add Virtual Machine Disk. Storage Domain doesn't exist.

We could return this:

  detail: Cannot add Virtual Machine Disk. Storage Domain hasn't been specified.

I think that we are currently generating that error message because the storage domain id is initialized to the all zeros UUID in the backend, and when we check we find that it doesn't exist. But what actually happens in this case is that it hasn't been provided.
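The zero-GUID behavior described above can be sketched in plain Python (the backend is Java; the names and the set of existing domains here are purely illustrative of the logic, including the fixed message wording):

```python
import uuid

NIL_UUID = uuid.UUID(int=0)  # the "empty GUID" / all-zeros default
existing_domains = {uuid.UUID("117264d9-ff43-4fa8-9bd3-94a16fee7f2b")}

def validate_add_disk(storage_domain_id):
    """Return an error message, or None when the add-disk request is valid."""
    # When the request omits the <storage_domains> collection, the id stays
    # at the all-zeros default. Checking for that default first lets the
    # message blame the missing parameter instead of a nonexistent domain.
    if storage_domain_id == NIL_UUID:
        return "Cannot add Virtual Machine Disk. Storage Domain hasn't been specified."
    if storage_domain_id not in existing_domains:
        return "Cannot add Virtual Machine Disk. Storage Domain doesn't exist."
    return None

print(validate_add_disk(NIL_UUID))
```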

Comment 8 Raz Tamir 2014-08-11 08:04:34 UTC
Verified - ovirt-engine-3.5.0-0.0.master.20140804172041.git23b558e.el6.noarch

Response message:

<fault>
<reason>Operation Failed</reason>
<detail>
[Cannot add Virtual Machine Disk. Storage Domain hasn't been specified.]
</detail>
</fault>

Comment 10 Sandro Bonazzola 2014-10-17 12:30:35 UTC
oVirt 3.5 has been released and should include the fix for this issue.

