Bug 1430447 - RESTAPI - after amend, a different qcow version is seen for the same snapshot disk from 2 different API paths
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.1.1.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.1.2
Target Release: 4.1.2
Assignee: Maor
QA Contact: Avihai
URL:
Whiteboard:
Depends On: 1445950
Blocks: 1432493
 
Reported: 2017-03-08 15:39 UTC by Avihai
Modified: 2017-05-01 13:59 UTC (History)
CC List: 5 users

Fixed In Version:
Clone Of:
Environment:
Last Closed: 2017-04-30 12:17:32 UTC
oVirt Team: Storage
Embargoed:
rule-engine: ovirt-4.1+


Attachments
engine & vdsm logs (507.81 KB, application/x-gzip)
2017-03-08 15:39 UTC, Avihai


Links
oVirt gerrit 74008 (master, MERGED): core: Remove compat from OVF. Last updated 2017-03-26 16:20:45 UTC
oVirt gerrit 74641 (ovirt-engine-4.1, MERGED): core: Remove compat from OVF. Last updated 2017-03-27 09:05:52 UTC

Description Avihai 2017-03-08 15:39:58 UTC
Created attachment 1261324 [details]
engine & vdsm logs

Description of problem:
After amend, a different qcow version is shown for the same snapshot disk from 2 different API paths.


Version-Release number of selected component (if applicable):
Engine:
ovirt-engine-4.1.1.3-0.1.el7.noarch

VDSM:
4.19.6-1


How reproducible:
100%


Steps to Reproduce:
1. Create a data center + cluster + storage domain with compatibility version 4.0.

2. Upgrade the DC + cluster.

3. Create a VM + 1 disk + 1 snapshot on the storage domain (still with the old compatibility version).

4. Amend the disk - succeeds (all snapshot disks should also be amended).

5. Check via the REST API the qcow version of the snapshot disks on 2 paths:

PATH 1 - from the VM path you see 'qcow2_v2' for the snapshot disk:

full path:
https://storage-ge-04.scl.lab.tlv.redhat.com/ovirt-engine/api/vms/73f21062-3fa5-4fc4-bd5a-8778d3240be9/snapshots/08decaa9-7727-458c-8471-3ec005be22f1/disks

GET API response:
<disks>
<disk id="e9d1b789-34b9-4479-9368-36dde94702a9">
<name>vm_TestCase18336_REST_ISCS_0811134095_Disk1</name>
<actual_size>1073741824</actual_size>
<alias>vm_TestCase18336_REST_ISCS_0811134095_Disk1</alias>
<format>cow</format>
<image_id>00033ccf-7c8d-4adf-928c-7b1452f67167</image_id>
<propagate_errors>false</propagate_errors>
<provisioned_size>6442450944</provisioned_size>
<qcow_version>qcow2_v2</qcow_version>
<read_only>false</read_only>
<shareable>false</shareable>
<sparse>true</sparse>
<status>ok</status>
<storage_type>image</storage_type>
<wipe_after_delete>false</wipe_after_delete>
<snapshot id="08decaa9-7727-458c-8471-3ec005be22f1"/>
<storage_domains>
<storage_domain id="0b54b1f7-a438-4690-b858-35ce3ba93123"/>
</storage_domains>
</disk>
</disks>

PATH 2 - from the SD (storage domain) path you see 'qcow2_v3' for the SAME snapshot disk (e9d1b789-34b9-4479-9368-36dde94702a9):

full path:
https://storage-ge-04.scl.lab.tlv.redhat.com/ovirt-engine/api/storagedomains/0b54b1f7-a438-4690-b858-35ce3ba93123/disksnapshots

<disks>
<disk id="e9d1b789-34b9-4479-9368-36dde94702a9">
<name>vm_TestCase18336_REST_ISCS_0811134095_Disk1</name>
<actual_size>1073741824</actual_size>
<alias>vm_TestCase18336_REST_ISCS_0811134095_Disk1</alias>
<format>cow</format>
<image_id>00033ccf-7c8d-4adf-928c-7b1452f67167</image_id>
<propagate_errors>false</propagate_errors>
<provisioned_size>6442450944</provisioned_size>
<qcow_version>qcow2_v2</qcow_version>
<read_only>false</read_only>
<shareable>false</shareable>
<sparse>true</sparse>
<status>ok</status>
<storage_type>image</storage_type>
<wipe_after_delete>false</wipe_after_delete>
<snapshot id="08decaa9-7727-458c-8471-3ec005be22f1"/>
<storage_domains>
<storage_domain id="0b54b1f7-a438-4690-b858-35ce3ba93123"/>
</storage_domains>
</disk>
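
For illustration, a minimal comparison sketch (assuming Python with the requests library; the engine URL and IDs are copied from the paths above, and the credentials are placeholders, not the real ones):

    import requests
    import xml.etree.ElementTree as ET

    ENGINE = "https://storage-ge-04.scl.lab.tlv.redhat.com/ovirt-engine/api"
    AUTH = ("admin@internal", "password")  # placeholder credentials
    VM = "73f21062-3fa5-4fc4-bd5a-8778d3240be9"
    SNAP = "08decaa9-7727-458c-8471-3ec005be22f1"
    SD = "0b54b1f7-a438-4690-b858-35ce3ba93123"
    DISK = "e9d1b789-34b9-4479-9368-36dde94702a9"

    def qcow_versions(url):
        """Map disk id -> <qcow_version> text for every <disk> in the response."""
        resp = requests.get(url, auth=AUTH, verify=False,
                            headers={"Accept": "application/xml"})
        resp.raise_for_status()
        root = ET.fromstring(resp.content)
        return {d.get("id"): d.findtext("qcow_version") for d in root.findall("disk")}

    vm_view = qcow_versions(ENGINE + "/vms/" + VM + "/snapshots/" + SNAP + "/disks")
    sd_view = qcow_versions(ENGINE + "/storagedomains/" + SD + "/disksnapshots")

    # After the amend, the two paths disagree for the same snapshot disk:
    print("VM path:", vm_view.get(DISK))  # qcow2_v2 (from the snapshot's OVF)
    print("SD path:", sd_view.get(DISK))  # qcow2_v3 (actual volume compat)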



Actual results:
We get a different qcow version for the same snapshot disk from 2 different API paths (VM vs. StorageDomain).


Expected results:
We should get the same qcow version for the same snapshot disk object from both paths.

Additional info:
When checking with a new DC (V4) on the 2 paths (VM, SD) we see a consistent result: both are qcow2_v3.

So it looks like the amend only updates the value reported from the SD path and not from the VM path.

Comment 1 Allon Mureinik 2017-03-12 11:50:20 UTC
Maor - IIUC, this should be covered by your recent work.

Comment 2 Maor 2017-03-12 12:27:52 UTC
I don't understand.
Both outputs show the same qcow version, qcow2_v2.


full path:
https://storage-ge-04.scl.lab.tlv.redhat.com/ovirt-engine/api/vms/73f21062-3fa5-4fc4-bd5a-8778d3240be9/snapshots/08decaa9-7727-458c-8471-3ec005be22f1/disks

GET API response:
<disks>
....
<qcow_version>qcow2_v2</qcow_version>


PATH 2 - from the SD (storage domain) path you see 'qcow2_v3' for the SAME snapshot disk (e9d1b789-34b9-4479-9368-36dde94702a9)

full path:
https://storage-ge-04.scl.lab.tlv.redhat.com/ovirt-engine/api/storagedomains/0b54b1f7-a438-4690-b858-35ce3ba93123/disksnapshots

<disks>
...
<qcow_version>qcow2_v2</qcow_version>
...
</disk>

Comment 3 Avihai 2017-03-13 08:10:41 UTC
(In reply to Maor from comment #2)
> I don't understand.
> both outputs show the same qcow version qcow2_v2
> 
> 
> full path:
> https://storage-ge-04.scl.lab.tlv.redhat.com/ovirt-engine/api/vms/73f21062-
> 3fa5-4fc4-bd5a-8778d3240be9/snapshots/08decaa9-7727-458c-8471-3ec005be22f1/
> disks
> 
> GET API response:
> <disks>
> ....
> <qcow_version>qcow2_v2</qcow_version>
> 
> 
> 2) from SD(storage domain) path you see 'qcow_v3' for SAME snapshot disk
> (e9d1b789-34b9-4479-9368-36dde94702a9) 
> 
> full path:
> https://storage-ge-04.scl.lab.tlv.redhat.com/ovirt-engine/api/storagedomains/
> 0b54b1f7-a438-4690-b858-35ce3ba93123/disksnapshots
> 
> <disks>
> ...
> <qcow_version>qcow2_v2</qcow_version>
> ...
> </disk>


Wrong copy-paste in the second path (SD path); this is the right one, with
qcow_version = 'qcow2_v3':

<disks>
<disk id="e9d1b789-34b9-4479-9368-36dde94702a9">
<name>vm_TestCase18336_REST_ISCS_0811134095_Disk1</name>
<actual_size>1073741824</actual_size>
<alias>vm_TestCase18336_REST_ISCS_0811134095_Disk1</alias>
<format>cow</format>
<image_id>00033ccf-7c8d-4adf-928c-7b1452f67167</image_id>
<propagate_errors>false</propagate_errors>
<provisioned_size>6442450944</provisioned_size>
<qcow_version>qcow2_v3</qcow_version>
<read_only>false</read_only>
<shareable>false</shareable>
<sparse>true</sparse>
<status>ok</status>
<storage_type>image</storage_type>
<wipe_after_delete>false</wipe_after_delete>
<snapshot id="08decaa9-7727-458c-8471-3ec005be22f1"/>
<storage_domains>
<storage_domain id="0b54b1f7-a438-4690-b858-35ce3ba93123"/>
</storage_domains>
</disk>
</disks>

Comment 4 Maor 2017-03-13 08:56:21 UTC
Thanks for the info, I understand now.
The API request vms/73f21062-3fa5-4fc4-bd5a-8778d3240be9/snapshots/08decaa9-7727-458c-8471-3ec005be22f1/disks
fetches the disks' data from the OVF.
Since at the time of the snapshot creation those disks were qcow version 0.10, even after you amend the disk you still see that version in the snapshot's OVF, which I think is OK since it tells the user what the qcow version was at that time.
Regarding the qcow version in the OVF, functionality-wise it is not relevant, since the actual qcow version is determined by VDSM.

Based on the functionality section in http://www.ovirt.org/develop/release-management/features/storage/qcow2v3/:
"new QCOW volumes that will be created on a V4 Storage Domains will be created with 1.1 compatibility level. That also includes snapshots"

I can update the wiki so that it also covers this scenario more specifically.
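
As a hedged illustration of what the two labels mean on disk: qcow2_v2 corresponds to qemu-img compat "0.10" and qcow2_v3 to compat "1.1". The volume path below is hypothetical and would have to be replaced with the real image path on the host:

    import json
    import subprocess

    # Hypothetical path; on a real host the volume lives under /rhev/data-center/...
    VOLUME = "/path/to/00033ccf-7c8d-4adf-928c-7b1452f67167"

    info = json.loads(
        subprocess.check_output(["qemu-img", "info", "--output=json", VOLUME])
    )
    # For qcow2 images, "format-specific" carries the compat level:
    # "0.10" (qcow2_v2) before the amend, "1.1" (qcow2_v3) after it.
    print(info["format-specific"]["data"]["compat"])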

Comment 5 Avihai 2017-03-13 09:56:37 UTC
(In reply to Maor from comment #4)
> Thanks for the info, I understand now.
> The API request
> vms/73f21062-3fa5-4fc4-bd5a-8778d3240be9/snapshots/08decaa9-7727-458c-8471-
> 3ec005be22f1/disks
> fetched the disks data from the OVF.
> Since in that time of the snapshot creation, those disks were qcow version
> 0.10, even after you amend the disk you still see the version in the OVF of
> the snapshot, which I think is ok since it tells the user what was the qcow
> version in that time.
> Regarding the qcow version in the OVF, functionality wise it is not
> relevant, since the qcow version is determined by the VDSM.
> 
> Based on the functionality section in
> http://www.ovirt.org/develop/release-management/features/storage/qcow2v3/:
> "new QCOW volumes that will be created on a V4 Storage Domains will be
> created with 1.1 compatibility level. That also includes snapshots"
> 
> I can update the wiki that will also indicate this scenario more
> specifically.

In amend you change the QCOW version of both diskX + all disk snapshots of that diskX.

So when you use the GET API on VM/<snapshot-id>/disks, you get the snapshot disks at the point in time when the snapshot was taken, right?

This is what is confusing here from my POV:
if you change the qcow_version of the disk snapshots (which are the same disk at different points in time), it is like you are changing the disk's qcow_version at all of the points in time the snapshots were taken.

Can you please clarify, either here or in the wiki?

Comment 6 Avihai 2017-04-27 06:11:02 UTC
Cannot verify due to Bug 1445950 (qcow_version field does not exist in the vms path).

Comment 7 Avihai 2017-04-27 07:42:08 UTC
Maor, this bug still cannot be verified due to Bug 1445950 (qcow_version field does not exist in the vms path).

If the fix is not to show qcow_version from the vms path, then please close this bug as WONTFIX.

Comment 8 Maor 2017-04-30 12:17:32 UTC
GET API from VM/<snapshot-id>/disks returns the snapshot's disks at the point in time when the snapshot was taken, based on the snapshot's OVF.
The QCOW compat from the OVF might be misleading for two reasons:
1. Once amend is executed on the snapshot's volume, the volume's compatibility level changes, so the OVF would reflect the wrong QCOW compat.
2. Reflecting the snapshot's compatibility level might also be misleading when previewing a snapshot, since a new volume will be created with a compatibility level that matches the storage domain's version.

For these reasons, the compatibility level will not be part of the OVF.
The VM API, /VM/<snapshot-id>/disks, reflects the data from the snapshot's OVF, and therefore the compatibility level will not be exposed through this API.

The QCOW compatibility level of the snapshots' volumes will be reflected only through the storage domain's API, for example:
   /storagedomains/1111/disksnapshots

Since /VM/<snapshot-id>/disks no longer reflects any QCOW compat data (based on Avihai's comment in https://bugzilla.redhat.com/show_bug.cgi?id=1430447#c6), we can assume this is the right approach.
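
As a possible verification check, reusing the illustrative names and the qcow_versions() helper from the sketch in the description, and assuming the fix from gerrit 74008/74641 is deployed:

    # With compat removed from the OVF, the VM path should carry no qcow_version
    # element at all, while the storage domain path still reports the volume's
    # actual compat after the amend.
    vm_view = qcow_versions(ENGINE + "/vms/" + VM + "/snapshots/" + SNAP + "/disks")
    sd_view = qcow_versions(ENGINE + "/storagedomains/" + SD + "/disksnapshots")

    assert vm_view.get(DISK) is None
    assert sd_view.get(DISK) == "qcow2_v3"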

After discussing this approach/behavior with Tal it was decided to close this bug as not a bug.
(Avihai, I think we can also mark this bug as verified since the QCOW compat is not reflected through the OVF.
Please feel free to change the status if you think otherwise.)

