Bug 1430447
| Summary: | RESTAPI - after amend, a different qcow version is seen for the same snapshot disk from 2 different API paths |
|---|---|
| Product: | [oVirt] ovirt-engine |
| Component: | BLL.Storage |
| Status: | CLOSED NOTABUG |
| Severity: | medium |
| Priority: | unspecified |
| Version: | 4.1.1.3 |
| Target Milestone: | ovirt-4.1.2 |
| Target Release: | 4.1.2 |
| Hardware: | Unspecified |
| OS: | Unspecified |
| Reporter: | Avihai <aefrat> |
| Assignee: | Maor <mlipchuk> |
| QA Contact: | Avihai <aefrat> |
| Docs Contact: | |
| CC: | aefrat, amureini, bugs, lveyde, tnisan |
| Flags: | rule-engine: ovirt-4.1+ |
| Whiteboard: | |
| Fixed In Version: | |
| Doc Type: | Bug Fix |
| Story Points: | --- |
| Clone Of: | |
| Environment: | |
| Last Closed: | 2017-04-30 12:17:32 UTC |
| Type: | Bug |
| Regression: | --- |
| Mount Type: | --- |
| Documentation: | --- |
| CRM: | |
| Verified Versions: | |
| Category: | --- |
| oVirt Team: | Storage |
| RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- |
| Target Upstream Version: | |
| Embargoed: | |
| Bug Depends On: | 1445950 |
| Bug Blocks: | 1432493 |
| Attachments: | |

Doc Text:

Cause:
A GET on vms/<vm-id>/snapshots/<snapshot-id>/disks returns the snapshot's disks at the specific point in time when the snapshot was taken, based on the OVF of the snapshot.

Consequence:
The QCOW compat level taken from the OVF might be misleading for two reasons:
1) Once amend is executed on the snapshot's volume, the volume's compatibility level changes, so the OVF reflects stale data.
2) Reflecting the compatibility level of the snapshot might also be misleading when previewing a snapshot, since a new volume is created with a compat level matching the storage domain's version.

Fix:
The compatibility level is no longer part of the OVF and is therefore not reflected through the VM's API:
/VM/<snapshot-id>/disks
The QCOW volume compatibility level of snapshots is reflected only through the storage domain's API, for example:
/storagedomains/1111/disksnapshots

Result:
/VM/<snapshot-id>/disks does not reflect the compat level through the VM's OVF.
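For context, the two API paths named above can be compared side by side. The following is a minimal sketch, not part of the bug report: the engine URL and credentials are placeholders, basic authentication is an assumption, the VM, snapshot, storage domain, and disk IDs are the ones quoted in the comments below, and TLS verification is disabled purely for brevity.

```python
# Minimal sketch: compare the qcow_version the two REST paths report for
# the same snapshot disk. The engine URL and credentials are placeholders;
# the IDs are the ones quoted in this bug's comments.
import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/ovirt-engine/api"  # placeholder
AUTH = ("admin@internal", "password")                   # assumption: basic auth

VM_ID = "73f21062-3fa5-4fc4-bd5a-8778d3240be9"
SNAPSHOT_ID = "08decaa9-7727-458c-8471-3ec005be22f1"
SD_ID = "0b54b1f7-a438-4690-b858-35ce3ba93123"
DISK_ID = "e9d1b789-34b9-4479-9368-36dde94702a9"

def qcow_versions(url):
    """Map disk id -> qcow_version text (or None) from a <disks> XML response."""
    resp = requests.get(url, auth=AUTH, verify=False,
                        headers={"Accept": "application/xml"})
    resp.raise_for_status()
    root = ET.fromstring(resp.content)
    return {disk.get("id"): disk.findtext("qcow_version")
            for disk in root.findall("disk")}

# Point-in-time view, served from the snapshot's OVF.
vm_view = qcow_versions(f"{ENGINE}/vms/{VM_ID}/snapshots/{SNAPSHOT_ID}/disks")
# Live view, served from the storage domain's volumes.
sd_view = qcow_versions(f"{ENGINE}/storagedomains/{SD_ID}/disksnapshots")

print("VM/OVF view:        ", vm_view.get(DISK_ID))
print("storage-domain view:", sd_view.get(DISK_ID))
```

Before the change described in the Doc Text, these two values could legitimately disagree after an amend; afterwards the VM path stops reporting the field altogether.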
Description
Avihai
2017-03-08 15:39:58 UTC
Maor - IIUC, this should be covered by your recent work.

---

Maor (comment #2):

I don't understand; both outputs show the same qcow version, qcow2_v2.

full path:
https://storage-ge-04.scl.lab.tlv.redhat.com/ovirt-engine/api/vms/73f21062-3fa5-4fc4-bd5a-8778d3240be9/snapshots/08decaa9-7727-458c-8471-3ec005be22f1/disks

GET API response:

```xml
<disks>
....
<qcow_version>qcow2_v2</qcow_version>
```

2) from the SD (storage domain) path you see 'qcow_v3' for the SAME snapshot disk (e9d1b789-34b9-4479-9368-36dde94702a9)

full path:
https://storage-ge-04.scl.lab.tlv.redhat.com/ovirt-engine/api/storagedomains/0b54b1f7-a438-4690-b858-35ce3ba93123/disksnapshots

```xml
<disks>
...
<qcow_version>qcow2_v2</qcow_version>
...
</disk>
```

---

(In reply to Maor from comment #2)
> I don't understand;
> both outputs show the same qcow version, qcow2_v2.
>
> full path:
> https://storage-ge-04.scl.lab.tlv.redhat.com/ovirt-engine/api/vms/73f21062-3fa5-4fc4-bd5a-8778d3240be9/snapshots/08decaa9-7727-458c-8471-3ec005be22f1/disks
>
> GET API response:
> <disks>
> ....
> <qcow_version>qcow2_v2</qcow_version>
>
> 2) from the SD (storage domain) path you see 'qcow_v3' for the SAME snapshot disk
> (e9d1b789-34b9-4479-9368-36dde94702a9)
>
> full path:
> https://storage-ge-04.scl.lab.tlv.redhat.com/ovirt-engine/api/storagedomains/0b54b1f7-a438-4690-b858-35ce3ba93123/disksnapshots
>
> <disks>
> ...
> <qcow_version>qcow2_v2</qcow_version>
> ...
> </disk>

Wrong copy-paste in the second path (the SD path); this is the right one, with qcow_version = 'qcow2_v3':

```xml
<disks>
  <disk id="e9d1b789-34b9-4479-9368-36dde94702a9">
    <name>vm_TestCase18336_REST_ISCS_0811134095_Disk1</name>
    <actual_size>1073741824</actual_size>
    <alias>vm_TestCase18336_REST_ISCS_0811134095_Disk1</alias>
    <format>cow</format>
    <image_id>00033ccf-7c8d-4adf-928c-7b1452f67167</image_id>
    <propagate_errors>false</propagate_errors>
    <provisioned_size>6442450944</provisioned_size>
    <qcow_version>qcow2_v3</qcow_version>
    <read_only>false</read_only>
    <shareable>false</shareable>
    <sparse>true</sparse>
    <status>ok</status>
    <storage_type>image</storage_type>
    <wipe_after_delete>false</wipe_after_delete>
    <snapshot id="08decaa9-7727-458c-8471-3ec005be22f1"/>
    <storage_domains>
      <storage_domain id="0b54b1f7-a438-4690-b858-35ce3ba93123"/>
    </storage_domains>
  </disk>
</disks>
```

---

Maor (comment #4):

Thanks for the info, I understand now.

The API request vms/73f21062-3fa5-4fc4-bd5a-8778d3240be9/snapshots/08decaa9-7727-458c-8471-3ec005be22f1/disks fetched the disks' data from the OVF. Since at the time of the snapshot creation those disks were qcow version 0.10, even after you amend the disk you still see that version in the OVF of the snapshot, which I think is OK since it tells the user what the qcow version was at that time. Regarding the qcow version in the OVF, functionality-wise it is not relevant, since the qcow version is determined by VDSM.

Based on the functionality section in http://www.ovirt.org/develop/release-management/features/storage/qcow2v3/:
"new QCOW volumes that will be created on a V4 Storage Domains will be created with 1.1 compatibility level. That also includes snapshots"

I can update the wiki so that it also covers this scenario more specifically.

---

(In reply to Maor from comment #4)
> Thanks for the info, I understand now.
> The API request
> vms/73f21062-3fa5-4fc4-bd5a-8778d3240be9/snapshots/08decaa9-7727-458c-8471-3ec005be22f1/disks
> fetched the disks' data from the OVF.
> Since at the time of the snapshot creation those disks were qcow version
> 0.10, even after you amend the disk you still see that version in the OVF
> of the snapshot, which I think is OK since it tells the user what the qcow
> version was at that time.
> Regarding the qcow version in the OVF, functionality-wise it is not
> relevant, since the qcow version is determined by VDSM.
>
> Based on the functionality section in
> http://www.ovirt.org/develop/release-management/features/storage/qcow2v3/:
> "new QCOW volumes that will be created on a V4 Storage Domains will be
> created with 1.1 compatibility level. That also includes snapshots"
>
> I can update the wiki so that it also covers this scenario more
> specifically.

In amend you change the QCOW version of both diskX and all disksnapshots of that diskX. So when you use the GET API on VM/<snapshot-id>/disks, you get the snapshot disks at the point in time when the snapshot was taken, right?

This is what is confusing here from my POV: if you change a disksnapshot's qcow_version (which is the same disk at a different point in time), it is as if you are changing the disk's qcow_version at every point in time at which a snapshot was taken.

Can you please clarify, here or in the wiki?

---

Can not verify due to Bug 1445950 (the qcow_version field does not exist from the vms path).

---

Maor,

This bug still can not be verified due to Bug 1445950 (the qcow_version field does not exist from the vms path). If the fix is not to show qcow_version from the vms path, then please close this bug as won't fix.

---

Maor:

GET API from VM/<snapshot-id>/disks returns the snapshot's disks at the specific point in time when the snapshot was taken, based on the snapshot's OVF. The QCOW compat from the OVF might be misleading for two reasons:

1. Once amend is executed on the snapshot's volume, the volume's compatibility level changes, so the OVF reflects a wrong QCOW compat.
2. Reflecting the compatibility level of the snapshot might be misleading when previewing a snapshot, since a new volume is created with a compatibility level matching the storage domain's version.

For that reason, the compatibility level will not be part of the OVF. The VM's API, /VM/<snapshot-id>/disks, reflects the data from the snapshot's OVF, and therefore the compatibility level will not be reflected from this API. The QCOW volume compatibility level of the snapshots will be reflected only through the storage domain's API, for example:
/storagedomains/1111/disksnapshots

Since /VM/<snapshot-id>/disks does not reflect any QCOW compat data, based on Avihai's comment in https://bugzilla.redhat.com/show_bug.cgi?id=1430447#c6 we can assume this is the right approach. After discussing this approach/behavior with Tal, it was decided to close this bug as not a bug.

(Avihai, I think we can also mark this bug as verified, since the QCOW compat is not reflected through the OVF. Please feel free to change the status if you think otherwise.)
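To make the agreed resolution concrete, here is a small, self-contained verification sketch along the same lines as the earlier example. It is an assumption-laden illustration, not a test from this bug: the engine URL and credentials are placeholders, the IDs are the ones quoted above, and it encodes the expectation that the VM path stops exposing qcow_version while the storage-domain path keeps reporting the live value.

```python
# Sketch of the expected post-resolution behavior (placeholder engine URL
# and credentials; IDs taken from the comments above): the OVF-backed VM
# path should not expose <qcow_version>, while the storage-domain path
# should still report the live compat level.
import requests
import xml.etree.ElementTree as ET

ENGINE = "https://engine.example.com/ovirt-engine/api"  # placeholder
AUTH = ("admin@internal", "password")                   # assumption: basic auth
VM_ID = "73f21062-3fa5-4fc4-bd5a-8778d3240be9"
SNAPSHOT_ID = "08decaa9-7727-458c-8471-3ec005be22f1"
SD_ID = "0b54b1f7-a438-4690-b858-35ce3ba93123"
DISK_ID = "e9d1b789-34b9-4479-9368-36dde94702a9"

def qcow_version(url, disk_id):
    """Return the qcow_version text for one disk in a <disks> response, or None."""
    resp = requests.get(url, auth=AUTH, verify=False,
                        headers={"Accept": "application/xml"})
    resp.raise_for_status()
    for disk in ET.fromstring(resp.content).findall("disk"):
        if disk.get("id") == disk_id:
            return disk.findtext("qcow_version")
    return None

vm_value = qcow_version(f"{ENGINE}/vms/{VM_ID}/snapshots/{SNAPSHOT_ID}/disks",
                        DISK_ID)
sd_value = qcow_version(f"{ENGINE}/storagedomains/{SD_ID}/disksnapshots",
                        DISK_ID)

assert vm_value is None, "VM path should no longer expose qcow_version (OVF-backed)"
assert sd_value is not None, "SD path should report the live compat level"
print(f"OK: VM path hides qcow_version; SD path reports {sd_value}")
```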