Bug 1648917 - New disks cloned from template get wrong quota-id, when quota is disabled on DC
Summary: New disks cloned from template get wrong quota-id, when quota is disabled on DC
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: General
Version: 4.2.5.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ovirt-4.3.1
Target Release: ---
Assignee: Andrej Krejcir
QA Contact: Liran Rotenberg
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-11-12 13:06 UTC by Florian Schmid
Modified: 2019-03-01 10:20 UTC
CC List: 3 users

Fixed In Version: ovirt-engine-4.3.1.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-01 10:20:20 UTC
oVirt Team: Virt
Embargoed:
rule-engine: ovirt-4.3+


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 95988 0 master MERGED webadmin: Small cleanup of quota logic. 2019-02-05 16:39:23 UTC
oVirt gerrit 95989 0 master MERGED webadmin: Clean quota list in VM dialog 2019-02-05 18:45:44 UTC

Description Florian Schmid 2018-11-12 13:06:27 UTC
Description of problem:
After upgrading from 4.1.6 to 4.1.9 and then to 4.2.5, whenever I create a new VM from a template on a DC where quota is not enabled, the new disks cloned from the template during VM creation get the wrong quota ID, namely the one from the Default DC.

Disks created through the VM creation interface cannot be edited, resized or deleted.
Only changing the quota ID in the database repairs the issue.
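For illustration only, the manual DB fix meant here would be something along these lines, assuming the affected field is the quota_id column of image_storage_domain_map and that the first column of the query output further below is the image ID; the UUIDs are taken from the example below and would have to be replaced with the correct quota ID of the target DC:

UPDATE image_storage_domain_map
   SET quota_id = '58ab004a-009a-00ea-031c-000000000182'
 WHERE image_id = 'a50b46ce-e350-40a4-8f00-968529777446';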

Disks belonging to the template have the correct quota ID.

After VM creation is complete, adding a new disk to the VM results in the correct quota ID for the correct DC.


Mailing list link:
https://lists.ovirt.org/archives/list/users@ovirt.org/thread/YIJCGKHVJXSTTVVTWQIRJH7KNEXAGBXX/

Version-Release number of selected component (if applicable):
4.2.5.2-1.el7


How reproducible:

Steps to Reproduce:
1. Two different DCs: the Default DC with quota enabled and a second DC without quota enabled.
2. Create a new VM from the template on the DC without quota, cloning the disks. The template is located on the DC without quota, and the template disks have the correct quota ID.
3. The new disks of the new VM now have the quota ID from the Default DC, where quota is enabled, instead of having no quota ID.


Actual results:
-> Disks have the wrong quota ID -> editing, resizing or deleting these disks is not possible


Expected results:
-> Disks should have no quota assigned at all, or at least the "hidden" default quota of the DC where quota is not enabled


Additional info:
Example from engine DB:
select * from image_storage_domain_map where storage_domain_id = '73caedd0-6ef3-46e0-a705-fe268f04f9cc';
->
...
 a50b46ce-e350-40a4-8f00-968529777446 | 73caedd0-6ef3-46e0-a705-fe268f04f9cc | 58ab004a-0315-00d0-02b8-00000000011d | 4a7a0fea-9bc4-4c3a-b3f4-0e8444641ea3
 1cd8f9d9-e2b5-4dec-aa3b-ade2612ed3e7 | 73caedd0-6ef3-46e0-a705-fe268f04f9cc | 58ab004a-009a-00ea-031c-000000000182 | 4a7a0fea-9bc4-4c3a-b3f4-0e8444641ea3
 2f982856-7afa-4f18-a676-fe2cc44b14d6 | 73caedd0-6ef3-46e0-a705-fe268f04f9cc | 58ab004a-009a-00ea-031c-000000000182 | 4a7a0fea-9bc4-4c3a-b3f4-0e8444641ea3
...
->
ll ./6d979004-cb4c-468e-b89a-a292407abafb/
total 1420
-rw-rw----. 1 vdsm kvm 1073741824 Nov  6 15:46 a50b46ce-e350-40a4-8f00-968529777446
-rw-rw----. 1 vdsm kvm    1048576 Nov  6 15:46 a50b46ce-e350-40a4-8f00-968529777446.lease
-rw-r--r--. 1 vdsm kvm        271 Nov  6 15:46 a50b46ce-e350-40a4-8f00-968529777446.meta

As you can see, the disk a50b46ce-e350-40a4-8f00-968529777446 was created only a few minutes ago, but it has a different default quota ID (third column in the output above) assigned than the other disks:
58ab004a-0315-00d0-02b8-00000000011d instead of 58ab004a-009a-00ea-031c-000000000182


select * from quota;
                  id                  |           storage_pool_id            | quota_name |       description       |         _create_date          | _update_date | threshold_cluster_percentage | threshold_storage_percentage | grace_cluster_percentage | grace_storage_percentage | is_default
--------------------------------------+--------------------------------------+------------+-------------------------+-------------------------------+--------------+------------------------------+------------------------------+--------------------------+--------------------------+------------
 58ab004a-0315-00d0-02b8-00000000011d | 00000001-0001-0001-0001-000000000089 | Default    | Default unlimited quota | 2017-02-20 14:42:18.967236+00 |              |                           80 |                           80 |                       20 |                       20 | t
 58ab004a-009a-00ea-031c-000000000182 | 5507b0a6-9170-4f42-90a7-80d22d4238c6 | Default    | Default unlimited quota | 2017-02-20 14:42:18.967236+00 |              |                           80 |                           80 |                       20 |                       20 | t

As you can see here, both quota IDs are default quotas, but they belong to different DCs (different storage_pool_id values) and therefore cannot both be valid for the same DC or storage domain.
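For illustration, a query along these lines could be used to spot affected disks by joining the two tables shown above; each quota row carries its DC's storage_pool_id, so a quota whose storage_pool_id does not match the DC of the storage domain points to an affected disk (table and column names as used in the queries above):

select m.image_id, m.quota_id, q.storage_pool_id
  from image_storage_domain_map m
  join quota q on q.id = m.quota_id
 where m.storage_domain_id = '73caedd0-6ef3-46e0-a705-fe268f04f9cc';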


Workaround:
Enable quota on all DCs, even when not actually using it. This prevents the wrong quota ID from being assigned.

Comment 1 Ryan Barry 2019-01-21 14:53:52 UTC
Re-targeting to 4.3.1 since it is missing a patch, an acked blocker flag, or both

Comment 2 Andrej Krejcir 2019-02-12 11:57:12 UTC
Steps to verify:
1. Enable quota for the 'Default' DC, or, if the 'Default' DC does not exist, for the DC that is selected by default in the New VM popup.
2. Have a different DC that has quota disabled.
3. On the DC without quota, have a VM template with disks.
4. Create a new VM from this template.
5. Try to update or remove a disk from this VM.

Before fix:
The new VM's disks use the quota from the 'Default' DC, which is an invalid configuration.
As a result, it is not possible to update or remove the disks.

After fix:
The VM's disks use the correct quota, and it is possible to update or remove them.

Comment 3 Liran Rotenberg 2019-02-12 12:13:56 UTC
Verified on:
ovirt-engine-4.3.0.5-0.0.master.20190210112640.git53b60e3.el7.noarch

Steps:
1. Enable quota for the 'Default' DC, or, if the 'Default' DC does not exist, for the DC that is selected by default in the New VM popup.
2. Have a different DC that has quota disabled.
3. On the DC without quota, have a VM template with disks.
4. Create a new VM from this template.
5. Check the DB quota IDs (an additional check is sketched after this list):
# /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select * from quota;"
6. Try to update or remove a disk from this VM.
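For illustration, besides listing the quotas themselves, a query along these lines could be used to see which quota ID the new VM's disks actually received (using the image_storage_domain_map table referenced in the original report):

# /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select image_id, quota_id from image_storage_domain_map;"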

Results:
Tested twice: once with the Default DC's quota mode set to Audit and once with it set to Enforced.
In step 5, the quota IDs were different for each DC:
# /usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "select * from quota;"
                  id                  |           storage_pool_id            | quota_name |       description       |         _create_date          | _update_date | threshold_cluster_percentage | threshold_storage_percentage | grace_cluster_percentage | grace_storage_percentage | is_default 
--------------------------------------+--------------------------------------+------------+-------------------------+-------------------------------+--------------+------------------------------+------------------------------+--------------------------+--------------------------+------------
 d18b6ab8-1e30-11e9-953e-001a4a161064 | ccdbc35a-1e30-11e9-9895-001a4a161064 | Default    | Default unlimited quota | 2019-01-22 12:31:03.422017+02 |              |                           80 |                           80 |                       20 |                       20 | t
 a2d385e0-c087-4e29-b3dd-2813ac9df070 | 91fca4ae-2674-4177-9135-c5b64d2e752d | Default    | Default unlimited quota | 2019-01-22 12:32:37.314957+02 |              |                            0 |                            0 |                        0 |                        0 | t
 46b24a3b-d818-454a-8cea-51c3c2d594f5 | 311737c0-61ac-4702-a87d-0da36ec4b2aa | Default    | Default unlimited quota | 2019-02-12 10:32:07.641698+02 |              |                            0 |                            0 |                        0 |                        0 | t
(3 rows)

In step 6, changes to the VM's disk were made successfully, and I was able to remove the VM's disk.

Comment 4 Sandro Bonazzola 2019-03-01 10:20:20 UTC
This bug is included in the oVirt 4.3.1 release, published on February 28th 2019.

Since the problem described in this bug report should be resolved in the oVirt 4.3.1 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

