Bug 1547162 - VM pool auto storage domain selection doesn't work for latest template
Summary: VM pool auto storage domain selection doesn't work for latest template
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Virt
Version: 4.2.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.2.2
Target Release: 4.2.2.3
Assignee: Shmuel Melamud
QA Contact: Vladimir
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-02-20 16:07 UTC by Vladimir
Modified: 2019-04-28 13:37 UTC
CC List: 8 users

Fixed In Version: ovirt-engine-4.2.2.3
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-18 12:24:45 UTC
oVirt Team: Virt
Embargoed:
rule-engine: ovirt-4.2+




Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 88318 0 master MERGED core: Auto SD selection on template version update 2018-03-01 12:22:16 UTC
oVirt gerrit 88404 0 ovirt-engine-4.2 MERGED core: Auto SD selection on template version update 2018-03-04 13:53:51 UTC

Description Vladimir 2018-02-20 16:07:26 UTC
Description of problem:
VM pool auto storage domain selection doesn't work for latest template

Version-Release number of selected component (if applicable):
4.2.1.4-0.1.el7


How reproducible: 100%


Steps to Reproduce:
1. Create a template from any VM with a disk on nfs_0 storage
2. Copy the template disk to nfs_1 storage
3. Create a VM pool from this template, using the latest template version, and select automatic storage domain selection under Resource Allocation
4. Create a new template version from one pool VM
5. Copy the new template version's disk to the nfs_2 storage
6. Stop the pool VMs

Actual results: all VMs are recreated on the master storage domain

Expected results: VMs are recreated according to the latest template version, i.e. the VMs' disks are distributed evenly between the stopped VMs' storage domain and nfs_2


Additional info:
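A minimal sketch of steps 1-3 above with the Python SDK (ovirtsdk4). The connection details are placeholders, and the use_latest_template_version / auto_storage_select attribute names are assumptions to verify against the API model of your engine version; this is not a tested script.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder connection details.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    insecure=True,
)

pools_service = connection.system_service().vm_pools_service()
pools_service.add(
    types.VmPool(
        name='latest_pool',
        cluster=types.Cluster(name='Default'),
        template=types.Template(name='template_on_nfs_0'),
        size=8,
        # Assumed attribute names for "latest template version" and the
        # automatic storage domain selection option under Resource Allocation.
        use_latest_template_version=True,
        auto_storage_select=True,
    )
)
connection.close()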

Comment 1 Michal Skrivanek 2018-02-21 07:51:16 UTC
For oVirt issues please file oVirt bugs, not RHV.

Comment 2 Vladimir 2018-02-21 13:35:44 UTC
The same happens for a template with two disks.
E.g. the template has two disks: disk_1 has copies on nfs_0 and nfs_1, and disk_2 only on nfs_0.
Disk_1 should be distributed between nfs_0 and nfs_1, whereas all disks are created on the nfs_1 (master) storage domain.

Comment 3 Shmuel Melamud 2018-03-01 12:31:38 UTC
Please note when testing that the disks of the VMs are not necessarily distributed evenly between the storage domains. Each disk is created on the storage domain that currently has the most free space, so that space usage across the domains evens out over time (see bug 1081536).
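
A minimal sketch of the selection rule described above, illustrative only (domain names and sizes are made up, and this is not the engine's actual code):

# Each disk goes to the candidate storage domain with the most free space,
# so space usage across the domains evens out as disks are placed.
def pick_storage_domain(domains):
    return max(domains, key=lambda d: d["free_gb"])

domains = [
    {"name": "nfs_0", "free_gb": 500},
    {"name": "nfs_1", "free_gb": 300},
]
for _ in range(8):              # place 8 pool VM disks of 40 GB each
    target = pick_storage_domain(domains)
    target["free_gb"] -= 40
    print(target["name"])       # mostly nfs_0 at first, alternating once free space evens out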

Comment 4 Vladimir 2018-03-20 09:08:25 UTC
The bug is still present on RHEV 4.2.2.4-0.1.el7.

Comment 5 Shmuel Melamud 2018-03-20 09:25:59 UTC
(In reply to Vladimir from comment #4)
> Bug is still present on  RHEV 4.2.2.4-0.1.el7

Can you please describe in detail the verification scenario?

Comment 6 Vladimir 2018-03-20 15:25:42 UTC
Version-Release number of selected component (if applicable):
4.2.2.4-0.1.el7

How reproducible: 100%

Steps to Reproduce:


1. Create a template from any VM with a disk on nfs_0 storage
2. Copy the template disk to nfs_1 storage
3. Create a VM pool with 8 VMs from this template, using the latest template version, and select automatic storage domain selection under Resource Allocation
4. Run the pool VMs (the VMs' disks are distributed between the nfs_0 and nfs_1 domains)
5. Create a new template version from any VM with a disk on nfs_0
6. Copy the new template version's disk to the nfs_2 storage
7. Stop the pool VMs

Actual result: via REST API: all VMs' disks are recreated on the nfs_0 domain
via UI: disks are redistributed, 5 on nfs_2 and 3 on nfs_0

Expected result: VMs' disks are distributed evenly between the nfs_0 and nfs_2 domains

Comment 7 Shmuel Melamud 2018-03-20 15:43:26 UTC
(In reply to Vladimir from comment #6)
> Expected result: VMs disks are distributed evenly between nfs_0 and nfs_2
> domains

Did you check the free space on nfs_0 and nfs_2? Please read what I wrote in comment 3:

The disks of the VMs are not necessarily distributed evenly between the storage domains. Disks are created on the storage domain that has the most free space.

Comment 9 Michal Skrivanek 2018-03-22 11:54:26 UTC
(In reply to Vladimir from comment #8)
> Since it is our NFS storage under all domains, all three of them always have
> equal free space. 

Then it allocates on the same one again and again. For a proper test you'd need to use different, independent SDs.

> Also it doesn't explain why in the case of REST API all
> disks are created on one domain.

Sure, that's covered by bug 1547163.

Comment 10 Vladimir 2018-03-22 13:01:27 UTC
(In reply to Michal Skrivanek from comment #9)
> (In reply to Vladimir from comment #8)
> > Since it is our NFS storage under all domains, all three of them always have
> > equal free space. 
> 
> then it allocates the same one again and again. For proper test you'd need
> to use different independent SDs
> 
> > Also it doesn't explain why in the case of REST API all
> > disks are created on one domain.
> 
> sure, that's covered bu bug 1547163

What do you mean by that? As far as I understand, for RHEV those domains are quite independent; it's just that they are located on one NFS storage. Am I wrong?

Comment 11 Michal Skrivanek 2018-03-22 13:06:39 UTC
(In reply to Vladimir from comment #10)

> What do you mean by that? As far as I'm concerned for RHEV those domains are
> quite independent, it's just that they are located on one nfs storage? Am I
> wrong?

They're sharing the same storage, so at any given point in time there is exactly the same amount of available space on each, and each allocation ends up on the first one. If each domain had a different amount of free space, it would have worked.
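
A tiny illustration of the point above: with a max-by-free-space pick, equal free space on every domain means the first candidate wins every time (illustrative only, not engine code):

# All three domains sit on the same NFS export, so their reported free space
# is always identical and the pick never moves off the first candidate.
domains = [
    {"name": "nfs_0", "free_gb": 400},
    {"name": "nfs_1", "free_gb": 400},
    {"name": "nfs_2", "free_gb": 400},
]
print(max(domains, key=lambda d: d["free_gb"])["name"])  # always nfs_0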

Comment 12 Vladimir 2018-03-27 07:06:12 UTC
(In reply to Michal Skrivanek from comment #11)
> (In reply to Vladimir from comment #10)
> 
> > What do you mean by that? As far as I'm concerned for RHEV those domains are
> > quite independent, it's just that they are located on one nfs storage? Am I
> > wrong?
> 
> they're sharing the same storage, so at any given point in time there is
> exactly the same amount of available space on each. So each allocation ends
> up on the first one. If you have different space available on each then it
> would have worked.

Why does the distribution work fine for cases without the latest template? E.g. disks are distributed evenly between nfs_0 and nfs_1 if the template has copies on both of them at pool creation.

Comment 13 Michal Skrivanek 2018-03-27 07:34:09 UTC
Because they are all allocated in one pass at the time you create the pool. With "latest" you're allocating a single disk every time you shut a VM down, and with equal free space on the domains that single allocation lands on the same one each time.
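
A minimal sketch of that timing difference, illustrative only; it assumes that placements within a single pool-creation pass account for the space already taken by earlier placements, while each per-shutdown placement re-reads the (equal) free space of the shared NFS export:

def pick(domains):
    # Assumed rule from comment 3: take the domain with the most free space.
    return max(domains, key=lambda d: d["free_gb"])

def place(domains, size_gb):
    d = pick(domains)
    d["free_gb"] -= size_gb
    return d["name"]

# Pool creation without "latest": all 8 disks placed in one pass, so the
# picks spread across the domains.
batch = [{"name": "nfs_0", "free_gb": 400}, {"name": "nfs_1", "free_gb": 400}]
print([place(batch, 10) for _ in range(8)])    # alternates nfs_0 / nfs_1

# "Latest" pool: one disk placed per VM shutdown; free space is re-read from
# the shared export each time, so it is equal again and nfs_0 wins every time.
for _ in range(8):
    fresh = [{"name": "nfs_0", "free_gb": 400}, {"name": "nfs_2", "free_gb": 400}]
    print(place(fresh, 10))                    # always nfs_0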

Comment 15 Sandro Bonazzola 2018-04-18 12:24:45 UTC
This bugzilla is included in the oVirt 4.2.2 release, published on March 28th 2018.

Since the problem described in this bug report should be resolved in the oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

