Description of problem:
VM pool automatic storage domain selection doesn't work for the latest template version.

Version-Release number of selected component (if applicable):
4.2.1.4-0.1.el7

How reproducible:
100%

Steps to Reproduce:
1. Create a template from any VM with a disk on the nfs_0 storage domain
2. Copy the template disk to the nfs_1 storage domain
3. Create a VM pool from this template using the latest template version, and select automatic storage domain selection under Resource Allocation
4. Create a new template version from one pool VM
5. Copy the new template version's disk to the nfs_2 storage domain
6. Stop the pool VMs

Actual results:
All VMs are recreated on the master storage domain.

Expected results:
VMs are recreated according to the latest template version, e.g. the VMs' disks are distributed evenly between the stopped VMs' storage domain and nfs_2.

Additional info:
For oVirt issues, please file oVirt bugs, not RHV bugs.
The same happens for a template with two disks. E.g. the template has two disks: disk_1 has copies on nfs_0 and nfs_1, and disk_2 only on nfs_0. disk_1 should be distributed between nfs_0 and nfs_1, whereas in fact all disks are created on the nfs_1 (master) storage domain.
Please note when testing that the disks of the VMs are not necessarily distributed evenly between the storage domains. Disks are created first on the storage domain that has the most free space, to make space usage across the domains more equal (see bug 1081536).
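The placement behavior described above can be sketched as a greedy choice of the domain with the most free space at each allocation. This is an illustrative simulation only, not the actual oVirt engine code; the domain names and sizes are made up for the example.

```python
# Illustrative sketch (NOT the actual oVirt engine implementation) of the
# heuristic described in comment 3: each new disk is placed on the storage
# domain reporting the most free space, equalizing usage over time.

def place_disks(free_space, disk_sizes):
    """Assign each disk to the domain with the most free space.

    free_space: dict mapping domain name -> free GB (mutated as disks land)
    disk_sizes: list of disk sizes in GB
    Returns a list of (disk_size, chosen_domain) placements.
    """
    placements = []
    for size in disk_sizes:
        # max() returns the first domain on a tie, which matters when
        # several domains report identical free space (see comment 11).
        target = max(free_space, key=free_space.get)
        free_space[target] -= size
        placements.append((size, target))
    return placements

domains = {"nfs_0": 100, "nfs_2": 80}
result = place_disks(domains, [10, 10, 10, 10])
print(result)  # [(10, 'nfs_0'), (10, 'nfs_0'), (10, 'nfs_0'), (10, 'nfs_2')]
```

Note that the disks are not split 2/2: nfs_0 receives disks until its free space drops to the level of nfs_2, which is why an "uneven" distribution can still be correct behavior.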
The bug is still present in RHEV 4.2.2.4-0.1.el7.
(In reply to Vladimir from comment #4)
> Bug is still present on RHEV 4.2.2.4-0.1.el7

Can you please describe the verification scenario in detail?
Version-Release number of selected component (if applicable):
4.2.2.4-0.1.el7

How reproducible:
100%

Steps to Reproduce:
1. Create a template from any VM with a disk on the nfs_0 storage domain
2. Copy the template disk to the nfs_1 storage domain
3. Create a VM pool with 8 VMs from this template using the latest template version, and select automatic storage domain selection under Resource Allocation
4. Run the pool VMs (the VMs' disks are distributed between the nfs_0 and nfs_1 domains)
5. Create a new template version from any VM with a disk on nfs_0
6. Copy the new template version's disk to the nfs_2 storage domain
7. Stop the pool VMs

Actual results:
Via REST API: all VMs' disks are recreated on the nfs_0 domain.
Via UI: disks are redistributed, 5 on nfs_2 and 3 on nfs_0.

Expected results:
VMs' disks are distributed evenly between the nfs_0 and nfs_2 domains.
(In reply to Vladimir from comment #6)
> Expected result: VMs disks are distributed evenly between nfs_0 and nfs_2
> domains

Did you check the free space on nfs_0 and nfs_2? Please read what I wrote in comment 3: the disks of the VMs are not necessarily distributed evenly between the storage domains. Disks are created on the storage domain that has the most free space.
(In reply to Vladimir from comment #8)
> Since it is our NFS storage under all domains, all three of them always have
> equal free space.

Then it allocates the same one again and again. For a proper test you'd need to use different, independent SDs.

> Also it doesn't explain why in the case of REST API all
> disks are created on one domain.

Sure, that's covered by bug 1547163.
(In reply to Michal Skrivanek from comment #9)
> (In reply to Vladimir from comment #8)
> > Since it is our NFS storage under all domains, all three of them always have
> > equal free space.
>
> then it allocates the same one again and again. For proper test you'd need
> to use different independent SDs
>
> > Also it doesn't explain why in the case of REST API all
> > disks are created on one domain.
>
> sure, that's covered bu bug 1547163

What do you mean by that? As far as I'm concerned, for RHEV those domains are quite independent; it's just that they are located on one NFS storage. Am I wrong?
(In reply to Vladimir from comment #10)
> What do you mean by that? As far as I'm concerned for RHEV those domains are
> quite independent, it's just that they are located on one nfs storage? Am I
> wrong?

They're sharing the same storage, so at any given point in time there is exactly the same amount of available space on each. So each allocation ends up on the first one. If there were a different amount of free space on each, it would have worked.
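The effect described in this comment can be illustrated with a small simulation: when every domain sits on the same backing store, they all report identical free space at every query, so a "pick the domain with the most free space" policy always ties and always resolves to the same first domain. This is a hypothetical model for illustration, not the engine's code; the names and sizes are invented.

```python
# Hypothetical illustration of comment 11: several storage domains backed
# by one shared NFS store always report equal free space, so a
# most-free-space picker selects the same domain every time.

class SharedBackingStore:
    """All domains draw from one pool, so reported free space is equal."""
    def __init__(self, free_gb):
        self.free_gb = free_gb

def pick_domain(domains):
    # Every domain reports the shared pool's free space; on a tie, max()
    # returns the first domain, so the choice never varies.
    return max(domains, key=lambda name: domains[name].free_gb)

pool = SharedBackingStore(300)
domains = {"nfs_0": pool, "nfs_1": pool, "nfs_2": pool}

chosen = []
for _ in range(4):
    name = pick_domain(domains)
    pool.free_gb -= 10   # each allocation shrinks the shared pool equally
    chosen.append(name)

print(chosen)  # ['nfs_0', 'nfs_0', 'nfs_0', 'nfs_0']
```

With truly independent storage domains, each allocation would reduce only the chosen domain's free space, breaking the tie on the next pick and spreading the disks out.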
(In reply to Michal Skrivanek from comment #11)
> (In reply to Vladimir from comment #10)
>
> > What do you mean by that? As far as I'm concerned for RHEV those domains are
> > quite independent, it's just that they are located on one nfs storage? Am I
> > wrong?
>
> they're sharing the same storage, so at any given point in time there is
> exactly the same amount of available space on each. So each allocation ends
> up on the first one. If you have different space available on each then it
> would have worked.

Why does distribution work fine in cases without the latest template version? E.g. disks are distributed evenly between nfs_0 and nfs_1 if the template has copies on both of them at pool creation.
Because they are allocated at the time you create the pool. With "latest" you're allocating one every time you shut a VM down.
Verified on 4.2.2.6-0.1.el7.

Checked the latest-template pool actions according to:
https://polarion.engineering.redhat.com/polarion/#/project/RHEVM3/workitem?id=RHEVM-17558&revision=1574866
https://polarion.engineering.redhat.com/polarion/#/project/RHEVM3/workitem?id=RHEVM-17386&revision=1574866
https://polarion.engineering.redhat.com/polarion/#/project/RHEVM3/workitem?id=RHEVM-17385&revision=1574866

The scenario described in this bugzilla is not a bug, but a feature.
This bugzilla is included in oVirt 4.2.2 release, published on March 28th 2018. Since the problem described in this bug report should be resolved in oVirt 4.2.2 release, it has been closed with a resolution of CURRENT RELEASE. If the solution does not work for you, please open a new bug report.