Created attachment 1231519 [details]
engine & vdsm logs (host1 - HSM, host2 - SPM)

Description of problem:
WebGUI & API force updateOVF does not update the OVF store with the VM and disk info.

Version-Release number of selected component (if applicable):
oVirt Engine Version: 4.1.0-0.2.master.20161212172238.gitea103bd.el7.centos

How reproducible:
100%

Setup:
1 DC, 1 cluster (C1)
2 hosts in cluster C1 (Host1 - HSM, Host2 - SPM)
3 storage domains (NFS, Gluster, iSCSI)

Steps to Reproduce:
1. Can be reproduced with ONLY 1 SD.
2. Scheduled updateOVF initiated to avoid bug 1403581 (last update was 2016-12-14 07:46:23).
3. Create a VM1 without disks (VM is down).
4. Force updateOVF on the NFS storage domain -> OVF not updated (bug 1404565).
5. Scheduled updateOVF initiated - VM updated & OVF created.
6. Remove VM1.
7. Initiate force updateOVF on the NFS storage domain (Dec 14, 2016 8:57:37 AM - OVF_STORE for domain nfs_dom was updated).
8. Check OVF store of the NFS domain -> OVF file indeed removed, as expected.
9. Add a disk (NFS domain, iSCSI, bootable) to VM1 (EVENT LOG: Dec 14, 2016 9:17:10 AM - The disk VM1_Disk1 was successfully added to VM VM1).
10. Initiate force updateOVF via API (Dec 14, 2016 9:20:29 AM) on the NFS storage domain (finished Dec 14, 2016 9:26:54 AM).
11. Check the OVF file.

Actual results:
The OVF store of the NFS domain appears updated according to the engine & VDSM logs, BUT the OVF is not updated with either the VM or the disk info.

Expected results:
The OVF store of the NFS domain should be updated & the OVF file should be there with the disk info in it.

Additional info:
1. This happens even after a scheduled OVF store update.
2. The VM was in down state.
3. After the next scheduled OVF update, the OVF store was updated as expected.
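For reference, step 10 above can be scripted. This is a minimal sketch of triggering the force OVF update through the engine REST API; it assumes the v4 `updateovfstore` action on a storage domain, and the engine URL, credentials, and storage-domain UUID are placeholders:

```python
#!/usr/bin/env python3
# Hedged sketch: force an OVF_STORE update via the oVirt REST API (v4).
# The engine base URL, user, password and SD UUID below are placeholders.
import base64
import ssl
import urllib.request


def ovf_update_url(engine_base, sd_id):
    """Build the 'updateovfstore' action URL for a storage domain."""
    return "%s/storagedomains/%s/updateovfstore" % (engine_base.rstrip("/"), sd_id)


def force_ovf_update(engine_base, sd_id, user, password, cafile=None):
    """POST an empty <action/> body to trigger the OVF store update."""
    req = urllib.request.Request(
        ovf_update_url(engine_base, sd_id), data=b"<action/>", method="POST")
    req.add_header("Content-Type", "application/xml")
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    ctx = ssl.create_default_context(cafile=cafile) if cafile else None
    return urllib.request.urlopen(req, context=ctx)


# Example (placeholder values, do not run against production):
# force_ovf_update("https://engine.example.com/ovirt-engine/api",
#                  "11111111-2222-3333-4444-555555555555",
#                  "admin@internal", "secret")
```

The ovirt-engine-sdk (`StorageDomainService`) wraps the same action; plain HTTP is shown here only to make the endpoint explicit.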
Correction -> Version-Release number of selected component (if applicable):
oVirt Engine Version: 4.1.0-0.2.master.20161212172238.gitea103bd.el7.centos
I also tried to add an additional disk, VM1_Disk2 (the first disk was VM1_Disk1), & force updateOVF, but I do not see it updated.

In conclusion:

What works:
- Remove VM

Not working:
1) If no scheduled update has run, nothing works! - bug
2) After a scheduled update, modify VM/disk + force update:
   - Create VM, no disk - bug #1404565
   - Create VM + disk (or 2 disks) - bug
   - Delete disk from VM
(In reply to Avihai from comment #2)
> In conclusion:
>
> What Works :
> Remove VM
>
> Not Working :
> 1)If not scheduled update Nothing works ! - bug
>
> 2) After scheduled update , modify VM/disk + force update:
>
> - create VM No disk - bug #1404565
> - create VM +disk (or 2disks) - bug
> - delete disk from VM

The last comment missed some bug IDs, so this is the corrected version:

Works (meaning OVF store content is updated):
- Remove VM

Not working (meaning OVF store content is NOT updated):
1) If no scheduled update has run, nothing works! - bug 1403581
2) After a scheduled update, modify VM/disk + force update:
   - Create VM, no disk - bug #1404565
   - Create VM + disk (or 2 disks) - bug
   - Delete disk from VM
Created attachment 1231574 [details]
Full scenario logs

These logs correlate to this full scenario, run from a clean configuration and including all issues found.

Setup:
1 DC, 1 cluster (C1)
2 hosts in cluster C1 (Host1 - HSM, Host2 - SPM)
3 storage domains (NFS, Gluster, iSCSI)

Steps to Reproduce:
1. Can be reproduced with ONLY 1 SD.
2. Scheduled updateOVF initiated to avoid bug 1403581 (last update was 2016-12-14 07:46:23).
3. Create a VM1 without disks (VM is down).
4. Force updateOVF on the NFS storage domain -> OVF not updated (bug 1404565).
5. Scheduled updateOVF initiated - VM updated & OVF created.
6. Remove VM1.
7. Initiate force updateOVF on the NFS storage domain (Dec 14, 2016 8:57:37 AM - OVF_STORE for domain nfs_dom was updated).
8. Check OVF store of the NFS domain -> OVF file indeed removed, as expected.
9. Add a disk (VM1_Disk1, NFS domain, iSCSI, bootable) to VM1 (EVENT LOG: Dec 14, 2016 9:17:10 AM - The disk VM1_Disk1 was successfully added to VM VM1).
10. Initiate force updateOVF via API (Dec 14, 2016 9:20:29 AM) on the NFS storage domain (finished Dec 14, 2016 9:26:54 AM).
11. Check OVF file -> not updated (opened bug XXX).
12. Scheduled OVF update (Dec 14, 2016 9:46 AM) -> OVF updated.
13. Add another disk (VM1_Disk2) from the same SD (NFS).
14. Initiate force updateOVF (Dec 14, 2016 11:02:21).
15. Check OVF file -> not updated.
16. Delete both disks.
17. Initiate force updateOVF (Dec 14, 2016 11:11:28).
18. Check OVF file -> OVF NOT UPDATED!
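The "check OVF file" steps above can also be scripted. On a file-based (NFS) domain the OVF_STORE disk is a raw image whose payload is a tar archive of per-VM OVF files, so listing its members shows whether the store was really rewritten; this is a minimal sketch, and the mount path in the usage comment is illustrative:

```python
# Hedged sketch: list the per-VM .ovf entries inside an OVF_STORE volume.
# Assumes a file-based domain, where the OVF_STORE volume is a plain tar.
import tarfile


def list_ovf_entries(image_path):
    """Return the names of .ovf members inside an OVF_STORE image."""
    with tarfile.open(image_path, mode="r") as tar:
        return [m.name for m in tar.getmembers() if m.name.endswith(".ovf")]


# Example on a host with the NFS domain mounted (path is illustrative):
# list_ovf_entries("/rhev/data-center/mnt/server:_export_nfs__dom/"
#                  "<sd_uuid>/images/<ovf_store_image_uuid>/<volume_uuid>")
```

Running this before and after a force updateOVF makes the regression visible: the member list (and the image mtime) stays unchanged even though the engine logs report the OVF_STORE as updated.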
Tal, you can aggregate/dup both bugs (#1404565, #1403581) into this bug, as they all stem from the same issue, & solve it here.
Why is this high severity? Reduced to Medium, unless I can understand the context.
(In reply to Yaniv Kaul from comment #6)
> Why is it high severity? Reduced to Medium, unless I can understand the
> context.

This bug aggregates both bug #1404565 & bug #1403581 (which is also high priority), as it looks like the root problem (modifying a VM/disk does not update the OVF store) is the same.

This means that most of the force OVF feature (RFE bug #1270562) does not work.

IMHO, a customer not having a working force OVF update looks like high severity to me, for the following reasons:

1) No real VM backup/export, and import storage domain does not work properly for the first hour, as OVFs are not updated when the customer needs them (until the scheduled OVF update occurs, which is a very disruptive workaround).

2) Even after a scheduled OVF update occurs, the customer cannot update the OVF store until the next scheduled update, meaning the OVF store is stale for the 1-hour interval. Import storage domain + VM backup/export will therefore not reflect any changes made until the next scheduled OVF update, and if the customer tries to import a storage domain or a VM backup/export, he will get a staler version than he expected.

3) If a faulty scheduled OVF update occurs, there is no way to recover from it except waiting an entire hour for the next scheduled OVF update.
*** Bug 1404565 has been marked as a duplicate of this bug. ***
Moving out all non-blockers/exceptions.
(In reply to Yaniv Dary from comment #9)
> Moving out all non blocker\exceptions.

Same, now moving to 4.1.4.
All these patches are included in 4.1.3.3
Verified at 4.1.3.4.