Bug 1404607
| Field | Value |
|---|---|
| Summary | Force updateOVF does not update OVF store when Adding VM/Disk or removing disk (Only removing VM works) |
| Product | [oVirt] ovirt-engine |
| Component | BLL.Storage |
| Status | CLOSED CURRENTRELEASE |
| Severity | medium |
| Priority | high |
| Version | 4.1.0 |
| Target Milestone | ovirt-4.1.3 |
| Target Release | 4.1.3.3 |
| Hardware | Unspecified |
| OS | Unspecified |
| Type | Bug |
| oVirt Team | Storage |
| Reporter | Avihai <aefrat> |
| Assignee | Maor <mlipchuk> |
| QA Contact | Avihai <aefrat> |
| CC | amureini, bugs, stirabos, tnisan |
| Flags | rule-engine: ovirt-4.1+ |
| Bug Blocks | 1270562 |
| Last Closed | 2017-07-06 13:22:19 UTC |
Description (Avihai, 2016-12-14 08:56:38 UTC)
Correction to "Version-Release number of selected component (if applicable)":
oVirt Engine Version: 4.1.0-0.2.master.20161212172238.gitea103bd.el7.centos

I also tried to add an additional disk, VM1_Disk2 (the first disk was VM1_Disk1), and ran force updateOVF, but I do not see it updated.

In conclusion:

What works:
- Remove VM

Not working:
1) If no scheduled update has run, nothing works - bug
2) After a scheduled update, modify VM/disk + force update:
- create VM without disk - bug 1404565
- create VM + disk (or 2 disks) - bug
- delete disk from VM

(In reply to Avihai from comment #2)
> I also tried to add an additional disk, VM1_Disk2 (the first disk was
> VM1_Disk1), and ran force updateOVF, but I do not see it updated.
>
> In conclusion:
>
> What works:
> - Remove VM
>
> Not working:
> 1) If no scheduled update has run, nothing works - bug
> 2) After a scheduled update, modify VM/disk + force update:
> - create VM without disk - bug 1404565
> - create VM + disk (or 2 disks) - bug
> - delete disk from VM

The last info was missing some bug IDs, so this is the corrected version:

Works (meaning OVF store content is updated):
- Remove VM

Not working (meaning OVF store content is NOT updated):
1) If no scheduled update has run, nothing works - bug 1403581
2) After a scheduled update, modify VM/disk + force update:
- create VM without disk - bug 1404565
- create VM + disk (or 2 disks) - this bug
- delete disk from VM

Created attachment 1231574 [details]: Full scenario logs

These logs correlate to the full scenario below, run from a clean configuration, including all issues found.

Setup:
- 1 DC
- 1 cluster (C1)
- 2 hosts in cluster C1 (Host1 - HSM, Host2 - SPM)
- 3 storage domains (NFS, Gluster, iSCSI)

Steps to Reproduce:
1. Can be reproduced with only 1 SD.
2. Scheduled updateOVF has run, to avoid bug 1403581 (last update was 2016-12-14 07:46:23).
3. Create VM1 without disks (VM is down).
4. Force updateOVF on the NFS storage domain -> OVF not updated (bug 1404565).
5. Scheduled updateOVF ran - VM updated and OVF created.
6. Remove VM1.
7. Initiate force updateOVF on the NFS storage domain (Dec 14, 2016 8:57:37 AM - OVF_STORE for domain nfs_dom was updated).
9. Check the OVF store of the NFS domain -> the OVF file was removed, as expected.
10. Add a disk (VM1_Disk1, NFS domain, iscsi, bootable) to VM1 (event log: Dec 14, 2016 9:17:10 AM - The disk VM1_Disk1 was successfully added to VM VM1).
11. Initiate force updateOVF via the API on the NFS storage domain (Dec 14, 2016 9:20:29 AM - Dec 14, 2016 9:26:54 AM); see the SDK sketch at the end of this report.
12. Check the OVF file -> not updated (opened bug XXX).
13. Scheduled OVF update (Dec 14, 2016 9:46 AM) -> OVF updated.
14. Add another disk (VM1_Disk2) from the same SD (NFS).
15. Initiate force updateOVF (Dec 14, 2016 11:02:21).
16. Check the OVF file -> not updated.
17. Delete both disks.
18. Initiate force updateOVF (Dec 14, 2016 11:11:28).
19. Check the OVF file -> OVF NOT UPDATED!

Tal, you can aggregate/dup both bugs (bug 1404565, bug 1403581) into this bug, as they all have the same issue, and solve it there.

Why is it high severity? Reduced to Medium, unless I can understand the context.

(In reply to Yaniv Kaul from comment #6)
> Why is it high severity? Reduced to Medium, unless I can understand the
> context.

This bug aggregates both bug 1404565 and bug 1403581 (which is also high priority), as it looks like the root problem (modifying a VM/disk does not update the OVF store) is the same. This means that most of the force OVF feature (RFE bug 1270562) does not work.
IMHO, a customer not having force OVF backup looks like high severity to me, for the following reasons:
1) No real VM backup/export, and import storage domain does not work properly for the first hour, since OVFs are not updated when the customer needs them (until the scheduled OVF update occurs, which is a very disruptive workaround).
2) Even after a scheduled OVF update occurs, the customer cannot update the OVF store until the next scheduled update, meaning the OVF store is not updated within the 1-hour interval. Import storage domain and VM backup/export will therefore not reflect any changes made until the next scheduled OVF update, and a customer who imports a storage domain or backs up/exports a VM will get a staler version than expected.
3) If a faulty scheduled OVF update occurs, there is no way to recover from it except waiting an entire hour for the next scheduled OVF update.

*** Bug 1404565 has been marked as a duplicate of this bug. ***

Moving out all non-blockers/exceptions.

(In reply to Yaniv Dary from comment #9)
> Moving out all non-blockers/exceptions.

Same, now moving to 4.1.4.

All these patches are included in 4.1.3.3.

Verified at 4.1.3.4.
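For context on the "force updateOVF via API" calls referenced in steps 11, 15, and 18 above, here is a minimal sketch of how that action might be triggered with the oVirt Python SDK v4. This is not taken from the bug report: the engine URL, credentials, and storage domain name are placeholders, and the `update_ovf_store()` method name is assumed to be the SDK binding for the REST `updateovfstore` action on a storage domain.

```python
# Hedged sketch: trigger an on-demand (force) OVF store update on one
# storage domain via the oVirt Python SDK v4. URL, credentials, and the
# domain name 'nfs_dom' are placeholders; update_ovf_store() is assumed
# to be the generated binding for POST .../storagedomains/<id>/updateovfstore.
import ovirtsdk4 as sdk

connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',  # placeholder engine URL
    username='admin@internal',
    password='password',
    insecure=True,  # lab setup only; prefer ca_file in production
)

sds_service = connection.system_service().storage_domains_service()

# Look up the storage domain by name (e.g. the NFS domain from the scenario).
sd = sds_service.list(search='name=nfs_dom')[0]

# Ask the engine to rewrite the OVF_STORE disks of this domain now,
# instead of waiting for the hourly scheduled update.
sds_service.storage_domain_service(sd.id).update_ovf_store()

connection.close()
```

Per RFE bug 1270562, this call is expected to refresh the OVF_STORE content immediately; this bug tracks the 4.1.0 cases (VM/disk added, disk removed) where the forced update left the OVF store stale until the next scheduled run.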