Bug 1705338 - Ghost OVFs are written when using floating SD to migrate VMs between 2 RHV environments.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Enterprise Virtualization Manager
Classification: Red Hat
Component: ovirt-engine
Version: 4.4.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ovirt-4.5.3
Target Release: ---
Assignee: Pavel Bar
QA Contact: Evelina Shames
URL:
Whiteboard:
Depends On:
Blocks: 1541529
 
Reported: 2019-05-02 05:55 UTC by Germano Veit Michel
Modified: 2022-11-16 12:17 UTC
CC List: 11 users

Fixed In Version: ovirt-engine-4.5.3.1
Doc Type: Bug Fix
Doc Text:
Previously, stale data sometimes remained in the "unregistered_ovf_of_entities" DB table. As a result, when a floating Storage Domain was used to migrate a VM and its disks from a source RHV environment to a destination RHV environment and was later imported back into the source RHV, the VM was listed under the "VM Import" tab but could not be imported, because all of its disks were by then located on another Storage Domain (in the destination RHV). In addition, after the first OVF update, the OVF of the VM reappeared on the floating Storage Domain as a "ghost" OVF. In this release, the DB table is filled correctly during Storage Domain attachment, so after the floating Storage Domain is re-attached in the source RHV, the VM does not appear under the "VM Import" tab and no "ghost" OVF is re-created by the OVF update. This ensures that the "unregistered_ovf_of_entities" DB table contains only up-to-date data and no irrelevant entries.
Clone Of:
Environment:
Last Closed: 2022-11-16 12:17:27 UTC
oVirt Team: Storage
Target Upstream Version:
Embargoed:




Links
Github oVirt ovirt-engine pull 678 (Merged): Trigger OvfDataUpdater when detaching a storage domain - last updated 2022-09-28 11:51:56 UTC
Github oVirt ovirt-engine pull 681 (Merged): Fix SD detach flow ("unregistered_ovf_of_entities" DB table) - last updated 2022-09-28 11:51:55 UTC
Red Hat Issue Tracker RHV-36981 - last updated 2021-08-31 12:14:23 UTC
Red Hat Product Errata RHSA-2022:8502 - last updated 2022-11-16 12:17:37 UTC

Description Germano Veit Michel 2019-05-02 05:55:37 UTC
Description of problem:

When using floating Data Domains to migrate VMs, the source RHV keeps the OVFs of already detached SDs in its database. The VMs are removed on detach, but their OVF records are kept. The floating SD is then attached to the destination RHV, the VMs are imported, and their disks are moved to a permanent storage domain. When the floating SD is detached from the destination and attached back to the source RHV, the source RHV writes onto the floating SD the OVFs of the VMs that were already imported and moved to another SD on the destination. The floating SD holds no disks, yet it now contains all the OVFs of the VMs it previously carried.

The problem happens on the source RHV, upon reattaching the floating SD.

The outcome is that the "VM Import" tab lists VMs that were already imported, because the source RHV writes incorrect OVFs to that storage. If the floating SD is used for several VMs over several iterations, this becomes very confusing, and it is also simply incorrect.
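
The leftover database state can be inspected directly on the source RHV. A minimal sketch against the engine PostgreSQL database (vm_ovf_generations is the table that holds the per-VM OVF generation records; substitute the VM ID):

-- Sketch: the OVF generation row the source engine still holds for a VM
-- that was removed from it when the floating SD was detached.
SELECT vm_guid, ovf_generation
  FROM vm_ovf_generations
 WHERE vm_guid = '204bde10-b5a8-42d4-8aa4-2c26727ab209';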

Version-Release number of selected component (if applicable):
rhvm-4.2.8.7-0.1.el7ev.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create test VM on an empty floating SD on RHV A
VM ID: 204bde10-b5a8-42d4-8aa4-2c26727ab209

2. Detach the floating SD; the VM is gone from RHV A

* But note the OVF is still in the DB (doesn't look right)
               vm_guid                | ovf_generation 
--------------------------------------+----------------
 204bde10-b5a8-42d4-8aa4-2c26727ab209 |              2

3. Check the OVFs on the storage; the VM is there as expected, so it can be imported on the other RHV:
# tar -tf 1f96147a-caac-42f8-a251-84416a8e35ae
info.json
204bde10-b5a8-42d4-8aa4-2c26727ab209.ovf
metadata.json

4. Import floating SD into RHV B

5. Import the VM into RHV B

6. Move the VM's disks to another SD (out of floating SD) in RHV B
* At this point we expect the OVF to be present only on the other SD and no longer on the floating SD.

7. Detach SD from RHV B

8. Check the OVFs on the floating storage; the VM is not there:
# tar -tf 1f96147a-caac-42f8-a251-84416a8e35ae
info.json
metadata.json

All good up to this point (except perhaps for the source RHV storing OVFs for VMs that do not exist on it)

9. At this point we import the floating SD back to RHV A
PROBLEMS:
-> The test VM shows up in the "VM Import" tab (its disk is not there; it was moved to another SD, and this storage did not contain the OVF for this VM)
-> Then, on the next OVF update, the engine writes the VM's OVF back to the SD, even though the VM does not exist on this RHV and its disk is not on the floating SD

10. Check the OVFs on the floating storage; the VM is there, but the disk is not:
# tar -tf 1f96147a-caac-42f8-a251-84416a8e35ae
info.json
metadata.json
204bde10-b5a8-42d4-8aa4-2c26727ab209.ovf

Actual results:
OVFs for VMs that are not on the SD but were previously imported are present on the floating SD.

Expected results:
No OVFs for VMs that are not on the SD
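
The "VM Import" tab is populated from the engine's unregistered-OVF bookkeeping, so the bogus entries of step 9 should also be visible in the database. A sketch only (column names are assumed; substitute the floating SD's ID):

-- Sketch: entities the re-attached floating SD offers under "VM Import".
-- Column names are assumed; substitute the floating SD's ID.
SELECT entity_guid, entity_name
  FROM unregistered_ovf_of_entities
 WHERE storage_domain_id = '<floating SD ID>';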

Comment 1 Germano Veit Michel 2019-05-02 06:04:55 UTC
Source RHV importing back the SD (step 9) - no VM OVF yet
2019-05-02 15:43:37,918+10 INFO  [org.ovirt.engine.core.utils.archivers.tar.TarInMemoryExport] (EE-ManagedThreadFactory-engine-Thread-1966) [78420b6c] Finish to fetch OVF files from tar file. The number of OVF entities are 0

Source RHV updating OVFs and writing the Ghost OVF happens here (step 10)
2019-05-02 15:45:31,970+10 INFO  [org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand] (EE-ManagedThreadFactory-engine-Thread-2013) [4a84a42b-1787-407e-b3c0-dc41a80e7267] FINISH, UploadStreamVDSCommand, log id: 3ce0ea06

Comment 3 Michal Skrivanek 2019-05-02 12:07:34 UTC
Tal, this sounds more Storage than Virt

Comment 4 Ryan Barry 2019-05-02 13:04:31 UTC
Shmuel, any thoughts here?

I'm not sure if we're intentionally leaving this in the DB on removal, but it should not be possible to generate an OVA without a matching disk -- no exceptions thrown?

Comment 5 Shmuel Melamud 2019-05-28 13:33:05 UTC
The record in vm_ovf_generations table should be deleted when the storage domain is detached, together with the corresponding VMs. It is a part of the Storage task, I think.
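
Roughly, the missing cleanup would amount to something like the following during the detach flow (an illustrative sketch only, not the actual engine code):

-- Sketch: drop OVF generation rows for VMs that no longer exist on this
-- engine (their vm_static rows were removed by the detach).
DELETE FROM vm_ovf_generations g
 WHERE NOT EXISTS (SELECT 1 FROM vm_static s WHERE s.vm_guid = g.vm_guid);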

Comment 9 Daniel Gur 2019-08-28 13:14:00 UTC
sync2jira

Comment 10 Daniel Gur 2019-08-28 13:18:15 UTC
sync2jira

Comment 13 Germano Veit Michel 2019-12-13 06:24:07 UTC
I did not have time to do the whole thing again on master, but I reproduced what we understand is the root cause of the problem on ovirt-engine-4.3.7.2-1.el7.noarch.

After detaching the SD that contains the VM disk, vm_ovf_generations still has an entry for the VM that is gone due to the detach:

engine=# select vm_guid from vm_ovf_generations where vm_guid = '5010dcc9-2573-469d-93c7-47be2cd5d7db';
               vm_guid                
--------------------------------------
 5010dcc9-2573-469d-93c7-47be2cd5d7db
(1 row)

engine=# select vm_name from vm_static where vm_guid = '5010dcc9-2573-469d-93c7-47be2cd5d7db';
 vm_name 
---------
(0 rows)
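
The two checks can be combined into one query that lists every leftover entry of this kind (a sketch):

-- Sketch: OVF generation entries that have no matching VM definition.
SELECT g.vm_guid, g.ovf_generation
  FROM vm_ovf_generations g
  LEFT JOIN vm_static s ON s.vm_guid = g.vm_guid
 WHERE s.vm_guid IS NULL;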

Comment 25 Arik 2022-09-27 14:12:32 UTC
Pavel, please set the doc text

Comment 26 Pavel Bar 2022-09-28 13:11:47 UTC
QE instructions:
The reproduction is described in the "Description of problem" by Germano and also in the Doc Text that I added.
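
Besides the UI flow, the database state can be sanity-checked after re-attaching the floating Storage Domain in the source RHV and waiting for an OVF update. Both queries below should return no rows (a sketch; the column names in the second query are assumed, substitute the floating SD's ID):

-- Sketch: no stale OVF generation rows for VMs removed by the detach.
SELECT g.vm_guid
  FROM vm_ovf_generations g
  LEFT JOIN vm_static s ON s.vm_guid = g.vm_guid
 WHERE s.vm_guid IS NULL;

-- Sketch: no leftover "VM Import" entries for the floating SD once the
-- VM and its disks live elsewhere (column names assumed).
SELECT entity_guid
  FROM unregistered_ovf_of_entities
 WHERE storage_domain_id = '<floating SD ID>';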

Comment 33 Evelina Shames 2022-10-18 12:51:18 UTC
Verified on ovirt-engine-4.5.3.1 with the flow described in the description/Doc Text.

Comment 37 errata-xmlrpc 2022-11-16 12:17:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: RHV Manager (ovirt-engine) [ovirt-4.5.3] bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:8502

