Bug 1573600 - Handle registering of a VM with snapshots containing memory disks correctly
Summary: Handle registering of a VM with snapshots containing memory disks correctly
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: ovirt-engine
Classification: oVirt
Component: BLL.Storage
Version: 4.2.2.3
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: medium
Target Milestone: ovirt-4.4.1
Target Release: 4.4.1.5
Assignee: shani
QA Contact: Ilan Zuckerman
URL:
Whiteboard:
Duplicates: 1150249 1828236
Depends On:
Blocks:
 
Reported: 2018-05-01 18:40 UTC by Tal Nisan
Modified: 2020-08-17 08:12 UTC (History)
7 users

Fixed In Version: ovirt-engine-4.4.1.5
Doc Type: Bug Fix
Doc Text:
Previously, importing a virtual machine (VM) from a snapshot that included the memory disk failed if you imported it to a storage domain that is different from the storage domain where the snapshot was created. This happened because the memory disk depended on the storage domain remaining unchanged. The current release fixes this issue. Registration of the VM with its memory disks succeeds. If the memory disk is not in the RHV Manager database, the VM creates a new one.
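The fixed behavior described in the Doc Text can be modeled with a short sketch: during registration, each memory disk referenced by the VM's snapshots is looked up in the engine database, and a new record is created when the disk is absent instead of failing the import. All names below are illustrative, not the actual ovirt-engine API.

```python
def register_memory_disks(snapshot_disk_ids, disk_db):
    """Hypothetical model: return the disk records used by the registered VM,
    creating placeholder records for memory disks absent from the database."""
    resolved = {}
    for disk_id in snapshot_disk_ids:
        if disk_id in disk_db:
            # Memory disk already known to the engine database: reuse it.
            resolved[disk_id] = disk_db[disk_id]
        else:
            # Memory disk missing from the database: create a new record
            # rather than aborting the registration.
            record = {"id": disk_id, "created": True}
            disk_db[disk_id] = record
            resolved[disk_id] = record
    return resolved
```

With this behavior, registering a VM whose snapshot references one known and one unknown memory disk succeeds, and the unknown disk ends up in the database.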
Clone Of:
Environment:
Last Closed: 2020-07-08 08:26:56 UTC
oVirt Team: Storage
pm-rhel: ovirt-4.4+
ylavi: exception+


Attachments
snap with memory verification (57.25 KB, image/png)
2020-07-02 10:31 UTC, Ilan Zuckerman


Links
System ID Private Priority Status Summary Last Updated
oVirt gerrit 109655 0 master MERGED core: handle registered VM memory disks 2021-02-14 13:11:17 UTC

Description Tal Nisan 2018-05-01 18:40:29 UTC
Description of problem:
The implementation of registering a VM containing snapshots with memory disks is broken: the parameters created for copying the memory always assume the source domain is the domain the VM is being registered from, which is not always the case.
This is not a regression; it used to work mainly because the backend logic places the memory disks in the same domain where most of the VM disks reside, but they are not guaranteed to be there, especially now that they can be moved to other domains.
This logic is based on the import from an export domain and should be changed when registering a VM from a data domain.
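The flaw described above can be sketched as a difference in how the copy parameters are built. The broken logic always used the registration domain as the copy source; the fix takes the source domain from each memory disk's own storage location. Function and field names here are hypothetical, chosen only to illustrate the contrast.

```python
def build_copy_params_broken(memory_disks, registration_domain):
    """Bug: assumes every memory disk lives on the registration domain."""
    return [{"disk": d["id"], "source_domain": registration_domain}
            for d in memory_disks]

def build_copy_params_fixed(memory_disks):
    """Fix: take the copy source from the domain that actually holds the disk."""
    return [{"disk": d["id"], "source_domain": d["storage_domain"]}
            for d in memory_disks]
```

For a memory disk that was moved to a different domain, the broken variant points the copy at the wrong source, while the fixed variant resolves the disk's real location.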

Comment 1 Tal Nisan 2018-05-21 09:48:11 UTC
*** Bug 1150249 has been marked as a duplicate of this bug. ***

Comment 2 Tal Nisan 2018-08-15 13:15:08 UTC
The change in this flow seems too complicated for 4.2.z and might cause more regressions than it is worth; moving to 4.3.

Comment 3 Sandro Bonazzola 2019-01-28 09:34:31 UTC
This bug has not been marked as blocker for oVirt 4.3.0.
Since we are releasing it tomorrow, January 29th, this bug has been re-targeted to 4.3.1.

Comment 5 shani 2020-06-17 08:44:09 UTC
*** Bug 1828236 has been marked as a duplicate of this bug. ***

Comment 7 Sandro Bonazzola 2020-06-25 11:25:05 UTC
This bug is in MODIFIED state and targets 4.4.3; shouldn't it be re-targeted to 4.4.1?

Comment 10 Ilan Zuckerman 2020-07-02 10:30:50 UTC
Verified on rhv-release-4.4.1-5-001.noarch according to these steps:

1. Create a blank VM with a disk
2. Run the VM
3. Create a snapshot with memory and power off the VM
4. Move one of the memory disks AND the VM disk to a different storage domain
5. Deactivate and detach the storage domain that holds the memory disks and the VM disk
6. Attach the storage domain back
7. Import the VM back into the environment

Expected:
The VM imported in step 7 should have a snapshot that includes memory.

Actual:
As expected
(see image attached)

Comment 11 Ilan Zuckerman 2020-07-02 10:31:20 UTC
Created attachment 1699613 [details]
snap with memory verification

Comment 12 Sandro Bonazzola 2020-07-08 08:26:56 UTC
This bugzilla is included in oVirt 4.4.1 release, published on July 8th 2020.

Since the problem described in this bug report should be resolved in oVirt 4.4.1 release, it has been closed with a resolution of CURRENT RELEASE.

If the solution does not work for you, please open a new bug report.

