Bug 2087916 - RHV Warm migration fails if VM has 2 disks
Summary: RHV Warm migration fails if VM has 2 disks
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Migration Toolkit for Virtualization
Classification: Red Hat
Component: Controller
Version: 2.3.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 2.3.2
Assignee: Matthew Arnold
QA Contact: Igor Braginsky
Docs Contact: Richard Hoch
URL:
Whiteboard: regression
Depends On:
Blocks:
 
Reported: 2022-05-18 14:24 UTC by Igor Braginsky
Modified: 2022-07-21 13:48 UTC
CC: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-07-21 13:48:39 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
Imported log pod (5.76 KB, text/plain)
2022-05-18 14:24 UTC, Igor Braginsky


Links
System ID Private Priority Status Summary Last Updated
Github konveyor forklift-controller pull 429 0 None open Bug 2087916: Request oVirt DataVolume of ActualSize if larger than ProvisionedSize. 2022-05-26 15:44:25 UTC
Red Hat Product Errata RHBA-2022:5679 0 None None None 2022-07-21 13:48:46 UTC

Description Igor Braginsky 2022-05-18 14:24:18 UTC
Created attachment 1880920 [details]
Imported log pod


Description of problem: Warm migration from RHV fails if a RHEL 8 VM has two disks, with both CEPH and NFS as target storage.
The incremental copy is created successfully, but the process fails after cutover is started.

Version-Release number of selected component (if applicable): 
OCP-4.11.0
CNV-4.11.0
MTV 2.3.1

How reproducible: 100%

Steps to Reproduce:
1. Create new warm plan
2. Select VM with 2 disks
3. Select target storage as CEPH or NFS
Important: The source VM must be running during the migration process, otherwise the bug does not reproduce!

Actual results:
1. The migration plan is stuck in progress
2. The CNV UI shows that the VM has a DataVolume error
3. The importer pod is in CrashLoopBackOff state; see the attached log

Expected results:
The VM should be migrated as usual.

Additional info:
1. Warm migration of a VM with 2 disks from VMware PASSED for both NFS and Ceph targets.
2. Warm migration of a VM with 1 disk from RHV PASSED for both NFS and Ceph targets.
3. Cold migration of a VM with 2 disks from RHV PASSED for the Ceph target.

Comment 1 Ilanit Stein 2022-05-27 06:06:18 UTC
Matthew Arnold:

"This bug was always there, it's just kind of unusual to hit. 
It happens when a snapshot's Actual Size is bigger than its Virtual Size

Work around - It looks like expanding the source disk should work, it's kind of an annoying workaround though.
But as long as the Virtual Size is bigger than the Actual Size, you shouldn't hit this bug.
Also, it might also help to create the second disk starting at something bigger than 1GB."

Comment 2 Igor Braginsky 2022-06-29 15:32:48 UTC
Verified this bug on MTV 2.3.2-7/iib:261342. The VM migration was successful, and the VM looks good after the migration completed.

Comment 5 errata-xmlrpc 2022-07-21 13:48:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (MTV 2.3.2 Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:5679

