Bug 1845604 - [v2v] RHV to CNV VM import: Prevent a second vm-import from starting.
Summary: [v2v] RHV to CNV VM import: Prevent a second vm-import from starting.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: V2V
Version: 2.4.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 2.4.0
Assignee: Ondra Machacek
QA Contact: Ilanit Stein
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-06-09 15:56 UTC by Ilanit Stein
Modified: 2020-07-28 19:10 UTC (History)
5 users

Fixed In Version: v2.4.0-17
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-28 19:10:33 UTC
Target Upstream Version:
Embargoed:


Attachments
vm-import-controller.log (1.08 MB, text/plain)
2020-06-09 15:57 UTC, Ilanit Stein
Step1 (47.76 KB, image/png)
2020-06-09 16:02 UTC, Ilanit Stein
Step2 (40.13 KB, image/png)
2020-06-09 16:03 UTC, Ilanit Stein
Step3 (43.10 KB, image/png)
2020-06-09 16:04 UTC, Ilanit Stein


Links
System ID Private Priority Status Summary Last Updated
Github kubevirt vm-import-operator pull 288 0 None closed Check disk status before import 2020-07-10 04:06:53 UTC
Red Hat Product Errata RHSA-2020:3194 0 None None None 2020-07-28 19:10:48 UTC

Description Ilanit Stein 2020-06-09 15:56:09 UTC
Description of problem:
1. Start VM import for a VM.
   -> VM import is in progress, and the VM is listed in the VMs view.
2. Start a second VM import while the first import is in progress.
   -> The second VM import fails immediately with an import error saying the
      disk is in a locked state - that's expected.
3. After several seconds, both VM imports end up in the same import error state:
DataVolumeCreationFailed: Error while importing disk image: v2vmigrationvm0-1-d3d56dd1-e890-44a6-b7b3-bff9f8cafe40
and both are listed as VM import resources.

Screenshots of all 3 steps are attached.
  
Please block a second VM import for the same source VM.

Version-Release number of selected component (if applicable):
CNV-2.4

How reproducible:
100%

Comment 1 Ilanit Stein 2020-06-09 15:57:12 UTC
Created attachment 1696348 [details]
vm-import-controller.log

Comment 2 Ilanit Stein 2020-06-09 16:02:29 UTC
Created attachment 1696349 [details]
Step1

Comment 3 Ilanit Stein 2020-06-09 16:03:11 UTC
Created attachment 1696350 [details]
Step2

Comment 4 Ilanit Stein 2020-06-09 16:04:11 UTC
Created attachment 1696351 [details]
Step3

Comment 5 Piotr Kliczewski 2020-06-10 10:48:59 UTC
Daniel, do you have any ideas on how to handle this?

Comment 6 Ondra Machacek 2020-06-10 11:14:09 UTC
Both VMs end in a failed state because there is not enough storage in your default storage class. If your storage had enough space, the first VM import would succeed.

But we have a race condition: the disk status validation may succeed for both vm-imports, so both imports are executed instead of one being stopped by validation. The first import pod succeeds, but the second ends up in CrashLoopBackOff, and we fail that import after some time. So we need to make sure we check the state of the disk right before we execute the disk import.
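The re-check described above (also the subject of the linked PR, "Check disk status before import") can be sketched as follows. This is a minimal illustration in Go; the type, constant, and function names are invented for the sketch and are not the actual vm-import-operator or oVirt SDK identifiers:

```go
package main

import (
	"errors"
	"fmt"
)

// DiskStatus mirrors the oVirt disk states relevant here
// (illustrative names, not the real SDK constants).
type DiskStatus string

const (
	DiskStatusOK     DiskStatus = "ok"
	DiskStatusLocked DiskStatus = "locked"
)

// Disk is a minimal stand-in for an oVirt disk object.
type Disk struct {
	ID     string
	Status DiskStatus
}

var ErrDiskLocked = errors.New("disk is locked by another operation")

// checkDisksBeforeImport re-validates disk status immediately before
// the import is executed, closing the window in which an earlier
// validation passed but a concurrent import has since locked the disk.
func checkDisksBeforeImport(disks []Disk) error {
	for _, d := range disks {
		if d.Status != DiskStatusOK {
			return fmt.Errorf("disk %s: %w", d.ID, ErrDiskLocked)
		}
	}
	return nil
}

func main() {
	// A disk locked by the first import blocks the second one.
	disks := []Disk{{ID: "d3d56dd1", Status: DiskStatusLocked}}
	if err := checkDisksBeforeImport(disks); err != nil {
		fmt.Println("import blocked:", err)
	}
}
```

The point is simply that the status check moves from the one-time validation phase to the moment just before the import pod is created, so a disk locked by a concurrent import is caught.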

Comment 7 Ilanit Stein 2020-07-12 20:57:16 UTC
On a newly deployed CNV environment,
VM import from RHV to CNV using Ceph as the target storage class worked fine.

I tested the flow of importing the same VM twice, while the first VM's disk was still being copied,
using Ceph for both imports. The source VM was RHEL-7.
The result was that the second VM import failed on the disk lock.
The first VM import completed successfully, without interference from the second VM import.
When the first VM import finished, the second VM import started and also completed successfully.

I then tested the same flow with NFS. It had the same result as with Ceph, but the import was much slower.

These two tests verify the bug in the subject, for Ceph and NFS.
The resulting behavior differs a bit from what was discussed as the expected behavior -
the second VM import is triggered once the disk is free,
rather than staying failed on the disk lock forever, which seems like even better behavior IMO.

For "standard" Cinder storage, issues were encountered - Bug 1856111 - [v2v][RHV to CNV VM import] Problems encountered when picking "standard" Cinder.
However, as the supported storage types are Ceph and NFS, moving the bug to verified.

Comment 10 errata-xmlrpc 2020-07-28 19:10:33 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3194

