Bug 1885124

Summary: [v2v][UI][VM import from RHV to CNV] A 2nd VM import runs over 1st VM import of the same source VM
Product: OpenShift Container Platform
Reporter: Ilanit Stein <istein>
Component: Console Kubevirt Plugin
Assignee: Tomas Jelinek <tjelinek>
Status: CLOSED DUPLICATE
QA Contact: Ilanit Stein <istein>
Severity: high
Docs Contact:
Priority: high
Version: 4.6
CC: aos-bugs, yzamir
Target Milestone: ---
Target Release: 4.7.0
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-12-16 13:10:33 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
1st-vm-import yaml
vm import controller log

Description Ilanit Stein 2020-10-05 07:00:18 UTC
Description of problem:
When running two VM imports from RHV to CNV for the same source VM, using different VM import resource names, only the 2nd VM import is displayed in the UI, and it "runs over" the 1st VM import.
The 2nd VM import fails with:

Import error (RHV)
cirros-import could not be imported.
DataVolumeCreationFailed: Error while importing disk image: . VirtualMachine.kubevirt.io "cirros-import" not found

Cancelling the 2nd VM import from the UI makes the 1st VM import displayed in the UI again, which is in a failing status.

1st VM import status:
status:
  conditions:
  - lastHeartbeatTime: "2020-10-05T06:37:29Z"
    lastTransitionTime: "2020-10-05T06:37:29Z"
    message: Validation completed successfully
    reason: ValidationCompleted
    status: "True"
    type: Valid
  - lastHeartbeatTime: "2020-10-05T06:37:29Z"
    lastTransitionTime: "2020-10-05T06:37:29Z"
    message: 'VM specifies IO Threads: 1, VM has NUMA tune mode secified: interleave,
      Interface b7fb2701-0bae-4005-8cd4-629309eaa631 uses profile with a network filter
      with ID: d2370ab4-fee3-11e9-a310-8c1645ce738e.'
    reason: MappingRulesVerificationReportedWarnings
    status: "True"
    type: MappingRulesVerified
  - lastHeartbeatTime: "2020-10-05T06:40:01Z"
    lastTransitionTime: "2020-10-05T06:40:01Z"
    message: 'Error while importing disk image: cirros-import-a1b9d00c-1872-4875-871d-5b2479194884.
      pod CrashLoopBackoff restart exceeded'
    reason: ProcessingFailed
    status: "False"
    type: Processing
  - lastHeartbeatTime: "2020-10-05T06:40:01Z"
    lastTransitionTime: "2020-10-05T06:40:01Z"
    message: 'Error while importing disk image: cirros-import-a1b9d00c-1872-4875-871d-5b2479194884.
      pod CrashLoopBackoff restart exceeded'
    reason: DataVolumeCreationFailed
    status: "False"
    type: Succeeded
  dataVolumes:
  - name: cirros-import-a1b9d00c-1872-4875-871d-5b2479194884
  targetVmName: cirros-import
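
A status like the one above can be pulled directly from the cluster. This is a sketch, not part of the original report; it assumes the lowercase resource name of the VirtualMachineImport CRD and the import name used in the steps below:

oc get virtualmachineimport example-virtualmachineimport -n default -o yaml

# Or only the message of the terminal Succeeded condition:
oc get virtualmachineimport example-virtualmachineimport -n default \
  -o jsonpath='{.status.conditions[?(@.type=="Succeeded")].message}'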


Version-Release number of selected component (if applicable):
CNV-2.5

Expected results:
Both VM imports should be displayed.
The 1st import should succeed.
The 2nd import should first fail on the locked source disk, and once the disk lock is released, it should complete successfully.

Steps:
1. Create a secret with oVirt credentials:
cat <<EOF | oc create -f -
---
apiVersion: v1
kind: Secret
metadata:
  name: blue-secret
  namespace: default
type: Opaque
stringData:
  ovirt: |
    apiUrl: "https://<RHV FQDN>/ovirt-engine/api"
    username: <username>
    password: <password>
    caCert: |
      -----BEGIN CERTIFICATE-----
...
      -----END CERTIFICATE-----
EOF
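
As a sanity check (not part of the original steps), confirm the secret exists before continuing:

oc get secret blue-secret -n default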
 
2. Create oVirt resource mappings:
cat <<EOF | oc create -f -
apiVersion: v2v.kubevirt.io/v1beta1
kind: ResourceMapping
metadata:
  name: example-resourcemappings
  namespace: default
spec:
  ovirt:
    networkMappings:
      - source:
          name: ovirtmgmt/ovirtmgmt
        target:
          name: pod
        type: pod
    storageMappings:
      - source:
          name: v2v-fc
        target:
          name: ocs-storagecluster-ceph-rbd
        volumeMode: Block
EOF
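
Optionally (not in the original steps), verify the mapping was created; the lowercase resource name is assumed from the ResourceMapping CRD:

oc get resourcemapping example-resourcemappings -n default -o yaml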
 

3. Create the VM Import resource:
cat <<EOF | oc create -f -
apiVersion: v2v.kubevirt.io/v1beta1
kind: VirtualMachineImport
metadata:
  name: example-virtualmachineimport
  namespace: default
spec:
  providerCredentialsSecret:
    name: blue-secret
    namespace: default # optional, if not specified, use CR's namespace
  resourceMapping:
    name: example-resourcemappings
    namespace: default
  targetVmName: cirros-import
  startVm: false
  source:
    ovirt:
      vm:
        id: c3da5646-29a5-43c7-839a-d46480eae0c4
EOF
4. Run this step quickly after step 3.
It is the same as step 3, with the only difference being a different name for the VM import resource:

metadata:
  name: example-virtualmachineimport1 <===
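
For completeness, the full second VirtualMachineImport would look like this (identical to step 3 apart from the resource name):

cat <<EOF | oc create -f -
apiVersion: v2v.kubevirt.io/v1beta1
kind: VirtualMachineImport
metadata:
  name: example-virtualmachineimport1
  namespace: default
spec:
  providerCredentialsSecret:
    name: blue-secret
    namespace: default
  resourceMapping:
    name: example-resourcemappings
    namespace: default
  targetVmName: cirros-import
  startVm: false
  source:
    ovirt:
      vm:
        id: c3da5646-29a5-43c7-839a-d46480eae0c4
EOF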

Comment 1 Yaacov Zamir 2020-10-05 07:14:44 UTC
Note:
https://bugzilla.redhat.com/show_bug.cgi?id=1884982 - vmware import runs over

setting target to 4.7

@Ilanit, will we need to backport to 4.6.z?

Comment 2 Ilanit Stein 2020-10-05 07:18:51 UTC
Created attachment 1718918 [details]
1st-vm-import yaml

Comment 3 Ilanit Stein 2020-10-05 07:19:20 UTC
Created attachment 1718919 [details]
vm import controller log

Comment 4 Yaacov Zamir 2020-10-05 07:20:19 UTC
Looking at the comments in bug 1884982 - we will backport this one to 4.6.z once it has a verified fix.

Comment 5 Yaacov Zamir 2020-10-05 07:25:42 UTC
Does the operator allow using the same "targetVmName" in more than one running VirtualMachineImport?

@Ilanit, this sounds like an operator bug; do we have a bug on the operator side?
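
(A quick way to check this from the CLI - a sketch, not part of the original comment; the field path comes from the CR in the steps above:)

oc get virtualmachineimport -A -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,TARGET:.spec.targetVmName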

Comment 6 Yaacov Zamir 2020-10-05 08:34:17 UTC
Note:
It sounds like targetVmName should not be allowed to be an existing VM name, or an existing targetVmName from another import?

Comment 7 Ilanit Stein 2020-10-06 16:53:48 UTC
Yaacov, thanks.
We have this bug on the operator side: Bug 1885226 - [v2v] [api] VM import RHV to CNV Import deploying a 2nd vmimport with the same targetVmName should not be allowed.

Once this operator bug is solved, I guess this UI bug will not reproduce.

Comment 8 Yaacov Zamir 2020-10-14 06:45:07 UTC
match priority to severity

Comment 9 Yaacov Zamir 2020-12-16 13:10:33 UTC

*** This bug has been marked as a duplicate of bug 1885226 ***

Comment 10 Yaacov Zamir 2020-12-16 13:12:16 UTC
*** Bug 1884982 has been marked as a duplicate of this bug. ***