Description of problem:
When running a VM import from RHV to CNV for the same VM, using a different VM import resource name, the 2nd VM import is displayed in the UI and "runs over" the 1st VM import.

The 2nd VM import fails with:
  Import error (RHV)
  cirros-import could not be imported.
  DataVolumeCreationFailed: Error while importing disk image: . VirtualMachine.kubevirt.io "cirros-import" not found

Cancelling the 2nd VM import from the UI makes the 1st VM import displayed in the UI again, now in a failing status.

1st VM import status:

status:
  conditions:
  - lastHeartbeatTime: "2020-10-05T06:37:29Z"
    lastTransitionTime: "2020-10-05T06:37:29Z"
    message: Validation completed successfully
    reason: ValidationCompleted
    status: "True"
    type: Valid
  - lastHeartbeatTime: "2020-10-05T06:37:29Z"
    lastTransitionTime: "2020-10-05T06:37:29Z"
    message: 'VM specifies IO Threads: 1, VM has NUMA tune mode secified: interleave,
      Interface b7fb2701-0bae-4005-8cd4-629309eaa631 uses profile with a network filter
      with ID: d2370ab4-fee3-11e9-a310-8c1645ce738e.'
    reason: MappingRulesVerificationReportedWarnings
    status: "True"
    type: MappingRulesVerified
  - lastHeartbeatTime: "2020-10-05T06:40:01Z"
    lastTransitionTime: "2020-10-05T06:40:01Z"
    message: 'Error while importing disk image: cirros-import-a1b9d00c-1872-4875-871d-5b2479194884.
      pod CrashLoopBackoff restart exceeded'
    reason: ProcessingFailed
    status: "False"
    type: Processing
  - lastHeartbeatTime: "2020-10-05T06:40:01Z"
    lastTransitionTime: "2020-10-05T06:40:01Z"
    message: 'Error while importing disk image: cirros-import-a1b9d00c-1872-4875-871d-5b2479194884.
      pod CrashLoopBackoff restart exceeded'
    reason: DataVolumeCreationFailed
    status: "False"
    type: Succeeded
  dataVolumes:
  - name: cirros-import-a1b9d00c-1872-4875-871d-5b2479194884
  targetVmName: cirros-import

Version-Release number of selected component (if applicable):
CNV-2.5

Expected results:
Both VM imports should be displayed.
1st import should succeed.
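For anyone triaging a similar report: the `Succeeded` condition in the status block above carries the terminal failure reason. A minimal sketch of pulling it out (plain Python, with a hand-copied subset of the conditions shown above; not an operator API):

```python
# Hand-copied subset of the 1st VM import's status.conditions shown above.
conditions = [
    {"type": "Valid", "status": "True", "reason": "ValidationCompleted"},
    {"type": "MappingRulesVerified", "status": "True",
     "reason": "MappingRulesVerificationReportedWarnings"},
    {"type": "Processing", "status": "False", "reason": "ProcessingFailed"},
    {"type": "Succeeded", "status": "False", "reason": "DataVolumeCreationFailed"},
]

def terminal_reason(conditions):
    """Return the reason on a failed Succeeded condition, or None."""
    for cond in conditions:
        if cond["type"] == "Succeeded" and cond["status"] == "False":
            return cond["reason"]
    return None

print(terminal_reason(conditions))  # DataVolumeCreationFailed
```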
2nd import should first fail on the locked disk, and once the disk lock is released, the import should end successfully.

Steps to Reproduce:
1. Create a secret with oVirt credentials:

cat <<EOF | oc create -f -
---
apiVersion: v1
kind: Secret
metadata:
  name: blue-secret
  namespace: default
type: Opaque
stringData:
  ovirt: |
    apiUrl: "https://<RHV FQDN>/ovirt-engine/api"
    username: <username>
    password: <password>
    caCert: |
      -----BEGIN CERTIFICATE-----
      ...
      -----END CERTIFICATE-----
EOF

2. Create oVirt resource mappings:

cat <<EOF | oc create -f -
apiVersion: v2v.kubevirt.io/v1beta1
kind: ResourceMapping
metadata:
  name: example-resourcemappings
  namespace: default
spec:
  ovirt:
    networkMappings:
    - source:
        name: ovirtmgmt/ovirtmgmt
      target:
        name: pod
      type: pod
    storageMappings:
    - source:
        name: v2v-fc
      target:
        name: ocs-storagecluster-ceph-rbd
      volumeMode: Block
EOF

3. Create a VM Import resource:

cat <<EOF | oc create -f -
apiVersion: v2v.kubevirt.io/v1beta1
kind: VirtualMachineImport
metadata:
  name: example-virtualmachineimport
  namespace: default
spec:
  providerCredentialsSecret:
    name: blue-secret
    namespace: default # optional, if not specified, use CR's namespace
  resourceMapping:
    name: example-resourcemappings
    namespace: default
  targetVmName: cirros-import
  startVm: false
  source:
    ovirt:
      vm:
        id: c3da5646-29a5-43c7-839a-d46480eae0c4
EOF

4. Quickly after step 3, repeat step 3 with one difference: a different name for the VM import resource:

metadata:
  name: example-virtualmachineimport1 <===
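The two CRs in steps 3 and 4 differ only in metadata.name; everything else, including targetVmName, is identical, which is what triggers the collision. A small sketch making that explicit (plain Python dicts standing in for the YAML above; an illustration, not how the operator compares CRs):

```python
import copy

# Step 3's CR, reduced to the fields relevant to the collision.
first = {
    "apiVersion": "v2v.kubevirt.io/v1beta1",
    "kind": "VirtualMachineImport",
    "metadata": {"name": "example-virtualmachineimport", "namespace": "default"},
    "spec": {"targetVmName": "cirros-import", "startVm": False},
}

# Step 4: identical spec, different resource name. Both imports end up
# targeting the same VM name, "cirros-import".
second = copy.deepcopy(first)
second["metadata"]["name"] = "example-virtualmachineimport1"

assert first["spec"] == second["spec"]
assert first["metadata"]["name"] != second["metadata"]["name"]
```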
Note: https://bugzilla.redhat.com/show_bug.cgi?id=1884982 - VMware import runs over. Setting target to 4.7.
@Ilanit, will we need to backport to 4.6.z?
Created attachment 1718918 [details] 1st-vm-import yaml
Created attachment 1718919 [details] vm import controller log
Looking at the comments in bug 1884982, we will backport this one to 4.6.z once it has a verified fix.
Does the operator allow using the same "targetVmName" in more than one running VirtualMachineImport? @Ilanit, this sounds like an operator bug; do we have a bug on the operator side?
Note: it sounds like targetVmName should be rejected both when it matches an existing VM name and when it matches the targetVmName of another running import?
Yaacov, thanks. We have this bug on the operator side: Bug 1885226 - [v2v] [api] VM import RHV to CNV Import deploying a 2nd vmimport with the same targetVmName should not be allowed. Once this operator bug is solved, I guess this UI bug will not reproduce.
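The operator-side fix tracked in bug 1885226 amounts to refusing a new VirtualMachineImport whose targetVmName collides with an existing VM or with another in-flight import. A minimal sketch of that kind of check (the function name and data shapes here are illustrative, not the operator's actual API):

```python
def reject_duplicate_target(new_target, existing_vm_names, inflight_imports):
    """Illustrative admission check: return an error string if the new
    targetVmName collides with an existing VM or another import's target,
    or None if the import can be admitted."""
    if new_target in existing_vm_names:
        return f'targetVmName "{new_target}" already names an existing VM'
    for imp in inflight_imports:
        if imp["targetVmName"] == new_target:
            return (f'targetVmName "{new_target}" is already used by '
                    f'import "{imp["name"]}"')
    return None  # no collision, admit the import

# The scenario from this report: a 2nd import reusing cirros-import.
err = reject_duplicate_target(
    "cirros-import",
    existing_vm_names=set(),
    inflight_imports=[{"name": "example-virtualmachineimport",
                       "targetVmName": "cirros-import"}],
)
print(err)
```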
Matching priority to severity.
*** This bug has been marked as a duplicate of bug 1885226 ***
*** Bug 1884982 has been marked as a duplicate of this bug. ***