Description of problem:
Import a VM that has 2 nics, both on ovirtmgmt/ovirtmgmt, on the RHV side. In the UI VM import wizard, map the 2 networks as follows (attachment "network map setting screenshot"):
nic1: Name: ovn-kubernetes1, Type: bridge (set by default and cannot be changed)
nic2: Pod Networking, Type: masquerade (set by default and cannot be changed)
The VM import ends up setting network ovn-kubernetes1 on both nics (attachment "network map outcome screenshot").

Version-Release number of selected component (if applicable):
CNV-2.4

How reproducible:
Always; the same behavior was repeated on 3 trials.
Created attachment 1698578 [details] "network map setting screenshot"
Created attachment 1698580 [details] "network map outcome screenshot"
This was used to create the new network:

$ oc apply -f second_network.yaml

second_network.yaml:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: ovn-kubernetes1
spec:
  config: '{
      "cniVersion": "0.3.1",
      "type": "cnv-bridge",
      "bridge": "br1"
    }'
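The created definition can be checked with a standard oc query (a usage note; add -n <namespace> if it was not created in the current project):

$ oc get network-attachment-definitions ovn-kubernetes1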
Created attachment 1698581 [details] VM yaml
Created attachment 1698583 [details] vm-import-controller.log
Please also provide the VM import CR YAML that was created by the wizard.
Created attachment 1699028 [details] vm import cr yaml
Pod network is not part of the mapping:

networkMappings:
  - source:
      id: 324d9357-4d0b-41c0-a28a-8aaae904b3ae
    target:
      name: ovn-kubernetes1
    type: multus
The source VM defines two network interfaces (nic1 and nic2), both of which use the same vnic profile (ovirtmgmt/ovirtmgmt). The network resource mapping translates each source nic's vnic profile to a target network, so if two nics use the same vnic profile, they will be mapped to the same target network. In this case:
1. Although the UI shows a mapping to a pod network for nic2, it does not add that mapping to the import YAML.
2. The multus mapping for nic1 is present in the YAML, and because both nic1 and nic2 use the same vnic profile, they are both mapped to the multus network.
3. Even if the UI did put the mapping for nic2 into the import YAML, it would not work either: the same vnic profile id would be mapped to two different targets.

Please re-test with two nics that use different vnic profiles; a sketch of the resulting mapping follows.
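For illustration, a mapping with two distinct vnic profiles would look roughly like this. The second profile id is hypothetical, and the pod mapping is assumed to need only a type, with no target name:

networkMappings:
  - source:
      id: 324d9357-4d0b-41c0-a28a-8aaae904b3ae   # vnic profile of nic1
    target:
      name: ovn-kubernetes1
    type: multus
  - source:
      id: 00000000-0000-0000-0000-000000000000   # hypothetical vnic profile of nic2
    type: pod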
Imported a VM with 2 nics on 2 different vnic profiles via the UI:
- mapped network1 to the new network
- mapped network2 to pod

The VM import worked fine. The VM created on CNV had these networks, exactly as set in the wizard:
nic2: ovn-kubernetes1, bridge
nic1: Pod Networking, masquerade
*** Bug 1850509 has been marked as a duplicate of this bug. ***
The conclusion for the UI:
- The UI should detect when nics share the same vnic profile and react to that.
- Setting the pod network should not be possible for such nics; only the same multus network should be allowed for all of them.
@Filip, please create a UI bug to track the changes from comment #12.
Validation has been added on the back-end side that blocks any VM import with more than one network interface mapped to a pod network (whether that was specified explicitly or the nics share the same vnic profile).
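For illustration, a mapping that this validation would reject looks roughly like the following; both profile ids are hypothetical, and the pod mapping format is assumed to match the sketch in comment #9:

networkMappings:
  - source:
      id: 11111111-1111-1111-1111-111111111111   # hypothetical vnic profile of nic1
    type: pod
  - source:
      id: 22222222-2222-2222-2222-222222222222   # hypothetical vnic profile of nic2
    type: pod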
@Piotr, done: https://bugzilla.redhat.com/show_bug.cgi?id=1852530 and https://bugzilla.redhat.com/show_bug.cgi?id=1852473
Fixed in https://github.com/kubevirt/vm-import-operator/pull/312
Verified on CNV-2.4 from July 07 2020. For 2 source networks mapped to the target pod network, got this import error:

"The virtual machine could not be imported. VMCreationFailed: Error while creating virtual machine default/vm-istein: admission webhook "virtualmachine-validator.kubevirt.io" denied the request: more than one interface is connected to a pod network in spec.template.spec.interfaces"

@Jakub, is there a way to remove the "in spec.template.spec.interfaces" part from this error message?
Ilanit, the part of the message after "VMCreationFailed: Error while creating virtual machine default/vm-istein:" comes from KubeVirt and is added by generic code that handles VM creation errors. There are many other cases where the VM CR path appears in the message, and it can be helpful for detecting and solving environment issues or bugs.
@Jakub, OK, thanks for explaining.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2020:3194