Description of problem:
VM import from VMware to CNV via API (vmio), with mapping:
* VM disk #1 to Ceph-RBD/Block
* VM disk #2 to NFS

Result:
Disk #1 is mapped to Ceph-RBD/Filesystem
Disk #2 is mapped to NFS

Problem: Disk #1 was supposed to be set to volumeMode: Block, but was mapped to volumeMode: Filesystem.

How the VM import was run via API:

Secret:
-------
cat <<EOF | oc create -f -
---
apiVersion: v1
kind: Secret
metadata:
  name: vmw-secret
type: Opaque
stringData:
  vmware: |-
    # API URL of the vCenter or ESXi host
    apiUrl: "https://<VMware IP address>/sdk"
    # Username provided in the format of username@domain.
    username: <username>
    password: <password>
    # The certificate thumbprint of the vCenter or ESXi host, in colon-separated hexadecimal octets.
    thumbprint: 31:...:30
EOF

External mapping
----------------
cat <<EOF | oc create -f -
apiVersion: v2v.kubevirt.io/v1beta1
kind: ResourceMapping
metadata:
  name: example-vmware-resourcemappings
  namespace: default
spec:
  vmware:
    networkMappings:
      - source:
          name: VM Network # map network name to network attachment definition
        target:
          name: pod
        type: pod
    storageMappings:
      - source:
          id: datastore-12
        target:
          name: nfs
EOF

VM import create:
-----------------
cat <<EOF | oc create -f -
apiVersion: v2v.kubevirt.io/v1beta1
kind: VirtualMachineImport
metadata:
  name: vmware-import-1
  namespace: default
spec:
  providerCredentialsSecret:
    name: vmw-secret
    namespace: default # optional; if not specified, the CR's namespace is used
  resourceMapping:
    name: example-vmware-resourcemappings
    namespace: default
  targetVmName: vmw-import-1
  startVm: false
  source:
    vmware:
      vm:
        id: 42036ba6-3d51-e023-3ae3-63ba3352bbd7
      mappings:
        diskMappings:
          - source:
              id: 401-2000
            target:
              name: ocs-storagecluster-ceph-rbd
            volumeMode: Block   # <==== Set volumeMode to Block
          - source:
              id: 401-2001
            target:
              name: nfs
EOF

Version-Release number of selected component (if applicable):
CNV-2.5 (from Oct 11)
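For reference, the volume mode the import actually produced can be inspected on the created PVCs with a standard `oc` query (namespace here matches the import above; a missing VOLUMEMODE value means the PVC defaulted to Filesystem):

```
oc get pvc -n default \
  -o custom-columns=NAME:.metadata.name,STORAGECLASS:.spec.storageClassName,VOLUMEMODE:.spec.volumeMode
```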
Created attachment 1722611 [details] vmimport.v2v.kubevirt.log
Created attachment 1722612 [details] vm-import-controller.log
Created attachment 1722613 [details] oc_describe_pvc
Created attachment 1722614 [details] oc_get_pvc
Created attachment 1722615 [details] cdi-deployment.log
Currently, the AccessMode is always ReadWriteOnce and the VolumeMode is not set, which means it defaults to Filesystem. This is the lowest-common-denominator configuration that will always succeed. The AccessMode is currently decided based on the source VM's ability to live migrate. In my opinion, it should be ReadWriteMany for all storage providers that support it. This brings us to the question of maintaining a feature matrix in VMIO. That has not been decided yet, but I'm in favor of this solution, and with each release we'll need to revisit that matrix to ensure it's aligned with Kubernetes / KubeVirt / CDI capabilities.

The VolumeMode is Filesystem by default because it works with all storage backends; that's a Kubernetes choice. However, block storage backends allow using block mode (since k8s 1.9) to avoid the filesystem overhead. Again, the storage provider doesn't tell us whether the backend is filesystem or block, so in theory a user could request block mode even on an NFS storage class and the migration would fail. And I'll pull out the feature matrix card again :)

Because the solution has not been fleshed out, we can't commit to implementing it before CNV 2.6.0. So, I'll set the Target Release to 2.6.0.
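For context, the difference comes down to the standard Kubernetes PVC fields that the import would ultimately need to set. A minimal sketch of what the block-mode claim for disk #1 would look like (the PVC name and size are illustrative, not taken from the logs):

```
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vmw-import-1-disk1   # illustrative name
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce          # currently always RWO; could be RWX where supported
  volumeMode: Block          # defaults to Filesystem when omitted
  storageClassName: ocs-storagecluster-ceph-rbd
  resources:
    requests:
      storage: 10Gi          # illustrative size
```

As noted above, nothing in the storage class itself tells VMIO whether the backend can honor volumeMode: Block, which is why requesting it against an NFS-backed class would simply fail at provisioning time.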
Implemented in https://github.com/kubevirt/vm-import-operator/pull/438
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Virtualization 2.6.0 security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:0799