Description of problem:
VM import from RHV to CNV using Ceph block-based storage (ocs-storagecluster-ceph-rbd) fails. The importer pod stays pending with the event:
"Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce] are supported"

The problem seems to be that the DV created by the import is missing 'Volume Mode: Block'. For a VM created directly on this Ceph storage, 'Volume Mode: Block' is present. Please add support for Ceph block-based storage.

Attached:
"successful_vm_dv" - DV for a VM that was created successfully on the Ceph storage.
"pending_dv" - DV for the "pending" VM import.

Version-Release number of selected component (if applicable):
CNV-2.4
Created attachment 1701447 [details] successful_vm_dv
Created attachment 1701448 [details] pending_dv.yaml
$ oc describe pvc cloud-init-vm-3910103b-bedd-453b-a4aa-a5ec2ecac55d
Name:          cloud-init-vm-3910103b-bedd-453b-a4aa-a5ec2ecac55d
Namespace:     default
StorageClass:  ocs-storagecluster-ceph-rbd
Status:        Pending
Volume:
Labels:        app=containerized-data-importer
Annotations:   cdi.kubevirt.io/storage.import.certConfigMap: vmimport.v2v.kubevirt.ioc2xxj
               cdi.kubevirt.io/storage.import.diskId: 3910103b-bedd-453b-a4aa-a5ec2ecac55d
               cdi.kubevirt.io/storage.import.endpoint: https://jenkins-vm-10.lab.eng.tlv2.redhat.com/ovirt-engine/api
               cdi.kubevirt.io/storage.import.importPodName: importer-cloud-init-vm-3910103b-bedd-453b-a4aa-a5ec2ecac55d
               cdi.kubevirt.io/storage.import.secretName: vmimport.v2v.kubevirt.iox749x
               cdi.kubevirt.io/storage.import.source: imageio
               cdi.kubevirt.io/storage.pod.phase: Pending
               cdi.kubevirt.io/storage.pod.restarts: 0
               volume.beta.kubernetes.io/storage-provisioner: openshift-storage.rbd.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    importer-cloud-init-vm-3910103b-bedd-453b-a4aa-a5ec2ecac55d
Events:
  Type     Reason                Age                  From                                                                                                                     Message
  ----     ------                ----                 ----                                                                                                                     -------
  Warning  ProvisioningFailed    93m                  openshift-storage.rbd.csi.ceph.com_csi-rbdplugin-provisioner-5794d4754b-j9xm7_736d99b3-4c72-43cf-8e66-628da2c53f6c      failed to provision volume with StorageClass "ocs-storagecluster-ceph-rbd": rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Warning  ProvisioningFailed    60m (x15 over 93m)   openshift-storage.rbd.csi.ceph.com_csi-rbdplugin-provisioner-5794d4754b-j9xm7_736d99b3-4c72-43cf-8e66-628da2c53f6c      failed to provision volume with StorageClass "ocs-storagecluster-ceph-rbd": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-0e2924f1-6df0-4f1f-8676-64d36c126a62 already exists
  Normal   ExternalProvisioning  83s (x390 over 96m)  persistentvolume-controller                                                                                              waiting for a volume to be created, either by external provisioner "openshift-storage.rbd.csi.ceph.com" or manually created by system administrator
  Normal   Provisioning          21s (x28 over 96m)   openshift-storage.rbd.csi.ceph.com_csi-rbdplugin-provisioner-5794d4754b-j9xm7_736d99b3-4c72-43cf-8e66-628da2c53f6c      External provisioner is provisioning volume for claim "default/cloud-init-vm-3910103b-bedd-453b-a4aa-a5ec2ecac55d"
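For comparison, a DataVolume that provisions correctly on ocs-storagecluster-ceph-rbd carries volumeMode: Block in its PVC spec (see the attached successful_vm_dv). A minimal sketch, assuming the cdi.kubevirt.io/v1alpha1 API used in this release; the name, size, and imageio endpoint details below are placeholders, not values taken from the attachments:

apiVersion: cdi.kubevirt.io/v1alpha1
kind: DataVolume
metadata:
  name: example-block-dv                  # placeholder name
spec:
  source:
    imageio:                              # RHV imports use the imageio source
      url: https://<engine-fqdn>/ovirt-engine/api
      diskId: <disk-uuid>
      secretRef: <engine-credentials-secret>
      certConfigMap: <engine-ca-configmap>
  pvc:
    storageClassName: ocs-storagecluster-ceph-rbd
    accessModes:
      - ReadWriteOnce
    volumeMode: Block                     # this is what the importer-created DV (pending_dv.yaml) is missing
    resources:
      requests:
        storage: 10Gi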
In my opinion this is an RFE. We did not intend to support block mode. We should open a documentation bug so that users are aware of it.
I think we should let the user define the volumeMode in the resource mapping. I would suggest the following change in the API:

apiVersion: v2v.kubevirt.io/v1alpha1
kind: ResourceMapping
metadata:
  name: myvm-mapping
spec:
  ovirt:
    storageMappings:
      - source:
          name: sourceDomain
        target:
          name: targetStorageClass
        volumeMode: Block/Filesystem   # for all disks in the storage domain
    diskMappings:
      - source:
          name: disk1
        target:
          name: targetStorageClass
        volumeMode: Block/Filesystem   # override for a specific disk, if it should differ from the whole storage domain
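For context, the mapping above would then be referenced from the import request. A rough sketch of a VirtualMachineImport pointing at it; the resource names, credentials secret, and VM id are placeholders, and the field layout and API version are assumed from the vm-import-operator examples of that time rather than taken from this bug:

apiVersion: v2v.kubevirt.io/v1alpha1
kind: VirtualMachineImport
metadata:
  name: myvm-import                    # placeholder name
  namespace: default
spec:
  providerCredentialsSecret:
    name: ovirt-credentials            # placeholder secret holding the engine URL and credentials
    namespace: default
  resourceMapping:
    name: myvm-mapping                 # the ResourceMapping proposed above
    namespace: default
  targetVmName: myvm
  startVm: false
  source:
    ovirt:
      vm:
        id: <source-vm-uuid>           # placeholder RHV VM id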
I like the solution at the API level. We need something similar in the UI, which may be more difficult to change at this stage. That is why I think this should be an RFE.
I tested a couple of VM imports from RHV to CNV today using Ceph-rbd (block based), on CNV-2.4 deployed from stage. I saw two kinds of results for the VM import, and both end with the import not completing:
1. The VM import is stuck at 10% - the importer pod is pending, waiting for the PVC to bind, as detailed in this bug's description.
2. The VM import starts and binds to the storage it created, but eventually fails (because of the storage mismatch); the VM is deleted, and the VM import resource reports a DV creation failure error that does not indicate the actual issue in any way.
In both cases the PVC yaml contains VolumeMode: Filesystem.
Further to comment #7, the result depends on which kind of VM disk is imported to ceph-rbd:
1. Preallocated disk - the PVC remains pending forever.
2. Thin-provisioned disk - the disk is copied, but at the end (showing 80% progress) the VM and DV are removed, and the UI VMs view shows the error: "The virtual machine could not be imported. DataVolumeCreationFailed: Error while importing disk image: fedora32-b870c429-11e0-4630-b3df-21da551a48c0"
Ilanit, can you please open a new bug for the scenario in comment #8? We will use this bug as an RFE for block volumes. And please attach all relevant logs, thank you.
Please add the fixed-in version.
Verified on CNV-2.5 deployed from OSBS (Sep 30 2020). Tested the storage class and volume mode: storage is mapped as set in the resource mapping during VM import creation.