Bug 1859018 - [v2v][doc][RHV to CNV VM import] VM import using Ceph-RBD is not supported.
Summary: [v2v][doc][RHV to CNV VM import] VM import using Ceph-RBD is not supported.
Keywords:
Status: CLOSED NOTABUG
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Documentation
Version: 2.4.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
: 2.4.0
Assignee: Avital Pinnick
QA Contact: Amos Mastbaum
URL:
Whiteboard:
Depends On: 1857926
Blocks:
 
Reported: 2020-07-21 05:55 UTC by Ilanit Stein
Modified: 2020-07-28 08:54 UTC (History)
8 users (show)

Fixed In Version: 2.4.0
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1857926
Environment:
Last Closed: 2020-07-28 08:54:09 UTC
Target Upstream Version:
Embargoed:


Attachments

Description Ilanit Stein 2020-07-21 05:55:54 UTC
Please add documentation stating that, for RHV to CNV VM import, Ceph block-based storage is not supported.

The supported storage is NFS only.

If this storage is used for VM import from RHV, the import will either hang at 10%, pending on the storage bind, or start and eventually fail with the error: DV create failure.

+++ This bug was initially created as a clone of Bug #1857926 +++

Description of problem:
VM import from RHV to CNV using Ceph block-based storage (ocs-storagecluster-ceph-rbd) fails:

importer pending pod event:
"Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce] are supported"

The problem seems to be that the created DV is missing:
Volume Mode:  Block

For a VM created directly with this Ceph storage, 'Volume Mode:  Block' is present.
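For comparison, a DataVolume that requests block mode sets volumeMode explicitly in its PVC spec. A minimal sketch of what such a DV could look like (the name, endpoint URL, secret, config map, disk ID, and size below are illustrative placeholders, not taken from this bug):

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-block-dv                                # hypothetical name
spec:
  source:
    imageio:
      url: "https://rhv.example.com/ovirt-engine/api"   # illustrative RHV API endpoint
      secretRef: rhv-credentials                        # hypothetical credentials secret
      certConfigMap: rhv-ca                             # hypothetical CA config map
      diskId: "00000000-0000-0000-0000-000000000000"    # placeholder disk ID
  pvc:
    storageClassName: ocs-storagecluster-ceph-rbd
    accessModes:
      - ReadWriteOnce
    volumeMode: Block           # the field missing from the DV generated by the import
    resources:
      requests:
        storage: 10Gi
```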

Please add support for Ceph block-based storage.

Attached:
"successful_vm_dv" -
DV for a VM that was created successfully on the Ceph storage.
"pending_dv" -
DV for the "pending" VM import.

Version-Release number of selected component (if applicable):
CNV-2.4

--- Additional comment from Ilanit Stein on 2020-07-16 18:26:19 UTC ---



--- Additional comment from Ilanit Stein on 2020-07-16 18:26:48 UTC ---



--- Additional comment from Ilanit Stein on 2020-07-16 18:47:32 UTC ---

$ oc describe pvc cloud-init-vm-3910103b-bedd-453b-a4aa-a5ec2ecac55d
Name:          cloud-init-vm-3910103b-bedd-453b-a4aa-a5ec2ecac55d
Namespace:     default
StorageClass:  ocs-storagecluster-ceph-rbd
Status:        Pending
Volume:        
Labels:        app=containerized-data-importer
Annotations:   cdi.kubevirt.io/storage.import.certConfigMap: vmimport.v2v.kubevirt.ioc2xxj
               cdi.kubevirt.io/storage.import.diskId: 3910103b-bedd-453b-a4aa-a5ec2ecac55d
               cdi.kubevirt.io/storage.import.endpoint: https://jenkins-vm-10.lab.eng.tlv2.redhat.com/ovirt-engine/api
               cdi.kubevirt.io/storage.import.importPodName: importer-cloud-init-vm-3910103b-bedd-453b-a4aa-a5ec2ecac55d
               cdi.kubevirt.io/storage.import.secretName: vmimport.v2v.kubevirt.iox749x
               cdi.kubevirt.io/storage.import.source: imageio
               cdi.kubevirt.io/storage.pod.phase: Pending
               cdi.kubevirt.io/storage.pod.restarts: 0
               volume.beta.kubernetes.io/storage-provisioner: openshift-storage.rbd.csi.ceph.com
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    importer-cloud-init-vm-3910103b-bedd-453b-a4aa-a5ec2ecac55d
Events:
  Type     Reason                Age                  From                                                                                                                Message
  ----     ------                ----                 ----                                                                                                                -------
  Warning  ProvisioningFailed    93m                  openshift-storage.rbd.csi.ceph.com_csi-rbdplugin-provisioner-5794d4754b-j9xm7_736d99b3-4c72-43cf-8e66-628da2c53f6c  failed to provision volume with StorageClass "ocs-storagecluster-ceph-rbd": rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Warning  ProvisioningFailed    60m (x15 over 93m)   openshift-storage.rbd.csi.ceph.com_csi-rbdplugin-provisioner-5794d4754b-j9xm7_736d99b3-4c72-43cf-8e66-628da2c53f6c  failed to provision volume with StorageClass "ocs-storagecluster-ceph-rbd": rpc error: code = Aborted desc = an operation with the given Volume ID pvc-0e2924f1-6df0-4f1f-8676-64d36c126a62 already exists
  Normal   ExternalProvisioning  83s (x390 over 96m)  persistentvolume-controller                                                                                         waiting for a volume to be created, either by external provisioner "openshift-storage.rbd.csi.ceph.com" or manually created by system administrator
  Normal   Provisioning          21s (x28 over 96m)   openshift-storage.rbd.csi.ceph.com_csi-rbdplugin-provisioner-5794d4754b-j9xm7_736d99b3-4c72-43cf-8e66-628da2c53f6c  External provisioner is provisioning volume for claim "default/cloud-init-vm-3910103b-bedd-453b-a4aa-a5ec2ecac55d"

--- Additional comment from Piotr Kliczewski on 2020-07-16 18:54:56 UTC ---

In my opinion, this is an RFE. We did not intend to support block mode. We should create a documentation bug so that users are aware of it.

--- Additional comment from Ondra Machacek on 2020-07-17 06:20:25 UTC ---

I think we should let the user define the volumeMode on the resource mapping. I would suggest the following change to the API:

apiVersion: v2v.kubevirt.io/v1alpha1
kind: ResourceMapping
metadata:
  name: myvm-mapping
spec:
  ovirt:
    storageMappings:
      - source:
          name: sourceDomain
        target:
          name: targetStorageClass
        volumeMode: Block/Filesystem # applies to all disks in the storage domain
    diskMappings:
      - source:
          name: disk1
        target:
          name: targetStorageClass 
        volumeMode: Block/Filesystem # override for a specific disk, if it should differ from the storage domain setting
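If the API gained such a field, a VirtualMachineImport resource could then reference this mapping by name. A rough sketch, assuming the vm-import-operator CR shape (the resource names, namespace, and oVirt VM ID here are all hypothetical):

```yaml
apiVersion: v2v.kubevirt.io/v1beta1
kind: VirtualMachineImport
metadata:
  name: myvm-import                # hypothetical
  namespace: default
spec:
  providerCredentialsSecret:
    name: rhv-credentials          # hypothetical secret holding oVirt API credentials
    namespace: default
  resourceMapping:
    name: myvm-mapping             # the ResourceMapping proposed above
    namespace: default
  targetVmName: myvm
  startVm: false
  source:
    ovirt:
      vm:
        id: "11111111-1111-1111-1111-111111111111"   # placeholder oVirt VM ID
```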

--- Additional comment from Piotr Kliczewski on 2020-07-17 10:28:10 UTC ---

I like the solution at the api level. We need something similar in the UI which may be a bit more difficult to change at this stage.
That is why I think this should be an RFE.

--- Additional comment from Ilanit Stein on 2020-07-20 17:48:23 UTC ---

I tested today a couple of VM imports from RHV to CNV using Ceph-rbd (Block based). 
That was on CNV-2.4 deployed from stage.

I found there were two kinds of results for the VM import, both of which end with the import not completing:

1. The VM import gets stuck at 10% - the importer pod stays pending, waiting for the PVC to bind, as detailed in this bug's description.
2. The VM import starts and binds to the created storage, but eventually fails (because of the storage mismatch); the VM is deleted, and a VM import resource remains with the error "DV creation failure" - which does not indicate the actual issue in any way.

In both cases, the PVC YAML contains VolumeMode: Filesystem.

Comment 2 Ilanit Stein 2020-07-21 16:12:21 UTC
Tuning this a bit, after I learned that Ceph-RBD may support both block and filesystem volume modes:

RHV to CNV VM import does not support ceph-rbd.

Avital,
would you please update it?

Comment 3 Avital Pinnick 2020-07-22 15:10:52 UTC
I have updated the docs. For virt 2.4, only NFS is supported. 

Please review the updated preview build. Same links as above.

Comment 4 Nelly Credi 2020-07-23 10:34:17 UTC
I do not consider a doc bug urgent. Lowering severity to high.

Comment 5 Avital Pinnick 2020-07-23 10:46:18 UTC
I don't see how this is a bug. For 2.4, only NFS is supported, so separately documenting Ceph RBD non-support does not seem relevant for this release.

Comment 6 Avital Pinnick 2020-07-28 08:54:09 UTC
CNV 2.4 doc says only NFS is supported for RHV VM import. Closing this bug.

