Bug 2222451

Summary: VMExport manifests: DV External population is incompatible with Source and SourceRef
Product: Container Native Virtualization (CNV)
Component: Storage
Version: 4.14.0
Priority: high
Severity: unspecified
Status: VERIFIED
Reporter: Jenia Peimer <jpeimer>
Assignee: Alexander Wels <awels>
QA Contact: Jenia Peimer <jpeimer>
CC: awels
Target Milestone: ---
Target Release: 4.14.0
Hardware: Unspecified
OS: Unspecified
Fixed In Version: CNV-v4.14.0.rhel9-1322
Doc Type: If docs needed, set a value
Type: Bug
Regression: ---

Description Jenia Peimer 2023-07-12 19:42:32 UTC
Description of problem:
When a VM uses an existing PVC that was filled by external population, the DataVolume in the export manifest can't be applied.

Version-Release number of selected component (if applicable):
4.14

How reproducible:
Always

Steps to Reproduce:

1. Create a DV/PVC on CSI storage (it will use populators)

   $ virtctl image-upload dv cirros-dv --image-path=./cirros-0.4.0-x86_64-disk.qcow2 --size=1Gi --storage-class=ocs-storagecluster-ceph-rbd --insecure

2. Create a VM that will use this PVC 

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-from-uploaded-dv
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: pvc-edd0ce9a-2c84-46b3-bc42-7450c2f8602b
          rng: {}
        resources:
          requests:
            memory: 100Mi
      volumes:
      - name: pvc-edd0ce9a-2c84-46b3-bc42-7450c2f8602b
        persistentVolumeClaim:
          claimName: cirros-dv

   
3. Create a VMExport and save manifests

   $ virtctl vmexport download export --manifest --vm=vm-from-uploaded-dv --include-secret --output=manifest.yaml

4. In manifest.yaml, add or replace the namespace of each resource so it reads 'namespace: target'
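   A one-liner sketch for this step (it assumes each exported resource already carries an explicit 'namespace: default' line; substitute your actual source namespace, and if the export omitted the namespace entirely, add 'namespace: target' under metadata by hand):

   ```shell
   # Rewrite the namespace on every resource in the exported manifest.
   # Assumption: the exported resources contain 'namespace: default'.
   sed 's/namespace: default/namespace: target/g' manifest.yaml > manifest-target.yaml
   ```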

5. Try to create a target VM: 

   $ oc create -f manifest.yaml 
   configmap/export-ca-cm-export created
   virtualmachine.kubevirt.io/vm-from-uploaded-dv created
   secret/header-secret-export created
   Error from server: error when creating "manifest.yaml": admission webhook "datavolume-validate.cdi.kubevirt.io" denied the request:  External population is incompatible with Source and SourceRef DataSourceRef and DataSource must match

6. DataVolume from manifest.yaml:

---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  creationTimestamp: null
  name: cirros-dv
  namespace: target
spec:
  pvc:
    accessModes:
    - ReadWriteMany
    dataSource:
      apiGroup: cdi.kubevirt.io
      kind: VolumeUploadSource
      name: volume-upload-source-aa30e962-d15e-4a5f-a88d-95e1989c6fbd
    dataSourceRef:
      apiGroup: cdi.kubevirt.io
      kind: VolumeUploadSource
      name: volume-upload-source-aa30e962-d15e-4a5f-a88d-95e1989c6fbd
    resources:
      requests:
        storage: "1073741824"
    volumeMode: Block
  source:
    http:
      certConfigMap: export-ca-cm-export
      secretExtraHeaders:
      - header-secret-export
      url: https://virt-exportproxy-openshift-cnv.<mypath>/api/export.kubevirt.io/v1alpha1/namespaces/default/virtualmachineexports/export/volumes/cirros-dv/disk.img.gz
status: {}
---


Actual results:
Error from server: error when creating "manifest.yaml": admission webhook "datavolume-validate.cdi.kubevirt.io" denied the request:  External population is incompatible with Source and SourceRef DataSourceRef and DataSource must match


Expected results:
VM and DV/PVC are created successfully in the target namespace/cluster


Additional info:
When the VM uses dataVolumeTemplates, we don't hit this issue; export and import work fine.
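
For comparison, a minimal sketch of such a VM (the VM name and source URL below are illustrative, not taken from this reproduction):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-with-dv-template
spec:
  running: true
  dataVolumeTemplates:
  - metadata:
      name: cirros-dv
    spec:
      storage:
        resources:
          requests:
            storage: 1Gi
      source:
        http:
          url: http://example.com/cirros-0.4.0-x86_64-disk.qcow2
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: dv-disk
        resources:
          requests:
            memory: 100Mi
      volumes:
      - dataVolume:
          name: cirros-dv
        name: dv-disk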

Comment 1 Jenia Peimer 2023-08-13 12:52:46 UTC
Verified on CNV v4.14.0.rhel9-1576


Just a note: 

If you're using this feature, you might want to add a storageClassName to the DataVolume YAML, 
because the manifest uses the 'pvc' API, so 'accessModes' and 'volumeMode' are specified explicitly. 
If you don't specify a 'storageClassName' and the default storage class doesn't support those 'accessModes' and 'volumeMode', the import will fail. 

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  creationTimestamp: null
  name: cirros-dv-target
  namespace: target
spec:
  pvc:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: "1073741824"
    volumeMode: Block
    storageClassName: ocs-storagecluster-ceph-rbd   # <-- might want to add

Alternatively, you can use the 'storage' API; then you can remove 'accessModes' and 'volumeMode' entirely, and they'll be taken from the storageProfile:

spec:
  storage:
    resources:
      requests:
        storage: "1073741824"
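
Put together, a minimal sketch of the full target DataVolume using the 'storage' API (the source section is copied from the exported manifest above; the <mypath> placeholder stays as in the export):

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: cirros-dv-target
  namespace: target
spec:
  storage:
    resources:
      requests:
        storage: "1073741824"
  source:
    http:
      certConfigMap: export-ca-cm-export
      secretExtraHeaders:
      - header-secret-export
      url: https://virt-exportproxy-openshift-cnv.<mypath>/api/export.kubevirt.io/v1alpha1/namespaces/default/virtualmachineexports/export/volumes/cirros-dv/disk.img.gz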



And this would be a better YAML for a VM that uses an existing DV (a dataVolume volume instead of a persistentVolumeClaim, unlike in the bug description):

$ cat vm-from-uploaded-dv-dv.yaml 
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-from-uploaded-dv
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: dv-disk
          rng: {}
        resources:
          requests:
            memory: 100Mi
      volumes:
      - dataVolume:
          name: cirros-dv
        name: dv-disk