Bug 2222451 - VMExport manifests: DV External population is incompatible with Source and SourceRef
Summary: VMExport manifests: DV External population is incompatible with Source and SourceRef
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 4.14.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: unspecified
Target Milestone: ---
Target Release: 4.14.0
Assignee: Alexander Wels
QA Contact: Jenia Peimer
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-07-12 19:42 UTC by Jenia Peimer
Modified: 2023-11-08 14:06 UTC

Fixed In Version: CNV-v4.14.0.rhel9-1322
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-08 14:05:58 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github kubevirt kubevirt pull 10102 0 None Merged Export create populator compatible datavolumes from VM 2023-07-17 10:30:40 UTC
Github kubevirt kubevirt pull 10122 0 None Merged [release-1.0] Export create populator compatible datavolumes from VM 2023-08-03 20:46:06 UTC
Red Hat Issue Tracker CNV-30895 0 None None None 2023-07-12 19:45:16 UTC
Red Hat Product Errata RHSA-2023:6817 0 None None None 2023-11-08 14:06:08 UTC

Description Jenia Peimer 2023-07-12 19:42:32 UTC
Description of problem:
When a VM uses an existing PVC that was filled by a CDI populator, the DataVolume in the export manifest can't be applied: the generated DV keeps the populator dataSource/dataSourceRef from the source PVC while also carrying the export http source, and the CDI validating webhook rejects that combination.

Version-Release number of selected component (if applicable):
4.14

How reproducible:
Always

Steps to Reproduce:

1. Create a DV/PVC on CSI storage (it will use populators)

   $ virtctl image-upload dv cirros-dv --image-path=./cirros-0.4.0-x86_64-disk.qcow2 --size=1Gi --storage-class=ocs-storagecluster-ceph-rbd --insecure

2. Create a VM that will use this PVC 

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-from-uploaded-dv
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: pvc-edd0ce9a-2c84-46b3-bc42-7450c2f8602b
          rng: {}
        resources:
          requests:
            memory: 100Mi
      volumes:
      - name: pvc-edd0ce9a-2c84-46b3-bc42-7450c2f8602b
        persistentVolumeClaim:
          claimName: cirros-dv

   
3. Create a VMExport and save manifests

   $ virtctl vmexport download export --manifest --vm=vm-from-uploaded-dv --include-secret --output=manifest.yaml

4. Add or replace 'namespace: target' on every object in manifest.yaml (one way to do this is sketched after the steps)

5. Try to create a target VM: 

   $ oc create -f manifest.yaml 
   configmap/export-ca-cm-export created
   virtualmachine.kubevirt.io/vm-from-uploaded-dv created
   secret/header-secret-export created
   Error from server: error when creating "manifest.yaml": admission webhook "datavolume-validate.cdi.kubevirt.io" denied the request:  External population is incompatible with Source and SourceRef DataSourceRef and DataSource must match

6. DataVolume from manifest.yaml:

---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  creationTimestamp: null
  name: cirros-dv
  namespace: target
spec:
  pvc:
    accessModes:
    - ReadWriteMany
    dataSource:
      apiGroup: cdi.kubevirt.io
      kind: VolumeUploadSource
      name: volume-upload-source-aa30e962-d15e-4a5f-a88d-95e1989c6fbd
    dataSourceRef:
      apiGroup: cdi.kubevirt.io
      kind: VolumeUploadSource
      name: volume-upload-source-aa30e962-d15e-4a5f-a88d-95e1989c6fbd
    resources:
      requests:
        storage: "1073741824"
    volumeMode: Block
  source:
    http:
      certConfigMap: export-ca-cm-export
      secretExtraHeaders:
      - header-secret-export
      url: https://virt-exportproxy-openshift-cnv.<mypath>/api/export.kubevirt.io/v1alpha1/namespaces/default/virtualmachineexports/export/volumes/cirros-dv/disk.img.gz
status: {}
---

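For step 4, a minimal sketch of one way to set the target namespace, assuming every exported object already carries a namespace field (objects without one still need the field added by hand); 'target' is just the namespace name used in this example:

   $ sed -i 's/namespace: .*/namespace: target/' manifest.yaml
   $ grep 'namespace:' manifest.yaml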

Actual results:
Error from server: error when creating "manifest.yaml": admission webhook "datavolume-validate.cdi.kubevirt.io" denied the request:  External population is incompatible with Source and SourceRef DataSourceRef and DataSource must match


Expected results:
The VM and DV/PVC are created successfully in the target namespace/cluster.


Additional info:
When the VM uses dataVolumeTemplates, we don't hit this issue and it works fine.
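For reference, a minimal sketch of that working dataVolumeTemplates pattern (the VM/DV names and the import URL here are illustrative, not taken from this bug):

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-with-dv-template
spec:
  running: true
  dataVolumeTemplates:
  - metadata:
      name: cirros-dv-template
    spec:
      storage:
        resources:
          requests:
            storage: 1Gi
      source:
        http:
          url: http://example.com/cirros-0.4.0-x86_64-disk.qcow2
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: dv-disk
          rng: {}
        resources:
          requests:
            memory: 100Mi
      volumes:
      - dataVolume:
          name: cirros-dv-template
        name: dv-disk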

Comment 1 Jenia Peimer 2023-08-13 12:52:46 UTC
Verified on CNV v4.14.0.rhel9-1576


Just a note: 

If you're using this feature, you might want to add a storage class to the DataVolume YAML, because the exported manifest uses the 'pvc' API and explicitly specifies 'accessModes' and 'volumeMode'. 
If you don't set a 'storageClassName' and the default storage class doesn't support those 'accessModes' and 'volumeMode', the import will fail. 

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  creationTimestamp: null
  name: cirros-dv-target
  namespace: target
spec:
  pvc:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: "1073741824"
    volumeMode: Block
    storageClassName: ocs-storagecluster-ceph-rbd    <--------- might want to add

Or you can use the 'storage' API; then you can remove 'accessModes' and 'volumeMode' entirely, and they'll be taken from the StorageProfile:

spec:
  storage:
    resources:
      requests:
        storage: "1073741824"

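Putting that together, a sketch of a complete target DataVolume that uses the 'storage' API plus the export source from the manifest above (the name is illustrative; the URL is the placeholder from the exported manifest):

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: cirros-dv-target
  namespace: target
spec:
  storage:
    resources:
      requests:
        storage: "1073741824"
  source:
    http:
      certConfigMap: export-ca-cm-export
      secretExtraHeaders:
      - header-secret-export
      url: https://virt-exportproxy-openshift-cnv.<mypath>/api/export.kubevirt.io/v1alpha1/namespaces/default/virtualmachineexports/export/volumes/cirros-dv/disk.img.gz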


And this would be a better YAML for a VM that uses an existing DV (a 'dataVolume' volume instead of a 'persistentVolumeClaim', as in the bug description):

$ cat vm-from-uploaded-dv-dv.yaml 
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-from-uploaded-dv
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: dv-disk
          rng: {}
        resources:
          requests:
            memory: 100Mi
      volumes:
      - dataVolume:
          name: cirros-dv
        name: dv-disk
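
One possible way to apply and check it, assuming the DV and the export ConfigMap/Secret were created in the same 'target' namespace first:

   $ oc create -f vm-from-uploaded-dv-dv.yaml -n target
   $ oc get vm,dv,pvc -n target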

Comment 3 errata-xmlrpc 2023-11-08 14:05:58 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Virtualization 4.14.0 Images security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6817

