Description of problem:
When we create a VMExport object for a VMSnapshot, the volume names in the VMExport object are export_name + volume_name, but we expect the volume names to be the same as the VM's volume names. That would let us use the 'virtctl vmexport download --volume=' command without looking at the VMExport's yaml. This is only an issue when the VM has multiple volumes; when there is only one volume, it downloads successfully.

Version-Release number of selected component (if applicable):
4.12

How reproducible:
Always

Steps to Reproduce:
1. Create a VM with one DV
2. Create a memory dump of the VM (to have one more volume):

$ virtctl memory-dump get vm-cirros-source-ocs --claim-name=mem-dump-pvc --create-claim --storage-class=ocs-storagecluster-ceph-rbd
PVC default/mem-dump-pvc created
Successfully submitted memory dump request of VM vm-cirros-source-ocs

3. Create a VMSnapshot
4. Create a VMExport for the snapshot:

$ virtctl vmexport create snap1-export --snapshot=my-vmsnapshot
VirtualMachineExport 'default/snap1-export' created succesfully

*Please fix the typo in the "succesfully" word ^^*

Actual results:
The name of the volume is export_name + volume_name:

$ oc get vmexport snap1-export -oyaml
...
    volumes:
    - name: snap1-export-cirros-dv-source-ocs
...
    - name: snap1-export-mem-dump-pvc

$ virtctl vmexport download snap1-export --volume=cirros-dv-source-ocs --output=disk.img --keep-vme
Processing completed successfully
Unable to get a valid URL from 'default/snap1-export' VirtualMachineExport

Expected results:
The name of the volume is the same as the VM's volume_name (cirros-dv-source-ocs), and virtctl vmexport download works with --volume=cirros-dv-source-ocs.

Additional info:
VM's yaml:

$ cat vm-ocs.yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vm-cirros-source-ocs
  labels:
    kubevirt.io/vm: vm-cirros-source-ocs
spec:
  dataVolumeTemplates:
  - metadata:
      name: cirros-dv-source-ocs
    spec:
      storage:
        resources:
          requests:
            storage: 1Gi
        storageClassName: ocs-storagecluster-ceph-rbd
      source:
        http:
          url: <cirros-0.4.0-x86_64-disk.qcow2>
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-cirros-source-ocs
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: datavolumev-ocs
        machine:
          type: ""
        resources:
          requests:
            memory: 100M
      terminationGracePeriodSeconds: 0
      volumes:
      - dataVolume:
          name: cirros-dv-source-ocs
        name: datavolumev-ocs

Snapshot yaml:

$ cat snapshot.yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: my-vmsnapshot
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: vm-cirros-source-ocs
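Workaround until the fix lands: list the volume names the VMExport actually exposes and pass one of those prefixed names to the download command. A minimal sketch; the jsonpath below assumes the names sit under status.links.external.volumes, which can vary between VMExport API versions, so verify against 'oc get vmexport snap1-export -oyaml' first:

# List the volume names exposed by the export (path is an assumption,
# check the yaml if your API version structures status differently).
$ oc get vmexport snap1-export \
    -o jsonpath='{range .status.links.external.volumes[*]}{.name}{"\n"}{end}'
snap1-export-cirros-dv-source-ocs
snap1-export-mem-dump-pvc

# Download using the prefixed name reported above (presumed to succeed,
# since only the short VM volume name fails in this bug).
$ virtctl vmexport download snap1-export \
    --volume=snap1-export-cirros-dv-source-ocs --output=disk.img --keep-vme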
Verified on CNV-v4.13.0.rhel9-1689

$ oc get vmexport export-vmsnapshot -oyaml
...
    volumes:
    - name: cirros-dv-source-ocs
...
    - name: mem-dump-pvc

$ oc get pvc
NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
cirros-dv-source-ocs                     Bound    pvc-1f61a020-1411-438b-ab33-0be877d54a71   1Gi        RWX            ocs-storagecluster-ceph-rbd   25m
export-vmsnapshot-cirros-dv-source-ocs   Bound    pvc-cf1e0633-5374-4efa-b779-2d93a89626c8   1Gi        RWX            ocs-storagecluster-ceph-rbd   6m43s
export-vmsnapshot-mem-dump-pvc           Bound    pvc-6a76c33a-bcfb-4831-b0b5-ccff730ed75a   207Mi      RWO            ocs-storagecluster-ceph-rbd   6m43s
mem-dump-pvc                             Bound    pvc-926f7719-4443-4fbc-8cb6-a3056441f96a   207Mi      RWO            ocs-storagecluster-ceph-rbd   11m
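For completeness, a download sketch against the verified build, assuming the export above ('export-vmsnapshot') is ready; the --volume value now matches the VM's own volume name, and the expected success message is taken from the earlier transcript:

$ virtctl vmexport download export-vmsnapshot \
    --volume=cirros-dv-source-ocs --output=disk.img --keep-vme
Processing completed successfully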
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.13.0 Images security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3205