Bug 2150653 - VMExport for VMSnapshot - volume names should be the same as the VM's volume names
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 4.12.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.13.0
Assignee: Alexander Wels
QA Contact: Jenia Peimer
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2022-12-04 20:44 UTC by Jenia Peimer
Modified: 2023-05-18 02:56 UTC
CC: 1 user

Fixed In Version: CNV-v4.13.0.rhel9-1808
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-05-18 02:56:16 UTC
Target Upstream Version:
Embargoed:

Links
System ID Private Priority Status Summary Last Updated
Github kubevirt kubevirt pull 9123 0 None Merged Fix inconsistency between export-server and vmexport status links 2023-03-20 15:03:17 UTC
Github kubevirt kubevirt pull 9226 0 None Merged [release-0.59] Fix inconsistency between export-server and vmexport status links 2023-03-20 16:12:59 UTC
Red Hat Issue Tracker CNV-23109 0 None None None 2022-12-04 20:52:02 UTC
Red Hat Product Errata RHSA-2023:3205 0 None None None 2023-05-18 02:56:27 UTC

Description Jenia Peimer 2022-12-04 20:44:37 UTC
Description of the problem:
When we create a VMExport object for a VMSnapshot, the volume names in the VMExport object are export_name + volume_name, but we expect them to match the VM's volume names. That would let us use the 'virtctl vmexport download --volume=' command without inspecting the vmexport's yaml. This is only an issue when the VM has multiple volumes; when there is only one volume, the download succeeds.
 
Version-Release number of selected component (if applicable):
4.12

How reproducible:
Always

Steps to Reproduce:
1. Create a VM with one DV
2. Create a memory dump of the VM (to add a second volume)
   $ virtctl memory-dump get vm-cirros-source-ocs --claim-name=mem-dump-pvc --create-claim --storage-class=ocs-storagecluster-ceph-rbd
   PVC default/mem-dump-pvc created
   Successfully submitted memory dump request of VM vm-cirros-source-ocs

3. Create a VMSnapshot
4. Create a VMExport for the snapshot
   $ virtctl vmexport create snap1-export --snapshot=my-vmsnapshot
   VirtualMachineExport 'default/snap1-export' created succesfully

*Please also fix the typo in the word "succesfully" in the output above ^^*
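For reference, the `virtctl vmexport create` call in step 4 is roughly equivalent to applying a manifest like the following (a sketch against the export.kubevirt.io/v1alpha1 API; field values are taken from the commands above):

```yaml
apiVersion: export.kubevirt.io/v1alpha1
kind: VirtualMachineExport
metadata:
  name: snap1-export
spec:
  source:
    # Point the export at the VMSnapshot, not the VM itself
    apiGroup: snapshot.kubevirt.io
    kind: VirtualMachineSnapshot
    name: my-vmsnapshot
```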


Actual results:
The name of the volume is export_name + volume_name
$ oc get vmexport snap1-export -oyaml
  ...
    volumes:      
         - name: snap1-export-cirros-dv-source-ocs
           ...
         - name: snap1-export-mem-dump-pvc

$ virtctl vmexport download snap1-export --volume=cirros-dv-source-ocs --output=disk.img --keep-vme
Processing completed successfully
Unable to get a valid URL from 'default/snap1-export' VirtualMachineExport


Expected results:
The name of the volume is the same as the VM's volume name (cirros-dv-source-ocs), and
virtctl vmexport download works with --volume=cirros-dv-source-ocs
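The expected shape of the export status would then look roughly like the following (a hedged sketch of the VirtualMachineExport status; the URL is a placeholder, not actual output):

```yaml
status:
  links:
    internal:
      volumes:
      # Expected: the VM's volume name, not snap1-export-cirros-dv-source-ocs
      - name: cirros-dv-source-ocs
        formats:
        - format: raw
          url: https://<export-server>/volumes/cirros-dv-source-ocs/disk.img
      - name: mem-dump-pvc
```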


Additional info:

VM's yaml:

$ cat vm-ocs.yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: vm-cirros-source-ocs
  labels:
    kubevirt.io/vm: vm-cirros-source-ocs
spec:
  dataVolumeTemplates:
  - metadata:
      name: cirros-dv-source-ocs
    spec:
      storage:
        resources:
          requests:
            storage: 1Gi
        storageClassName: ocs-storagecluster-ceph-rbd
      source:
        http:
          url: <cirros-0.4.0-x86_64-disk.qcow2>
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-cirros-source-ocs
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: datavolumev-ocs
        machine:
          type: ""
        resources:
          requests:
            memory: 100M
      terminationGracePeriodSeconds: 0
      volumes:
      - dataVolume:
          name: cirros-dv-source-ocs
        name: datavolumev-ocs

Snapshot yaml:

$ cat snapshot.yaml
apiVersion: snapshot.kubevirt.io/v1alpha1
kind: VirtualMachineSnapshot
metadata:
  name: my-vmsnapshot
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: vm-cirros-source-ocs

Comment 2 Jenia Peimer 2023-03-22 08:32:22 UTC
Verified on CNV-v4.13.0.rhel9-1689

$ oc get vmexport export-vmsnapshot -oyaml
  ...
      volumes:
        - name: cirros-dv-source-ocs
        ...
        - name: mem-dump-pvc

$ oc get pvc
NAME                                     STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                  AGE
cirros-dv-source-ocs                     Bound    pvc-1f61a020-1411-438b-ab33-0be877d54a71   1Gi        RWX            ocs-storagecluster-ceph-rbd   25m
export-vmsnapshot-cirros-dv-source-ocs   Bound    pvc-cf1e0633-5374-4efa-b779-2d93a89626c8   1Gi        RWX            ocs-storagecluster-ceph-rbd   6m43s
export-vmsnapshot-mem-dump-pvc           Bound    pvc-6a76c33a-bcfb-4831-b0b5-ccff730ed75a   207Mi      RWO            ocs-storagecluster-ceph-rbd   6m43s
mem-dump-pvc                             Bound    pvc-926f7719-4443-4fbc-8cb6-a3056441f96a   207Mi      RWO            ocs-storagecluster-ceph-rbd   11m

Comment 5 errata-xmlrpc 2023-05-18 02:56:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.13.0 Images security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:3205

