This bug has been migrated to another issue tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
Bug 2158322 - [console] When deleting VM, 'Delete disks' checkbox includes PVC of cd-rom with ISO in the count
Summary: [console] When deleting VM, 'Delete disks' checkbox includes PVC of cd-rom with ISO in the count
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: User Experience
Version: 4.11.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 4.14.0
Assignee: Ugo Palatucci
QA Contact: Guohua Ouyang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-01-05 04:31 UTC by Germano Veit Michel
Modified: 2023-08-22 10:16 UTC (History)
CC List: 1 user

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-08-22 10:14:25 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Issue Tracker: CNV-23927 (last updated 2023-08-22 10:14:24 UTC)

Description Germano Veit Michel 2023-01-05 04:31:06 UTC
Description of problem:

Open the OCP Web Console and:

1. Virtualization
2. Catalog
3. Select RHEL 8.x (or any OS, actually)
4. Click 'Customize Virtual Machine'
5. Click 'Review and create virtual machine'
6. Go to the Disks tab
7. Detach all Disks
8. Add a new Disk, a CD-ROM that is bootable for installing Guest
   Source: Use an existing PVC
   Type: CD-ROM
   [x] Use this disk as a boot source
9. Click Save and add the CDROM
10. Add a new Disk, now the actual Disk to install the OS
   Source: Blank (Creates new PVC)
   Type: Disk
   [ ] Use this disk as a boot source
11. Click 'Create VM'
12. Delete the VM (Actions -> Delete)

The following is displayed:

~~~
Delete rhel8-worthy-guan VirtualMachine
Are you sure you want to delete rhel8-worthy-guan in namespace openshift-cnv?
The following resources will be deleted along with this VirtualMachine. Unchecked items will not be deleted.

Delete disks (2x)
~~~

At this point the user assumes both the OS disk and the ISO (CD-ROM PVC) will be deleted, since it says 2x (one reads that as 2 disks, and the VM only has a disk and a CD-ROM).

But clicking Delete only deletes the OS disk. So why is 2x displayed? Initially I was unchecking the box and deleting the disks manually, afraid of losing my ISO, which is not a very productive way to clean up.

Ideally, let the user select which disks should be removed (a checkbox per disk would be nice), and leave any ISO/CD-ROM device unselected by default, as those are usually shared across many VMs.

At a minimum, it should say (1x) and specify which disk is being deleted.

When "deleting" things the product needs to be very clear; deleting is a critical operation.

Version-Release number of selected component (if applicable):
4.11.20

How reproducible:
Always

Steps to Reproduce:
As above

Comment 1 Guohua Ouyang 2023-01-05 06:31:22 UTC
The current implementation deletes all disks of the VM; the user can uncheck the checkbox and then delete the disks manually.
An improvement here could be listing the disks in the delete modal to let users choose which disks they want to delete along with the VM, as the bug suggests.
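
For example, such a list could be derived from the VM's volumes; something along these lines prints each volume and what backs it (a sketch, using the VM name from the report):

~~~
$ oc -n openshift-cnv get vm rhel8-worthy-guan -o yaml \
    | yq e '.spec.template.spec.volumes[] | .name + " -> " + (.dataVolume.name // .persistentVolumeClaim.claimName // "-")'
~~~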

Comment 2 Ugo Palatucci 2023-04-19 08:41:15 UTC
@gveitmic I think the issue here would be the same even if we created a list of disks.

When someone selects a CD-ROM in the customization page, we create a separate dataVolumeTemplate under the hood.
So the resulting VM ends up with two volumes.

Even if the ISO is shared across VMs, the DataVolume creates its PVC from the provided source (the ISO).
VMs that share an ISO do not actually share the same PVC; those PVCs just share the same source.

So that is why you see 2x.

Even if we create a list, the user will uncheck the CD-ROM disk thinking they don't want to delete the ISO, but that disk is actually a copy of the ISO
made for that VM, so they probably do want to delete it.
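
Roughly, what the customization flow generates under the hood is a per-VM clone-source DataVolume, something like this (sketch; name and size are illustrative):

~~~
dataVolumeTemplates:
- metadata:
    name: <vm-name>-cdrom
  spec:
    source:
      pvc:                        # clone of the ISO PVC selected in the wizard
        name: windows10
        namespace: openshift-cnv
    storage:
      resources:
        requests:
          storage: 15Gi
~~~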


Maybe we are handling the CD-ROM thing in an unexpected way?

Comment 3 Germano Veit Michel 2023-04-19 10:59:42 UTC
Hi Ugo,

Thanks for the explanation, but I'm a bit confused.

(In reply to Ugo Palatucci from comment #2)
> When someone selects cd-rom in the customization page, we create a different
> datavolumetemplate under the hood.
> So the resulting VM would have two volumes.

OK, if I attach the CD in the Customization page, it does say 'Clone PVC'.
But I'm not using that because I don't want to clone anything; see comment #0:

(In reply to Germano Veit Michel from comment #0)
...
> 6. Go to the Disks tab
> 7. Detach all Disks
> 8. Add a new Disk, a CD-ROM that is bootable for installing Guest
>    Source: Use an existing PVC       <--------
>    Type: CD-ROM
>    [x] Use this disk as a boot source
> 9. Click Save and add the CDROM
...

Is this actually the same thing in the backend? It's not, right? I just never used that option because it says clone...

> Maybe we are handling the CD-ROM thing in a non-expected way?
I don't know; that 'Clone PVC' in the Customization page is actually weird.
Why would one want to duplicate ISOs? It's read-only media, there is no need.

Let me explain in more detail what I do:
* Upload the ISO via the console: in the create PVC dialog I upload a local ISO file from my computer (a roughly equivalent CLI sketch follows this list).
* On each VM where I want to use it, I go to Disks -> Add -> Use an existing PVC and point it at the PVC uploaded above.
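
(For the upload step, the roughly equivalent CLI would be something like the following; the size and image path are illustrative:)

~~~
$ virtctl image-upload pvc windows10 --size=15Gi \
    --image-path=./Win10.iso --insecure
~~~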

Maybe I am overcomplicating things and using the system in a way it was not designed for?

Anyway... here are 2 VMs "sharing the same ISO", which should give you a better idea of the issue:

First Windows VM, just the dataVolumeTemplates and volumes sections:

$ oc --context admin-metal -n openshift-cnv get vm windows-1 -o yaml | yq e '.spec | (.dataVolumeTemplates,.template.spec.volumes)'
- metadata:
    creationTimestamp: null
    name: windows-1-osdisk
    namespace: openshift-cnv
    ownerReferences:
      - apiVersion: kubevirt.io/v1
        blockOwnerDeletion: false
        kind: VirtualMachine
        name: windows-1
        uid: c520ad07-d6a4-4e54-868e-13c9f24a95b7
  spec:
    preallocation: false
    source:
      blank: {}
    storage:
      resources:
        requests:
          storage: 60Gi
      storageClassName: synology

- dataVolume:
    name: windows-1-osdisk
  name: osdisk
- name: cdrom
  persistentVolumeClaim:
    claimName: windows10

Same for the second Windows VM:

$ oc --context admin-metal -n openshift-cnv get vm windows-2 -o yaml | yq e '.spec | (.dataVolumeTemplates,.template.spec.volumes)'
- metadata:
    creationTimestamp: null
    name: windows-2-osdisk
    namespace: openshift-cnv
    ownerReferences:
      - apiVersion: kubevirt.io/v1
        blockOwnerDeletion: false
        kind: VirtualMachine
        name: windows-2
        uid: 73524c56-b583-4fed-9ba6-4b562b11923b
  spec:
    preallocation: false
    source:
      blank: {}
    storage:
      resources:
        requests:
          storage: 60Gi
      storageClassName: synology
- name: cdrom
  persistentVolumeClaim:
    claimName: windows10
- dataVolume:
    name: windows-2-osdisk
  name: osdisk

If I go to the delete dialog for either of them, it says Delete disks (2x), so I assume that means osdisk plus my shared ISO (which I want to keep).

I don't think I'm using a DV for the ISO. It's a single PVC shared by both:

$ oc --context admin-metal -n openshift-cnv get pvc | grep windows
windows-1                   Bound    pvc-c90e059e-c782-490f-93fd-b96708c548b1   68174084064    RWX            synology            16m
windows-1-osdisk            Bound    pvc-7c7d3709-283f-45d6-ba5f-9bc420c3fb13   68174084064    RWX            synology            11m
windows-2-osdisk            Bound    pvc-dd85550b-190d-4e0c-bfae-20cd7251207f   68174084064    RWX            synology            9m34s
windows-2-windows-2         Bound    pvc-1b84abc5-2f08-4f4b-8613-64de2dcb6b04   68174084064    RWX            synology            12m
windows10                   Bound    pvc-6c5a4111-d559-4dcb-b227-80ce8ec97b95   13634816813    RWX            synology            13d   <--- this is the ISO

$ oc --context admin-metal -n openshift-cnv get pvc windows10 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: "true"
    cdi.kubevirt.io/storage.condition.bound: "true"
    cdi.kubevirt.io/storage.condition.bound.message: ""
    cdi.kubevirt.io/storage.condition.bound.reason: ""
    cdi.kubevirt.io/storage.condition.running: "false"
    cdi.kubevirt.io/storage.condition.running.message: Upload Complete
    cdi.kubevirt.io/storage.condition.running.reason: Completed
    cdi.kubevirt.io/storage.contentType: kubevirt
    cdi.kubevirt.io/storage.deleteAfterCompletion: "true"
    cdi.kubevirt.io/storage.pod.phase: Succeeded
    cdi.kubevirt.io/storage.pod.ready: "false"
    cdi.kubevirt.io/storage.pod.restarts: "0"
    cdi.kubevirt.io/storage.preallocation.requested: "false"
    cdi.kubevirt.io/storage.upload.target: ""
    cdi.kubevirt.io/storage.uploadPodName: cdi-upload-windows10
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: cluster.local/nfs-subdir-external-provisioner
    volume.kubernetes.io/storage-provisioner: cluster.local/nfs-subdir-external-provisioner
  creationTimestamp: "2023-04-05T23:35:45Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    alerts.k8s.io/KubePersistentVolumeFillingUp: disabled
    app: containerized-data-importer
    app.kubernetes.io/component: storage
    app.kubernetes.io/managed-by: cdi-controller
    app.kubernetes.io/part-of: hyperconverged-cluster
    app.kubernetes.io/version: 4.12.2
  name: windows10
  namespace: openshift-cnv
  resourceVersion: "102949"
  uid: 6c5a4111-d559-4dcb-b227-80ce8ec97b95
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: "13634816813"
  storageClassName: synology
  volumeMode: Filesystem
  volumeName: pvc-6c5a4111-d559-4dcb-b227-80ce8ec97b95
status:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: "13634816813"
  phase: Bound

And here is the PV

$ oc --context admin-metal -n openshift-cnv get pv pvc-6c5a4111-d559-4dcb-b227-80ce8ec97b95 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: cluster.local/nfs-subdir-external-provisioner
  creationTimestamp: "2023-04-05T23:35:47Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: pvc-6c5a4111-d559-4dcb-b227-80ce8ec97b95
  resourceVersion: "101064"
  uid: a375f313-605e-43f5-9494-57779a294657
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: "13634816813"
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: windows10
    namespace: openshift-cnv
    resourceVersion: "101056"
    uid: 6c5a4111-d559-4dcb-b227-80ce8ec97b95
  nfs:
    path: /volume4/openshift/openshift-cnv-windows10-pvc-6c5a4111-d559-4dcb-b227-80ce8ec97b95
    server: 192.168.1.253
  persistentVolumeReclaimPolicy: Delete
  storageClassName: synology
  volumeMode: Filesystem
status:
  phase: Bound
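
FWIW, a quick way to see which of these PVCs carry an owner reference (and which are standalone, like the ISO) is something like this; I'd expect windows10 to show no owner, matching its YAML above:

~~~
$ oc --context admin-metal -n openshift-cnv get pvc windows-1-osdisk windows10 \
    -o custom-columns='NAME:.metadata.name,OWNER:.metadata.ownerReferences[*].kind'
~~~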

Comment 4 Ugo Palatucci 2023-04-20 08:15:00 UTC
thanks @gveitmic for the explanation. 

I'll dig more into it.

