Bug 2158322
Summary: | [console] When deleting VM, 'Delete disks' checkbox includes PVC of cd-rom with ISO in the count | |
---|---|---|---
Product: | Container Native Virtualization (CNV) | Reporter: | Germano Veit Michel <gveitmic> |
Component: | User Experience | Assignee: | Ugo Palatucci <upalatuc> |
Status: | CLOSED MIGRATED | QA Contact: | Guohua Ouyang <gouyang> |
Severity: | low | Docs Contact: | |
Priority: | unspecified | |
Version: | 4.11.0 | CC: | gouyang |
Target Milestone: | --- | ||
Target Release: | 4.14.0 | ||
Hardware: | x86_64 | ||
OS: | Linux | ||
Whiteboard: | |||
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2023-08-22 10:14:25 UTC | Type: | Bug |
Regression: | --- | Mount Type: | --- |
Documentation: | --- | CRM: | |
Verified Versions: | Category: | --- | |
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
Cloudforms Team: | --- | Target Upstream Version: | |
Embargoed: |
Description
Germano Veit Michel
2023-01-05 04:31:06 UTC
The current implementation is that it deletes all disks of the VM; the user can uncheck the checkbox and then delete the disks manually. An improvement here could be to list the disks in the delete modal and let users choose which disks they want to delete along with the VM, as the bug suggests.

Ugo Palatucci
@gveitmic I think the issue here would be the same even if we create a list of disks. When someone selects a CD-ROM on the customization page, we create a separate dataVolumeTemplate under the hood, so the resulting VM has two volumes. Even if the ISO is shared across VMs, each DataVolume creates its own PVC from the source provided (the ISO). VMs that "share" an ISO do not actually share the same PVC; their PVCs merely share the same source. That is why you see 2x. Even if we create a list, the user will uncheck the CD-ROM disk thinking that he doesn't want to delete the ISO, but that disk is actually a copy of the ISO made for that VM, so he probably does want to delete it. Maybe we are handling the CD-ROM thing in a non-expected way?

Germano Veit Michel
Hi Ugo,

Thanks for the explanation, but I'm a bit confused.

(In reply to Ugo Palatucci from comment #2)
> When someone selects cd-rom in the customization page, we create a different
> datavolumetemplate under the hood.
> So the resulting VM would have two volumes.

OK, if I attach the CD in the customisation page it does say 'Clone PVC'. But I'm not using that, because I don't want to clone anything; see comment #0:

(In reply to Germano Veit Michel from comment #0)
...
> 6. Go to the Disks tab
> 7. Detach all Disks
> 8. Add a new Disk, a CD-ROM that is bootable for installing Guest
>    Source: Use an existing PVC <--------
>    Type: CD-ROM
>    [x] Use this disk as a boot source
> 9. Click Save and add the CDROM
...

Is this actually the same thing in the backend? It's not, right? I just never used that option, as it says clone...

> Maybe we are handling the CD-ROM thing in a non-expected way?
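For reference, the behaviour described in comment #2 (each VM gets its own cloned DataVolume from the same ISO source) can be sketched roughly as follows. This is hypothetical illustration code, not the console's actual implementation, and the `-cdrom` name suffix is an assumption:

```python
# Hypothetical sketch of the "Clone PVC" behaviour described above --
# not the console's actual code; the "-cdrom" name suffix is assumed.

def cdrom_datavolume_template(vm_name: str, iso_pvc: str) -> dict:
    """Per-VM DataVolume template that clones the shared ISO PVC."""
    return {
        "metadata": {"name": f"{vm_name}-cdrom"},  # unique name per VM
        "spec": {
            "source": {
                # every VM clones from the same source PVC...
                "pvc": {"name": iso_pvc, "namespace": "openshift-cnv"},
            },
        },
    }

a = cdrom_datavolume_template("windows-1", "windows10")
b = cdrom_datavolume_template("windows-2", "windows10")

# ...but each VM ends up with its own cloned PVC, so "sharing" an ISO
# this way still means one PVC per VM.
assert a["metadata"]["name"] != b["metadata"]["name"]
assert a["spec"]["source"]["pvc"] == b["spec"]["source"]["pvc"]
```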
I don't know; that 'Clone PVC' in the Customization page is actually weird. Why would one want to duplicate ISOs? It's read-only media, there is no need.

Let me explain in more detail what I do:
* Upload the ISO via the console; in the create PVC dialog I upload a local ISO file from my computer.
* On each VM where I want to use it, I go to Disks -> Add -> Use an existing PVC and point it to the upload above.

Maybe I am overcomplicating things and doing it in a different way than the system was designed for? Anyway... here are 2 VMs "sharing the same ISO", which should give you a better idea of the issue.

First Windows VM, just the dataVolumeTemplates and volumes sections:

```
$ oc --context admin-metal -n openshift-cnv get vm windows-1 -o yaml | yq e '.spec | (.dataVolumeTemplates,.template.spec.volumes)'
- metadata:
    creationTimestamp: null
    name: windows-1-osdisk
    namespace: openshift-cnv
    ownerReferences:
      - apiVersion: kubevirt.io/v1
        blockOwnerDeletion: false
        kind: VirtualMachine
        name: windows-1
        uid: c520ad07-d6a4-4e54-868e-13c9f24a95b7
  spec:
    preallocation: false
    source:
      blank: {}
    storage:
      resources:
        requests:
          storage: 60Gi
      storageClassName: synology
- dataVolume:
    name: windows-1-osdisk
  name: osdisk
- name: cdrom
  persistentVolumeClaim:
    claimName: windows10
```

Same for the second Windows VM:

```
$ oc --context admin-metal -n openshift-cnv get vm windows-2 -o yaml | yq e '.spec | (.dataVolumeTemplates,.template.spec.volumes)'
- metadata:
    creationTimestamp: null
    name: windows-2-osdisk
    namespace: openshift-cnv
    ownerReferences:
      - apiVersion: kubevirt.io/v1
        blockOwnerDeletion: false
        kind: VirtualMachine
        name: windows-2
        uid: 73524c56-b583-4fed-9ba6-4b562b11923b
  spec:
    preallocation: false
    source:
      blank: {}
    storage:
      resources:
        requests:
          storage: 60Gi
      storageClassName: synology
- name: cdrom
  persistentVolumeClaim:
    claimName: windows10
- dataVolume:
    name: windows-2-osdisk
  name: osdisk
```

If I go to the delete dialog of either of them, it says Delete disks (2x), so I assume osdisk and my shared ISO (which I want to keep).
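The 2x follows directly from the volume lists above: both the DataVolume-backed osdisk and the directly referenced windows10 PVC get counted. A minimal sketch of that counting logic (assumed behaviour, not the console's actual code):

```python
# Assumed sketch of the delete dialog's disk count: it counts every volume
# backed by a DataVolume *or* a PVC -- including PVCs the VM merely
# references, like the shared windows10 ISO. Not the console's actual code.

def counted_disks(volumes: list) -> list:
    """Names of volumes the delete dialog would include in its count."""
    return [
        v["name"]
        for v in volumes
        if "dataVolume" in v or "persistentVolumeClaim" in v
    ]

# Volumes of windows-2, taken from the yq output above:
volumes = [
    {"name": "cdrom", "persistentVolumeClaim": {"claimName": "windows10"}},
    {"name": "osdisk", "dataVolume": {"name": "windows-2-osdisk"}},
]

# Both volumes are counted, hence "Delete disks (2x)".
assert counted_disks(volumes) == ["cdrom", "osdisk"]
```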
I don't think I'm using a DV for the ISO. It's a single PVC for both:

```
$ oc --context admin-metal -n openshift-cnv get pvc | grep windows
windows-1            Bound   pvc-c90e059e-c782-490f-93fd-b96708c548b1   68174084064   RWX   synology   16m
windows-1-osdisk     Bound   pvc-7c7d3709-283f-45d6-ba5f-9bc420c3fb13   68174084064   RWX   synology   11m
windows-2-osdisk     Bound   pvc-dd85550b-190d-4e0c-bfae-20cd7251207f   68174084064   RWX   synology   9m34s
windows-2-windows-2  Bound   pvc-1b84abc5-2f08-4f4b-8613-64de2dcb6b04   68174084064   RWX   synology   12m
windows10            Bound   pvc-6c5a4111-d559-4dcb-b227-80ce8ec97b95   13634816813   RWX   synology   13d   <--- this is the ISO

$ oc --context admin-metal -n openshift-cnv get pvc windows10 -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: "true"
    cdi.kubevirt.io/storage.condition.bound: "true"
    cdi.kubevirt.io/storage.condition.bound.message: ""
    cdi.kubevirt.io/storage.condition.bound.reason: ""
    cdi.kubevirt.io/storage.condition.running: "false"
    cdi.kubevirt.io/storage.condition.running.message: Upload Complete
    cdi.kubevirt.io/storage.condition.running.reason: Completed
    cdi.kubevirt.io/storage.contentType: kubevirt
    cdi.kubevirt.io/storage.deleteAfterCompletion: "true"
    cdi.kubevirt.io/storage.pod.phase: Succeeded
    cdi.kubevirt.io/storage.pod.ready: "false"
    cdi.kubevirt.io/storage.pod.restarts: "0"
    cdi.kubevirt.io/storage.preallocation.requested: "false"
    cdi.kubevirt.io/storage.upload.target: ""
    cdi.kubevirt.io/storage.uploadPodName: cdi-upload-windows10
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: cluster.local/nfs-subdir-external-provisioner
    volume.kubernetes.io/storage-provisioner: cluster.local/nfs-subdir-external-provisioner
  creationTimestamp: "2023-04-05T23:35:45Z"
  finalizers:
    - kubernetes.io/pvc-protection
  labels:
    alerts.k8s.io/KubePersistentVolumeFillingUp: disabled
    app: containerized-data-importer
    app.kubernetes.io/component: storage
    app.kubernetes.io/managed-by: cdi-controller
    app.kubernetes.io/part-of: hyperconverged-cluster
    app.kubernetes.io/version: 4.12.2
  name: windows10
  namespace: openshift-cnv
  resourceVersion: "102949"
  uid: 6c5a4111-d559-4dcb-b227-80ce8ec97b95
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: "13634816813"
  storageClassName: synology
  volumeMode: Filesystem
  volumeName: pvc-6c5a4111-d559-4dcb-b227-80ce8ec97b95
status:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: "13634816813"
  phase: Bound
```

And here is the PV:

```
$ oc --context admin-metal -n openshift-cnv get pv pvc-6c5a4111-d559-4dcb-b227-80ce8ec97b95 -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: cluster.local/nfs-subdir-external-provisioner
  creationTimestamp: "2023-04-05T23:35:47Z"
  finalizers:
    - kubernetes.io/pv-protection
  name: pvc-6c5a4111-d559-4dcb-b227-80ce8ec97b95
  resourceVersion: "101064"
  uid: a375f313-605e-43f5-9494-57779a294657
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: "13634816813"
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: windows10
    namespace: openshift-cnv
    resourceVersion: "101056"
    uid: 6c5a4111-d559-4dcb-b227-80ce8ec97b95
  nfs:
    path: /volume4/openshift/openshift-cnv-windows10-pvc-6c5a4111-d559-4dcb-b227-80ce8ec97b95
    server: 192.168.1.253
  persistentVolumeReclaimPolicy: Delete
  storageClassName: synology
  volumeMode: Filesystem
status:
  phase: Bound
```

Ugo Palatucci
Thanks @gveitmic for the explanation. I'll dig more into it.
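One way to express the distinction this bug turns on: PVCs created from the VM's own dataVolumeTemplates are owned by the VM, while a directly referenced PVC like windows10 may be shared with other VMs. A hypothetical sketch (not the console's actual code) of a delete count that only includes VM-owned disks:

```python
# Hypothetical sketch, not the console's actual implementation: a disk is
# treated as deletable with the VM only when its backing DataVolume appears
# in the VM's own dataVolumeTemplates; directly referenced PVCs are kept.

def deletable_volume_names(vm: dict) -> list:
    """Names of volumes whose storage the VM owns via dataVolumeTemplates."""
    spec = vm["spec"]
    owned = {t["metadata"]["name"] for t in spec.get("dataVolumeTemplates", [])}
    return [
        v["name"]
        for v in spec["template"]["spec"]["volumes"]
        if v.get("dataVolume", {}).get("name") in owned
    ]

# windows-2, reduced to the fields used above (from the oc output earlier):
vm = {
    "spec": {
        "dataVolumeTemplates": [{"metadata": {"name": "windows-2-osdisk"}}],
        "template": {"spec": {"volumes": [
            {"name": "cdrom",
             "persistentVolumeClaim": {"claimName": "windows10"}},
            {"name": "osdisk",
             "dataVolume": {"name": "windows-2-osdisk"}},
        ]}},
    }
}

# Only osdisk is owned; the shared windows10 ISO PVC would be left alone.
assert deletable_volume_names(vm) == ["osdisk"]
```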