Bug 1726854
| Summary: | hostDisk devices are not removed when the VM is deleted | | |
|---|---|---|---|
| Product: | Container Native Virtualization (CNV) | Reporter: | joherr |
| Component: | Virtualization | Assignee: | Fabian Deutsch <fdeutsch> |
| Status: | CLOSED DEFERRED | QA Contact: | Israel Pinto <ipinto> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 1.4 | CC: | alitke, awels, cnv-qe-bugs, fdeutsch, rmohr, sgott |
| Target Milestone: | --- | | |
| Target Release: | 2.1.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-08-09 13:18:34 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
What was the reason why local PVs are not being used? Local PVs have the functionality to delete the content after the PVC is deleted.

This is hard to achieve from within KubeVirt. A hostDisk is the equivalent of hostPath in k8s, which does not have this feature either. The VM is not the one responsible for the hostPath, it is the VMI, so the hostPath would be deleted when the VMI is deleted (there you could instead use emptyDisk, which provides that, but that is not what you want for a VM). If delete is the only missing feature and hostDisk is in general simpler for a user than a local PV, for whatever reason, it would be pretty simple to e.g. add a CronJob which lists VMs, reads which disks they use and then goes to all nodes and removes unreferenced ones. This is of course not an API-only solution.

(In reply to Fabian Deutsch from comment #1)
> What was the reason why local PVs are not being used?
>
> Local PVs have the functionality to delete the content after the PVC is
> getting deleted.

hostDisks are much easier for them to set up, since it is done in the VM definition. It is also the closest to mimicking their current environments.

@Roman Question, is the following scenario possible?

1. I create a VM with some hostDisks in it.
2. I start a VMI from this VM and it is scheduled and started on node01. This creates the disks on node01; everything works as expected.
3. I stop the VMI; the disks remain on node01, as expected.
4. I start a VMI from the same VM again, and it is scheduled on node02, because node01 is down, or for some other reason the VMI can no longer be scheduled on node01, or it simply happens to get scheduled on node02.

If 4 is possible, would it create a new set of disk images on node02, or would the scheduler know it can only schedule the VMI on node01 because that is where the disks are?

@Alexander hostDisk has the same limitations as hostPath. If you want to ensure that the VMI only runs on node01, you will have to use nodeSelectors for that. If you place your hostDisks on e.g. an NFS share and mount it at the same location on multiple nodes, then you can use a common label for all nodes which have this mount available. So yes, if you don't bind the VMI to a specific node, the scheduler will schedule it wherever enough resources are present, which will lead to a new hostDisk being created on that node if none is present there. (See the sketch after this thread.)

Does the VM keep track of which node an associated VMI last ran on, in the status or somewhere? The reason this is hard is that, from a VM's perspective, we don't know which node the disk exists on, so the VM cannot determine how to clean it up.

hostDisks are limited to KubeVirt; moving to the virt team for their decision on how to resolve it.

This issue is going to be overcome by the availability of the hostpath provisioner. Closing this BZ due to that. Please feel free to re-open this if you feel this is in error.
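To illustrate the nodeSelector workaround described above, here is a minimal sketch of a VM definition pinned to node01. The apiVersion, VM name, node label, and resource request are assumptions and depend on the cluster; only the nodeSelector line is the point:

```yaml
apiVersion: kubevirt.io/v1alpha3   # assumed; matches KubeVirt of this era
kind: VirtualMachine
metadata:
  name: vm-hostdisk-4              # illustrative name
spec:
  running: false
  template:
    spec:
      # Pin the VMI to node01 so a restart cannot be scheduled onto
      # another node and silently create a second disk image there.
      nodeSelector:
        kubernetes.io/hostname: node01   # assumed node label
      domain:
        devices:
          disks:
          - name: os-disk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 1Gi              # illustrative value
      volumes:
      - name: os-disk
        hostDisk:
          capacity: 100Gi
          path: /srv/vms/vm-hostdisk-4.img
          type: DiskOrCreate
```

With the NFS variant from the comment, the nodeSelector would instead match a common label applied to every node that has the share mounted at the same path.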
Description of problem:
When a virtual machine is deleted, the hostDisk images that were created for it are not deleted.

Version-Release number of selected component (if applicable):
OCP 3.11, CNV 1.4

How reproducible:
Always

Steps to Reproduce:
1. Create a VM with at least one hostDisk volume
2. Start the VM
3. Stop the VM
4. Delete the VM

Actual results:
All hostDisk images remain.

Expected results:
The hostDisk images should be removed, along with any empty directory structure that was created.

Additional info:
VM volume definition:

```yaml
volumes:
- hostDisk:
    capacity: 100Gi
    path: /srv/vms/vm-hostdisk-4.img
    type: DiskOrCreate
  name: os-disk
```

```
# ls /srv/vms/
vm-hostdisk-4-os.img
```

Maybe add a new option, type, or type modifier to allow for the deletion of the hostDisk image and empty directory structure. Something like CreateAndDelete or DiskOrCreateAndDelete.
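For reference, a rough sketch of the hostpath-provisioner alternative named in the closing comment: back the disk with a dynamically provisioned PVC instead of a hostDisk, so deleting the claim reclaims the image on the node. The claim name is illustrative, and the storageClassName is an assumption that depends on how the provisioner is deployed:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-hostdisk-4                       # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: hostpath-provisioner    # assumed class name; varies by deployment
  resources:
    requests:
      storage: 100Gi
```

The VM's hostDisk volume would then be replaced by a persistentVolumeClaim volume referencing vm-hostdisk-4; a Delete reclaim policy on the class removes the backing file when the claim is deleted, which is the cleanup behavior this bug asks for.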