Description of problem:
When the virt-launcher pod is killed unexpectedly, there is a possibility that any hotplugged filesystem volumes will have their disk.img file removed from the backing storage.

Version-Release number of selected component (if applicable):

How reproducible:
Intermittent.

Steps to Reproduce:
1. Start a VM, create a volume, and hotplug that volume into the VM.
2. At this point there are two pods: the virt-launcher pod and the attachment pod (hpvolume-xyz).
3. Force delete the virt-launcher pod: kubectl delete pod virt-launcher-xyz-abcd --force --grace-period=0
4. This terminates the virt-launcher pod and puts the VMI into a Failed state.
5. Check the contents of the volume created in step 1. There should be a disk.img file, but a small number of times it will be missing.

The following scenario happened:
- When the virt-launcher pod is deleted, there is a race between the kubelet and virt-handler, because an emptyDir is used as the mount point for the disk.img file. The kubelet wipes the contents of the emptyDir, while virt-handler notices the pod is gone and unmounts all the hotplugged volumes from the virt-launcher pod. If virt-handler runs first, there is no problem and everything is fine. If the kubelet runs first, it removes the contents of the emptyDir, including the bind-mounted disk.img files, which also removes them from the source volumes (see the sketch at the end of this report).

Actual results:
A small percentage of the time, the disk.img file disappears because the kubelet wins the race and clears the emptyDir before virt-handler can unmount the volumes.

Expected results:
There is no race and virt-handler always unmounts first, or some other mechanism guarantees that 100% of the time, no data is lost.

Additional info:
It is unlikely that the upstream Kubernetes community will accept patches that change the emptyDir behavior of blindly wiping its contents. We have to find a different solution.
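To make the failure mode concrete, below is a minimal shell sketch of the bind-mount semantics at the heart of this race. This is not kubelet or virt-handler code, and all paths are illustrative; it needs root and should only be run on a throwaway machine. The point is that a recursive delete of a directory that still contains a live bind mount descends into the mount and removes the files from the source:

  # Stand-in for the hotplugged source volume holding disk.img
  mkdir -p /tmp/source-volume
  truncate -s 1M /tmp/source-volume/disk.img

  # Stand-in for the virt-launcher pod's emptyDir
  mkdir -p /tmp/emptydir/hotplug-disk

  # virt-handler bind mounts the volume into the emptyDir
  mount --bind /tmp/source-volume /tmp/emptydir/hotplug-disk

  # What the kubelet's emptyDir cleanup effectively does when it wins
  # the race: a recursive delete of the emptyDir contents. rm descends
  # into the still-mounted bind mount and unlinks disk.img; only the
  # mountpoint directory itself survives (EBUSY, hence the redirect).
  rm -rf /tmp/emptydir/* 2>/dev/null

  # The file is now gone from the SOURCE volume as well:
  ls /tmp/source-volume

  # Cleanup
  umount /tmp/emptydir/hotplug-disk
  rm -rf /tmp/emptydir /tmp/source-volume

If virt-handler unmounts the volume before the kubelet's cleanup runs, the recursive delete only ever sees an empty directory, which is why the bug is intermittent.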
There's already a backport merged for v4.9, but the target release is v4.10. Do you want to duplicate this bug for v4.9?
Yeah, we need a duplicate with a target of 4.9.1.
Ran the test 100 times and the issue could not be reproduced. Moving the bug to Verified on CNV v4.10.0-218.
This is in 4.9.0. Updating target release.
There is another bug for 4.9.0: #2013662.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Virtualization 4.10.0 Images security and bug fix update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:0947