Bug 2013662 - Unexpected killing of virt-launcher pod, can result in loss of data for hotplugged volumes
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Storage
Version: 4.9.1
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
: 4.9.1
Assignee: Adam Litke
QA Contact: Yan Du
URL:
Whiteboard:
Depends On: 2007397 2021209
Blocks:
 
Reported: 2021-10-13 13:37 UTC by Maya Rashish
Modified: 2021-12-13 19:59 UTC
6 users

Fixed In Version: CNV 4.9.0-227
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2007397
Environment:
Last Closed: 2021-12-13 19:59:01 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github kubevirt kubevirt pull 6479 0 None Merged [release-0.44] Resolve hotplug race between kubelet and virt-handler 2021-11-08 14:58:37 UTC
Red Hat Product Errata RHBA-2021:5091 0 None None None 2021-12-13 19:59:15 UTC

Description Maya Rashish 2021-10-13 13:37:42 UTC
+++ This bug was initially created as a clone of Bug #2007397 +++

Description of problem:
When the virt-launcher pod is killed unexpectedly, there is a possibility that hotplugged filesystem volumes will have their disk.img file removed from the backing storage.

Version-Release number of selected component (if applicable):


How reproducible:
Intermittent.

Steps to Reproduce:
1. Start a VM and create a volume then hotplug that volume into the VM.
2. At this point there are 2 pods, the virt-launcher pod and the attachment pod (hpvolume-xyz)
3. Force delete the virt-launcher pod: kubectl delete pod virt-launcher-xyz-abcd --force --grace-period=0
4. This terminates the virt-launcher pod and puts the VMI into a Failed state.
5. Check the contents of the volume created in step 1. It should still contain a disk.img file, but occasionally the disk.img will be missing. The following scenario explains why:
- When the virt-launcher pod is deleted, a race begins between the kubelet and virt-handler, because an emptyDir is used as the mount point for the disk.img file. Virt-handler notices the pod is gone and unmounts all the hotplugged volumes from the virt-launcher pod, while the kubelet empties the contents of the emptyDir. If virt-handler runs first, there is no problem and everything is fine. If the kubelet runs first, it removes the contents of the emptyDir, including the bind-mounted disk.img files, which also removes the file from the source volume.
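The race above can be illustrated with a small, self-contained simulation. This is a toy model only: the Volume class and its fields are hypothetical stand-ins for the real mount state, not KubeVirt code. It shows why the outcome depends entirely on which actor runs first.

```python
# Toy model of the hotplug cleanup race (all names are hypothetical).
# "disk.img" lives on the source storage; the emptyDir entry is a bind
# mount of it. Wiping the emptyDir while the bind mount is still active
# propagates the delete to the source volume; unmounting first protects it.

class Volume:
    def __init__(self):
        self.source_has_disk_img = True   # disk.img present on backing storage
        self.bind_mounted = True          # emptyDir still bind-mounts disk.img

    def virt_handler_unmount(self):
        # virt-handler tears down the bind mount, detaching the emptyDir
        # entry from the source file
        self.bind_mounted = False

    def kubelet_wipe_emptydir(self):
        # kubelet blindly clears the emptyDir contents; if the bind mount
        # is still active, the delete reaches the source volume
        if self.bind_mounted:
            self.source_has_disk_img = False
        self.bind_mounted = False

# virt-handler wins the race: data survives
v = Volume()
v.virt_handler_unmount()
v.kubelet_wipe_emptydir()
print(v.source_has_disk_img)  # True

# kubelet wins the race: disk.img is lost from the source volume
w = Volume()
w.kubelet_wipe_emptydir()
w.virt_handler_unmount()
print(w.source_has_disk_img)  # False
```

In the real system the ordering is nondeterministic, which is why the data loss is only intermittent.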

Actual results:
A small percentage of the time, the disk.img file disappears because the kubelet wins the race and clears the emptyDir before virt-handler can unmount the volumes.

Expected results:
There is no race: virt-handler always unmounts first, or some other mechanism guarantees that no data is ever lost.

Additional info:
It is unlikely that the upstream Kubernetes community will accept patches that modify the emptyDir behavior of blindly wiping its contents, so we have to find a different solution.

--- Additional comment from Maya Rashish on 2021-10-13 09:29:22 UTC ---

There's already a backport merged for v4.9, but the target release is v4.10. Do you want to duplicate this bug for v4.9?

--- Additional comment from Alexander Wels on 2021-10-13 13:03:47 UTC ---

Yeah, we need a duplicate with a target of 4.9.1.

Comment 1 Yan Du 2021-11-17 12:33:49 UTC
Tested on the latest CNV 4.9.1.

Ran the test 100 times; the issue could not be reproduced. The disk.img file still exists even after the launcher pod is deleted.

Comment 7 errata-xmlrpc 2021-12-13 19:59:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Virtualization 4.9.1 Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:5091

