Bug 2162140 - CNV VirtualMachine pod used a lot more memory than it was supposed to
Summary: CNV VirtualMachine pod used a lot more memory than it was supposed to
Keywords:
Status: CLOSED DUPLICATE of bug 2167508
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 4.10.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.13.1
Assignee: Itamar Holder
QA Contact: Kedar Bidarkar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-01-18 21:05 UTC by Sean Haselden
Modified: 2023-03-28 13:45 UTC
CC List: 13 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-03-28 13:45:36 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Issue Tracker CNV-24491 0 None None None 2023-01-18 21:09:28 UTC
Red Hat Knowledge Base (Solution) 6995346 0 None None None 2023-01-25 11:13:45 UTC

Description Sean Haselden 2023-01-18 21:05:52 UTC
Description of problem:

The virt-launcher pod consumes more memory than it should be allowed to use.

- The VM was configured with memory requests of 8G, and that is the total memory visible when running free in the guest OS. The actual memory usage observed from inside the VM is approximately 2Gi. However, when inspecting the pod stats with crictl, the pod's memory usage is in the dozens of GiB of RAM.
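
For reference, a rough sketch of how such a comparison can be made (the container name and ID below are illustrative placeholders, not values captured from the affected cluster):

    # Inside the guest OS: total memory tracks the 8G request, actual usage is ~2Gi
    free -h

    # On the node hosting the virt-launcher pod: memory accounted to its containers
    crictl ps --name compute        # the main virt-launcher container is typically named "compute"
    crictl stats <container-id>     # the MEM column here reports dozens of GiB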


Version-Release number of selected component (if applicable):
OpenShift 4.10.16, CNV 4.10.1, KubeVirt v0.49.0-155-g32b905704

How reproducible:

- The VM has a single network interface attached to a cnv-bridge NAD (NetworkAttachmentDefinition), used for ingress/egress.
- A 3.2 TB PVC was attached to the VM, and the VM downloaded 2.5 TB of data from an S3 bucket (Ceph object storage); the data was written to that disk.
- The workload was expected to be CPU-intensive, not memory-intensive.
- As described above, the VM requests 8G of memory (which matches the total reported by free in the guest), and actual guest usage is approximately 2Gi, yet crictl shows the pod using dozens of GiB of RAM.

- The attached PVC is an additional disk, not an extension of the root disk; it is a block-mode volume using the virtio-scsi driver. The customer ran mkfs.xfs on the entire block device, with no partitions or LVM. We see the host cache being used for this I/O.
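
A rough way to check whether the extra usage is page cache generated by the guest's disk I/O, rather than anonymous memory used by the QEMU process, is to read the memory statistics of the virt-launcher pod's cgroup. The sketch below assumes a cgroup v2 layout and uses placeholder names for the namespace and pod:

    # The main virt-launcher container is typically named "compute".
    # On cgroup v1 nodes the file is /sys/fs/cgroup/memory/memory.stat and the
    # page-cache counter is called "cache" instead of "file".
    oc exec -n <vm-namespace> <virt-launcher-pod> -c compute -- \
        cat /sys/fs/cgroup/memory.stat | grep -E '^(anon|file) '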



Actual results:

The virt-launcher pod's memory usage, as reported by crictl, reaches dozens of GiB, far beyond the 8G of memory the VM requests.

Expected results:

The VM pod stays within the memory limit assigned to it.

Additional info:

Comment 10 Antonio Cardace 2023-03-03 16:47:53 UTC
Deferring to 4.13.1 due to capacity constraints.

