Bug 1804037

Summary: Scheduling Memory calculation disregards huge-pages
Product: Red Hat Enterprise Virtualization Manager
Reporter: Germano Veit Michel <gveitmic>
Component: ovirt-engine
Assignee: Andrej Krejcir <akrejcir>
Status: CLOSED ERRATA
QA Contact: Polina <pagranat>
Severity: high
Docs Contact:
Priority: high
Version: 4.3.8
CC: ahadas, akrejcir, emarcus, klaas, pelauter, sgoodman
Target Milestone: ovirt-4.4.1
Flags: lsvaty: testing_plan_complete-
Target Release: ---
Hardware: x86_64
OS: Linux
Whiteboard:
Fixed In Version: rhv-4.4.0-29
Doc Type: Bug Fix
Doc Text:
Previously, the `Memory` scheduling filter did not correctly consider memory for huge pages. Consequently, the Manager tried to start virtual machines without huge pages on the memory dedicated to huge pages. With this update, the `Memory` filter correctly considers huge page memory.
Story Points: ---
Clone Of:
Environment:
Last Closed: 2020-09-23 16:11:04 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: Virt
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description Germano Veit Michel 2020-02-18 04:59:28 UTC
Description of problem:

The engine appears to allow unintended memory overcommit when huge pages are configured on a host and the scheduler is placing VMs that do not use huge pages (non-HP VMs).

Version-Release number of selected component (if applicable):
vdsm-4.30.40-1.el7ev.x86_64
ovirt-engine-4.3.8.2-0.4.el7.noarch

How reproducible:
Always

Steps to Reproduce:
1. Starting point
   * Cluster with no overcommit (100%)
   * Host with 8G of memory
   ==> <max_scheduling_memory>7963934720</max_scheduling_memory>

2. Configure 3 huge pages of 1G each and reboot the host (one way to do this is sketched after these steps):
   Scheduling memory stays the same, although only ~5G of non-HP memory remains:
   ==> <max_scheduling_memory>7963934720</max_scheduling_memory>

3. The user can now run non-HP VMs totalling up to 8G, which can crash the host, because the real free memory is only about 5G after the huge pages are configured.
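
For reference, here is a sketch of how the huge pages in step 2 can be configured and how the scheduling memory is read back. The engine FQDN, host ID and credentials are placeholders, and grubby is just one common way to set static 1G huge pages on RHEL:

# grubby --update-kernel=ALL --args="default_hugepagesz=1G hugepagesz=1G hugepages=3"
# reboot
...
# curl -s -k -u admin@internal:<password> https://<engine-fqdn>/ovirt-engine/api/hosts/<host-id> | grep max_scheduling_memory
    <max_scheduling_memory>7963934720</max_scheduling_memory>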

As an example of step 3, I started 2x 3.5G VMs, for a total of 7G of committed memory (non-HP):

# vdsm-client Host getStats | grep memCommitted
    "memCommitted": 7000, 

while the host has only ~5G of memory actually available for non-HP use:

# egrep 'MemTotal|Hugepagesize|HugePages_Total' /proc/meminfo 
MemTotal:        8173012 kB
HugePages_Total:       3
Hugepagesize:    1048576 kB
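
To make the ~5G figure explicit (a back-of-the-envelope check based on the meminfo output above, ignoring other kernel reservations):

# echo "$(( 8173012 - 3 * 1048576 )) kB left for non-HP use"
    5027284 kB left for non-HP use

So roughly 4.8G remains for non-HP memory, while 7G is already committed and the engine still advertises 7963934720 bytes (~7.9G) of scheduling memory.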

Actual results:
Memory reserved for huge pages is not subtracted from the scheduling memory calculation when scheduling non-HP VMs.

Expected results:
The huge page memory should be subtracted from the scheduling memory calculation.
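
For illustration, the expected behaviour amounts to something like the following (a sketch only, with made-up variable names, not the engine's actual formula; overcommit and reserved-memory terms are ignored):

# all values in kB, taken from /proc/meminfo
mem_total=8173012
hugepages_kb=$(( 3 * 1048576 ))                         # HugePages_Total * Hugepagesize
schedulable_for_non_hp=$(( mem_total - hugepages_kb ))  # ~5027284 kB, not ~8G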

Comment 1 Germano Veit Michel 2020-02-18 05:01:08 UTC
(In reply to Germano Veit Michel from comment #0)
> 3. The user can now run non-HP VMs totalling up to 8G, which can crash the
> host, because the real free memory is only about 5G after the huge pages are
> configured.

and can also run a 3G huge pages VM at the same time...
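
Putting the two together (simple arithmetic on the numbers above, counting each 3.5G VM as 3500 MB):

# echo "$(( 2 * 3500 + 3 * 1024 )) MB demanded vs $(( 8173012 / 1024 )) MB physical"
    10072 MB demanded vs 7981 MB physical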

Comment 2 Germano Veit Michel 2020-02-18 05:44:26 UTC
As this can get confusing...

BZ1804037 - Scheduling Memory calculation disregards huge-pages ---> for not considering statically allocated huge pages set on the kernel cmdline when calculating scheduling memory
BZ1804046 - Engine does not reduce scheduling memory when a VM with dynamic hugepages runs ---> for not considering VMs running with dynamic huge pages when calculating scheduling memory

Comment 3 Michal Skrivanek 2020-03-10 12:25:26 UTC
Changing the SLA team to Virt; we are no longer tracking SLA separately.

Comment 7 Polina 2020-06-03 06:38:50 UTC
Verifying according to https://bugzilla.redhat.com/show_bug.cgi?id=1804037#c5 and #c6: huge pages are now subtracted, and the discussion of the UI issue was not continued.

Comment 15 errata-xmlrpc 2020-09-23 16:11:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Red Hat Virtualization security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:3807