Bug 2239915 - MTQ does not work with Auto CPU limits
Summary: MTQ does not work with Auto CPU limits
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 4.14.0
Hardware: Unspecified
OS: Unspecified
Severity: high
Priority: high
Target Milestone: ---
Target Release: 4.14.0
Assignee: Barak
QA Contact: Kedar Bidarkar
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2023-09-20 19:12 UTC by Denys Shchedrivyi
Modified: 2023-11-08 14:07 UTC (History)
2 users

Fixed In Version: v4.14.0.rhel9-2161
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-11-08 14:06:16 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github kubevirt managed-tenant-quota pull 25 0 None Merged [release-v1.1] Add get namespaces permission to rbac 2023-09-21 10:08:14 UTC
Red Hat Issue Tracker CNV-33152 0 None None None 2023-09-20 19:13:27 UTC
Red Hat Product Errata RHSA-2023:6817 0 None None None 2023-11-08 14:07:21 UTC

Description Denys Shchedrivyi 2023-09-20 19:12:56 UTC
Description of problem:
 After adding autoCPULimitNamespaceLabelSelector to the KubeVirt configuration, all VMs get resources.limits.cpu set automatically. I created a VMMRQ to allocate additional resources during the migration, but it does not help: the migration stays in the Pending state.

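For reference, enabling the selector looks roughly like the sketch below (the namespace label key/value and the HCO field placement are assumptions based on the feature, not taken from this report):

```yaml
# Hypothetical HyperConverged snippet: VMs in namespaces matching this
# selector get an automatic CPU limit equal to their CPU request.
# The label key/value below are illustrative assumptions.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  resourceRequirements:
    autoCPULimitNamespaceLabelSelector:
      matchLabels:
        autocpulimit: "true"
```
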
VM and VMI have resources.requests only:

>  oc get vm vm-fedora-cpu-auto-lim -o json | jq .spec.template.spec.domain.resources
> {
>  "requests": {
>    "cpu": "1",
>    "memory": "1Gi"
>  }
> }

The POD has requests and limits (because of auto CPU limits set):

> $ oc get pod virt-launcher-vm-fedora-cpu-auto-lim-qdj5q -o json | jq .spec.containers[0].resources
> {
>   "limits": {
>     "cpu": "1"
>   },
>   "requests": {
>     "cpu": "1"
>   }
> }

The CPU usage is very close to the resource quota, leaving too little headroom for the migration target pod:

> $ oc get resourcequota
> NAME        AGE   REQUEST                     LIMIT
> quota-cpu   80m   requests.cpu: 1001m/1100m   limits.cpu: 1010m/1100m

Created VMMRQ with additional cpu:

> $ oc get vmmrq  my-vmmrq-cpu-4 -o json | jq .spec
> {
>   "additionalMigrationResources": {
>     "limits.cpu": 4,
>     "requests.cpu": 4
>   }
> }

However, the migration is still in the Pending state:

> $ oc get vmim
> NAME                        PHASE     VMI
> kubevirt-migrate-vm-kd2rh   Pending   vm-fedora-cpu-auto-lim


Version-Release number of selected component (if applicable):
4.14

How reproducible:
100%

Steps to Reproduce:
1. enable autoCPULimitNamespaceLabelSelector in the HCO
2. create a resourcequota with CPU requests and limits
3. create a VM with a CPU request, but without limits
4. create a VMMRQ with additional requests and limits
5. migrate the VM
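
The quota and VMMRQ objects from the steps above can be sketched as follows (names and quantities follow the outputs quoted earlier; the VMMRQ apiVersion is an assumption about the managed-tenant-quota CRD):

```yaml
# ResourceQuota matching the "oc get resourcequota" output above.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-cpu
spec:
  hard:
    requests.cpu: 1100m
    limits.cpu: 1100m
---
# VMMRQ granting temporary extra quota while a migration runs
# (apiVersion assumed from the managed-tenant-quota project).
apiVersion: mtq.kubevirt.io/v1alpha1
kind: VirtualMachineMigrationResourceQuota
metadata:
  name: my-vmmrq-cpu-4
spec:
  additionalMigrationResources:
    requests.cpu: 4
    limits.cpu: 4
```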

Actual results:
 Migration is in Pending state

Expected results:
 Migration completed

Additional info:

Comment 1 Barak 2023-09-21 10:08:15 UTC
This PR should fix the issue:
https://github.com/kubevirt/managed-tenant-quota/pull/25
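
Per the PR title ("Add get namespaces permission to rbac"), the fix grants the MTQ controller permission to read Namespace objects so it can evaluate the auto CPU limit namespace label. A rule of that shape would look roughly like this (role name is hypothetical, not taken from the PR):

```yaml
# Illustrative ClusterRole fragment; the actual PR may scope this
# differently. Grants read access to Namespace objects.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: mtq-controller   # hypothetical name
rules:
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get"]
```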

Comment 2 Denys Shchedrivyi 2023-10-04 22:07:06 UTC
 Verified on v4.14.0.rhel9-2166

 The resource quota is temporarily increased and the migration completes.

Comment 4 errata-xmlrpc 2023-11-08 14:06:16 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Virtualization 4.14.0 Images security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6817

