Bug 2241953 - MTQ does not work with LimitRanges
Summary: MTQ does not work with LimitRanges
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Container Native Virtualization (CNV)
Classification: Red Hat
Component: Virtualization
Version: 4.14.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.14.1
Assignee: Barak
QA Contact: Kedar Bidarkar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2023-10-03 15:33 UTC by Denys Shchedrivyi
Modified: 2023-12-07 15:00 UTC (History)

Fixed In Version: v4.14.1.rhel9-8
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2023-12-07 15:00:50 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github kubevirt managed-tenant-quota pull 31 0 None Merged [release-v1.1] Bump kubevirt to v1.0.0 and add limit range support 2023-10-19 07:39:42 UTC
Red Hat Issue Tracker CNV-33665 0 None None None 2023-10-03 15:33:52 UTC
Red Hat Product Errata RHSA-2023:7704 0 None None None 2023-12-07 15:00:52 UTC

Description Denys Shchedrivyi 2023-10-03 15:33:38 UTC
Description of problem:
 Managed-Tenant-Quota does not increase quotas for VMs whose memory/CPU limits are set by a LimitRange.

I have a LimitRange that sets limits on VMs automatically:

> $ oc describe limitrange
> Name:       resource-limits
> Namespace:  test-mtq
> Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
> ----        --------  ---  ---  ---------------  -------------  -----------------------
> Container   cpu       -    2    100m             1              -
> Container   memory    -    2Gi  1Gi              1536Mi         -


The pod has both requests and limits (injected by the LimitRange):

> $ oc get pod virt-launcher-vm-fedora-cpu-mem-1-f4rtp -o json | jq .spec.containers[0].resources
> {
>   "limits": {
>     "cpu": "1",
>     "memory": "1536Mi"
>   },
>   "requests": {
>     "cpu": "1",
>     "memory": "1268Mi"
>   }
> }

and there is a VMMRQ:

> $ oc get vmmrq my-vmmrq -o json |  jq .spec
> {
>   "additionalMigrationResources": {
>     "limits.cpu": 4,
>     "limits.memory": "4.5Gi",
>     "requests.cpu": 4,
>     "requests.memory": "4.5Gi"
>   }
> }

But migration is still in Pending state:

> $ oc get vmim
> NAME                        PHASE     VMI
> kubevirt-migrate-vm-52x2p   Pending   vm-fedora-cpu-mem-1



Version-Release number of selected component (if applicable):
4.14

How reproducible:
100%

Steps to Reproduce:
1. Create a LimitRange
2. Create a ResourceQuota with requests and limits
3. Create a VM with a CPU request but without limits
4. Create a VMMRQ with requests and limits
5. Migrate the VM
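
The steps above can be sketched with `oc`/`virtctl` as follows. This is a hedged reproduction sketch, not a command log from the report: the namespace `test-mtq`, the object names, the LimitRange and VMMRQ values are taken from the output quoted in the description, while the ResourceQuota name and values, the VMMRQ apiVersion (`mtq.kubevirt.io/v1alpha1`), and the quota sizing are assumptions.

```shell
# 1. LimitRange that injects default requests/limits into pods
#    (values match the `oc describe limitrange` output above)
oc apply -n test-mtq -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
spec:
  limits:
  - type: Container
    max:
      cpu: "2"
      memory: 2Gi
    defaultRequest:
      cpu: 100m
      memory: 1Gi
    default:
      cpu: "1"
      memory: 1536Mi
EOF

# 2. ResourceQuota covering both requests and limits
#    (name and values are assumptions, sized so one launcher pod fits
#     but a second migration-target pod would not)
oc apply -n test-mtq -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: mtq-quota
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "2"
    limits.memory: 2Gi
EOF

# 3. Create a VM with a CPU request but no limits (VM manifest omitted)

# 4. VMMRQ granting extra headroom for the migration-target pod
#    (spec matches the `oc get vmmrq` output above; apiVersion assumed)
oc apply -n test-mtq -f - <<'EOF'
apiVersion: mtq.kubevirt.io/v1alpha1
kind: VirtualMachineMigrationResourceQuota
metadata:
  name: my-vmmrq
spec:
  additionalMigrationResources:
    requests.cpu: 4
    requests.memory: 4.5Gi
    limits.cpu: 4
    limits.memory: 4.5Gi
EOF

# 5. Trigger the migration and watch its phase
virtctl migrate vm-fedora-cpu-mem-1 -n test-mtq
oc get vmim -n test-mtq -w
```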

Actual results:
 Migration is in Pending state

Expected results:
 Migration completed

Additional info:

Comment 2 Denys Shchedrivyi 2023-11-27 20:26:21 UTC
Verified on CNV-v4.14.1.rhel9-62; MTQ works well with LimitRanges.

Comment 9 errata-xmlrpc 2023-12-07 15:00:50 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Virtualization 4.14.1 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:7704

