Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 2109368

Summary: [RFE] provide a mechanism to reserve quota for migration
Product: OpenShift Container Platform
Reporter: nijin ashok <nashok>
Component: kube-apiserver
Assignee: Abu Kashem <akashem>
Status: CLOSED NOTABUG
QA Contact: Ke Wang <kewang>
Severity: high
Docs Contact:
Priority: unspecified
Version: 4.10
CC: jchaloup, mfojtik, xxia
Target Milestone: ---
Keywords: Reopened
Target Release: ---
Hardware: All
OS: Linux
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2023-01-16 14:37:01 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:

Description nijin ashok 2022-07-21 05:23:15 UTC
Description of problem:

Defined a ResourceQuota for the namespace as below:

~~~
apiVersion: v1
kind: ResourceQuota
metadata:
  name: cnv-quota
spec:
  hard:
    requests.cpu: "400m"
~~~

Started a VM with a CPU request of 300m.
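
For reference, a minimal sketch of the relevant part of such a VirtualMachine spec (the VM name and everything outside the resource request are assumptions, not taken from this report):

~~~
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: rhel8-example            # hypothetical name
spec:
  running: true
  template:
    spec:
      domain:
        devices: {}
        resources:
          requests:
            cpu: "300m"          # counted against requests.cpu in cnv-quota
~~~

After the VM starts, the quota usage looks like this: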

~~~
Resource Quotas
  Name:         cnv-quota
  Resource      Used  Hard
  --------      ---   ---
  requests.cpu  300m  400m
~~~
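
The quota consumption shown above can be inspected with the command below (the namespace placeholder is an assumption; the report does not name the namespace):

~~~
oc describe resourcequota cnv-quota -n <namespace>
~~~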

Tried to live migrate the VM; the migration failed with the error below because the namespace doesn't have enough quota left to create the destination virt-launcher pod:

~~~
1s          Warning   FailedCreate                                    virtualmachineinstance/rhel8-wild-moth                                                        (combined from similar events): Error creating pod: pods "virt-launcher-rhel8-wild-moth-w65h2" is forbidden: exceeded quota: cnv-quota, requested: requests.cpu=300m, used: requests.cpu=300m, limited: requests.cpu=400m
~~~

Version-Release number of selected component (if applicable):

OpenShift Virtualization 4.10.2

How reproducible:

100%

Steps to Reproduce:

Please refer to the description above.

Actual results:

VM live migration fails with an "exceeded quota" error while creating the destination pod.

Expected results:

A mechanism to reserve quota for live migration. A user cannot raise the limit just to facilitate migration, and calculating the limits while also accounting for live migration overhead is not easy. And if the admin reserves some resources for migration, a normal user may accidentally consume this reserve for general workloads, and the admin has no control over this.

Comment 1 sgott 2022-08-10 12:22:16 UTC
This is simply not possible in KubeVirt. The basic issue is that KubeVirt doesn't have authority over quotas; those are assigned by the cluster admin.

In terms of "reserving" quota for a specific use, I believe that's a general OpenShift discussion.

I'm re-assigning this to OCP. I might have assigned the incorrect component, so please feel free to re-assign.

Comment 2 Jan Chaloupka 2022-08-11 09:36:05 UTC
> A user cannot raise the limit just to facilitate migration, and calculating the limits while also accounting for live migration overhead is not easy. And if the admin reserves some resources for migration, a normal user may accidentally consume this reserve for general workloads, and the admin has no control over this.

Kubernetes/OCP has no concept of live migration. If the configured quota is not sufficient for the migration, it needs to be temporarily increased to accommodate the increased demand. Since the destination pod cannot even be created, the kube-scheduler cannot preempt the general workload to reduce resource consumption, which makes priority classes and preemption unusable here; the scheduler cannot make any decision at all, because the scheduling phase only happens after a pod has been created.
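
A rough sketch of that temporary increase (the bumped value and the namespace are assumptions; the original hard limit would be restored once the migration finishes):

~~~
# temporarily raise the hard limit so the destination virt-launcher pod fits
oc patch resourcequota cnv-quota -n <namespace> --type=merge \
  -p '{"spec":{"hard":{"requests.cpu":"700m"}}}'

# ... live migrate the VM ...

# restore the original hard limit afterwards
oc patch resourcequota cnv-quota -n <namespace> --type=merge \
  -p '{"spec":{"hard":{"requests.cpu":"400m"}}}'
~~~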

What is being requested here is a mechanism that allows creating a new pod in the same namespace without changing the hard resource quota, even though this new pod together with the already existing pods exceeds the hard resource quota. In other words, a specific pod would temporarily "escape" the resource quota constraints so it can replace the current pod/VM once the live migration is complete. This is currently impossible and goes against the design principles.

The closest solution is to use LimitRanges [1] together with label selectors to further restrict which pods can request the available resources [2]. Unfortunately, the ability to use label selectors (or a different mechanism for selecting pods) with LimitRanges has not been implemented yet; a minimal sketch of a plain LimitRange follows the references below.

[1] https://kubernetes.io/docs/concepts/policy/limit-range/
[2] https://github.com/kubernetes/kubernetes/issues/56799
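
For illustration, a minimal LimitRange as described in [1] (the values are assumptions); note that it applies to every container in the namespace, because the label-selector mechanism tracked in [2] does not exist:

~~~
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: "100m"   # CPU request applied to containers that do not set one
    max:
      cpu: "300m"   # maximum CPU limit allowed for any single container
~~~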

ResourceQuota/LimitRanges are enforced during admission handling on the apiserver side.

Comment 4 Michal Fojtik 2023-01-16 11:39:31 UTC
Dear reporter, we greatly appreciate the bug you have reported here. Unfortunately, due to the migration to a new issue-tracking system (https://issues.redhat.com/), we cannot continue triaging bugs reported in Bugzilla. Since this bug has been stale for multiple days, we have therefore decided to close it.
If you think this is a mistake, or this bug has a higher priority or severity than set today, please feel free to reopen it and tell us why. We are going to move every re-opened bug to https://issues.redhat.com.

Thank you for your patience and understanding.