Description of problem:

The namespace has the below limits defined:

~~~
oc describe limits
Name:       resource-limits
Namespace:  default
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   memory    -    5Gi  1Gi              3Gi            -
Container   cpu       -    2    2                2              -
~~~

A VM was created from the OpenShift UI from the default template. The template doesn't have any limits defined. Immediately after starting the VM, the below warning message was shown in the console:

~~~
Pending Changes
The following areas have pending changes that will be applied when this VirtualMachine is restarted.
Details
CPU | Memory
~~~

There were no manual CPU/Memory changes. However, since there are limits defined on the namespace, the default limits are injected into the VMI object:

~~~
oc get vmi rhel6-continuing-damselfly -o yaml | yq -y '.spec.domain.resources'
limits:
  cpu: '2'
  memory: 3Gi
requests:
  cpu: '2'
  memory: 2Gi
~~~

It looks like this update is triggering the warning, even though the limits were already applied to the virt-launcher pod:

~~~
oc get pod virt-launcher-rhel6-continuing-damselfly-s8ckb -o yaml | yq -y '.spec.containers[0].resources'
limits:
  cpu: '2'
  devices.kubevirt.io/kvm: '1'
  devices.kubevirt.io/tun: '1'
  devices.kubevirt.io/vhost-net: '1'
  memory: 3313Mi
requests:
  cpu: '2'
  devices.kubevirt.io/kvm: '1'
  devices.kubevirt.io/tun: '1'
  devices.kubevirt.io/vhost-net: '1'
  ephemeral-storage: 50M
  memory: 2289Mi
~~~

Version-Release number of selected component (if applicable):
OCP 4.11.4
OpenShift Virtualization 4.11.0

How reproducible:
100%

Steps to Reproduce:
1. Define limits on the namespace.
2. Create a VM from the default template and start the VM.
3. Immediately after starting the VM, the below warning is shown in the UI:
   Pending Changes
   The following areas have pending changes that will be applied when this VirtualMachine is restarted.
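For reference, a LimitRange manifest matching the `oc describe limits` output above would look roughly like the sketch below. The original manifest is not attached to this bug, so this is a reconstruction from the described values (name and namespace are taken from the output):

~~~
apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
  namespace: default
spec:
  limits:
  - type: Container
    max:                # Max column
      cpu: "2"
      memory: 5Gi
    defaultRequest:     # Default Request column
      cpu: "2"
      memory: 1Gi
    default:            # Default Limit column
      cpu: "2"
      memory: 3Gi
~~~

Applying this (e.g. `oc apply -f limitrange.yaml`) in a namespace and then creating a VM from the default template should reproduce the warning.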
Actual results:
Incorrect pending changes warning about memory and CPU while starting a VM in a namespace with LimitRanges.

Expected results:

Additional info:
Hi Nijin, thank you for opening this bug. I have a question: what exactly are the expected results of this bug? I'd like to make sure I understand this problem correctly. Thanks.
(In reply to Hilda Stastna from comment #1)
> Hi Nijin,
>
> thank you for opening this bug. I have a question:
> what exactly are the expected results of this bug? I'd like to make sure I
> understand this problem correctly. Thanks.

Hello Hilda,

The warning message in the UI is incorrect since the limits are already applied. So it is not a "pending change" and the message is confusing.
So the expected result is not to display the warning message? Or to change the message being displayed?
(In reply to Hilda Stastna from comment #3)
> So the expected result is not to display the warning message? Or to change
> the message being displayed?

It's not to display the warning, since there are no "pending changes".
Hi Nijin,

so I was exploring this bug and was able to reproduce the scenario you've described, but I am not sure if it is a bug. The warning is displayed because there is a difference between the VM and VMI "requests" objects:

VM yaml:
~~~
...
resources:
  requests:
    memory: 2Gi
...
~~~

VMI yaml:
~~~
...
resources:
  ...
  requests:
    cpu: '2'
    memory: 2Gi
...
~~~

I am sure you can check the whole yaml files if needed. IMO the "issue" you've described is a natural consequence of what's defined in the template used for creating a VM vs. the limits we specified for the chosen namespace, which influence the VMI. WDYT?
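To make the VM-vs-VMI difference concrete, the two stanzas quoted in this comment can be diffed. The snippet below is an offline illustration using the quoted values; the `/tmp` paths are arbitrary, and the `oc`/`yq` pipeline in the comments is what you would run against a live cluster (VM/VMI names are placeholders):

~~~shell
# On a live cluster, the equivalent comparison would be:
#   diff <(oc get vm <name> -o yaml | yq -y '.spec.template.spec.domain.resources') \
#        <(oc get vmi <name> -o yaml | yq -y '.spec.domain.resources')

# Resource stanza from the VM object (as quoted above)
cat > /tmp/vm-resources.yaml <<'EOF'
requests:
  memory: 2Gi
EOF

# Resource stanza from the VMI object (as quoted above)
cat > /tmp/vmi-resources.yaml <<'EOF'
requests:
  cpu: '2'
  memory: 2Gi
EOF

# diff exits non-zero when the files differ; '|| true' keeps the shell happy
diff /tmp/vm-resources.yaml /tmp/vmi-resources.yaml || true
~~~

The only difference is the injected `cpu: '2'` request on the VMI side, which is exactly what the UI flags as a "pending change".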
This is a valid bug. The VM is created from the default yaml, so it should not show pending changes immediately after the VM is created, under any conditions.
(In reply to Hilda Stastna from comment #5)
> IMO the "issue" you've described is a natural consequence of what's defined
> in the template used for creating a VM vs. limits we specified for the
> chosen namespace, that influence the VMI. WDYT?

The message is wrong from a user's perspective. The message is "The following areas have pending changes that will be applied when this VirtualMachine is restarted". We don't have any pending change here that will be applied after the VM restarts. Whatever changes were injected into the VMI are already applied to the virt-launcher pod. I think it's still a bug.
I am not sure if we are able to distinguish in the UI between this case and the other one, where there really are some pending changes. Also, I am not sure if this isn't more of a backend bug, as we get the info about the VMI from the backend. In addition, we were told that we should avoid adding too much logic into the UI code.
(In reply to Hilda Stastna from comment #8)
> I am not sure if we are able to distinguish between this case and the other
> one in the UI, the one when there are really some pending changes.
> Also I am not sure if this isn't more a backend bug, as we get the info
> about VMI from backend.
> In addition, we were told that we should avoid adding too much logic into
> the UI code.

Do we have to move the component to Virtualization?
Moving the bug to Virt for a look, because the VM and VMI get different "requests" objects when there are limits defined in the namespace. Please use the attached VM yaml and VMI yaml for reference.
Hi, Yes, AFAIK the changes were already merged.
Verified with build: CNV-v4.13.0.rhel9-1639

Steps:

1. Create limits:
~~~
$ oc describe limits
Name:       resource-limits
Namespace:  default
Type        Resource  Min  Max  Default Request  Default Limit  Max Limit/Request Ratio
----        --------  ---  ---  ---------------  -------------  -----------------------
Container   cpu       -    2    100m             2              -
Container   memory    -    5Gi  4Gi              4Gi            -
~~~

2. From the UI, created a VM from the default template. Immediately after starting the VM, there is no warning message shown.

Check the VMI definition; no LimitRange values are injected:
~~~
...
resources:
  requests:
    memory: 2Gi
...
~~~

Check the virt-launcher pod; the limits are already applied:
~~~
...
limits:
  cpu: "2"
  devices.kubevirt.io/kvm: "1"
  devices.kubevirt.io/tun: "1"
  devices.kubevirt.io/vhost-net: "1"
  memory: 4Gi
requests:
  cpu: 100m
  devices.kubevirt.io/kvm: "1"
  devices.kubevirt.io/tun: "1"
  devices.kubevirt.io/vhost-net: "1"
  ephemeral-storage: 50M
  memory: 2294Mi
...
~~~

Moving to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Virtualization 4.13.0 Images security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2023:3205