Description of problem:
VM with hotpluggable disks and namespace-defined LimitRanges won't restart due to hardcoded limits.

Version-Release number of selected component (if applicable):
4.11

How reproducible:
Always

Steps to Reproduce:
1. Set a specific LimitRange on a namespace (core) of an RHOCP cluster:

apiVersion: v1
kind: LimitRange
metadata:
  name: resource-limits
spec:
  limits:
  - type: Container
    maxLimitRequestRatio:
      memory: "1"
    max:
      ephemeral-storage: "20Gi"
      memory: "32Gi"
    min:
      memory: "32Mi"
    default:
      ephemeral-storage: "200Mi"
      memory: "32Mi"
    defaultRequest:
      ephemeral-storage: "50Mi"
  - type: openshift.io/Image
    max:
      storage: 1Gi

2. Restart a VM that has additional DataVolumes with the hotpluggable option set to true; the restart fails with the error below.

Actual results:

Warning  FailedCreate  0m27s  virtualmachine-controller  Error creating attachment pod: pods "hp-volume-4d7hc" is forbidden: [minimum memory usage per Container is 32Mi, but request is 2M, memory max limit to request ratio per Container is 1, but provided ratio is 40.000000]

Expected results:
VMs and hotplug work well.

Additional info:
The suspicion is that this is caused by the hotplug attachment pod, since in the upstream code its resource requests and limits appear to be hardcoded.
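The two admission failures quoted above follow directly from the LimitRange arithmetic: the attachment pod's hardcoded 2M memory request is below the 32Mi minimum, and the reported ratio of 40 implies a hardcoded limit of 80M against the allowed ratio of 1. A minimal sketch of those two checks (the 80M limit is inferred from the reported ratio; the helper name is illustrative, not from kube-apiserver):

```python
# Illustrative re-creation of the two LimitRange container checks that
# reject the hotplug attachment pod. All values are in bytes; the 2M
# request is the pod's hardcoded value and the 80M limit is inferred
# from the reported ratio of 40.
MI = 1024 * 1024   # Mi (binary)
M = 1000 * 1000    # M (decimal)

def limit_range_errors(request, limit, min_mem, max_ratio):
    """Return LimitRange violation messages for a container's memory values."""
    errors = []
    if request < min_mem:
        errors.append(
            "minimum memory usage per Container is 32Mi, but request is 2M"
        )
    ratio = limit / request
    if ratio > max_ratio:
        errors.append(
            "memory max limit to request ratio per Container is 1, "
            f"but provided ratio is {ratio:.6f}"
        )
    return errors

for e in limit_range_errors(request=2 * M, limit=80 * M,
                            min_mem=32 * MI, max_ratio=1):
    print(e)
```

With these inputs the sketch reproduces both messages from the FailedCreate event, which is why no user-level VM setting can work around the rejection: the pod spec itself has to change.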
Tested on CNV v4.11.3-8; the issue has been fixed.

$ oc get limitranges -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: LimitRange
  metadata:
    creationTimestamp: "2023-01-18T13:24:46Z"
    name: resource-limits
    namespace: test
    resourceVersion: "1176087"
    uid: 975fb5a9-1c54-430a-84b6-8f0b1826b225
  spec:
    limits:
    - default:
        ephemeral-storage: 200Mi
        memory: 128Mi
      defaultRequest:
        ephemeral-storage: 50Mi
        memory: 128Mi
      maxLimitRequestRatio:
        memory: "1"
      type: Container
kind: List
metadata:
  resourceVersion: ""

Manifests used for verification:

apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: dv1
spec:
  source:
    http:
      url: http://url/cirros-images/cirros-0.4.0-x86_64-disk.qcow2
  storage:
    resources:
      requests:
        storage: 100Mi
    storageClassName: ocs-storagecluster-ceph-rbd
  contentType: kubevirt
---
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: blank-dv
spec:
  source:
    blank: {}
  pvc:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 50Mi
    storageClassName: ocs-storagecluster-ceph-rbd
    volumeMode: Block
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/vm: vm-cirros
  name: vm-cirros
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/vm: vm-cirros
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: dv-disk
          - disk:
              bus: virtio
            name: cloudinitdisk
        resources:
          limits:
            cpu: 100m
            memory: 90Mi
          requests:
            cpu: 100m
            memory: 90Mi
      terminationGracePeriodSeconds: 0
      volumes:
      - name: dv-disk
        dataVolume:
          name: dv1
      - cloudInitNoCloud:
          userData: |
            #!/bin/sh
            echo 'printed from cloud-init userdata'
        name: cloudinitdisk

$ virtctl addvolume vm-cirros --volume-name=blank-dv
Successfully submitted add volume request to VM vm-cirros for volume blank-dv

$ oc get pod
NAME                            READY   STATUS    RESTARTS   AGE
hp-volume-4qg7x                 1/1     Running   0          12s
virt-launcher-vm-cirros-7j7ps   1/1     Running   0          63s
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Virtualization 4.11.3 Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2023:0621