Bug 1935217

Summary: [CNV-2.5] Manifests in openshift-cnv missing resource requirements - Storage
Product: Container Native Virtualization (CNV)
Component: Storage
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Version: 2.5.3
Target Release: 4.10.0
Hardware: All
OS: Linux
Fixed In Version: CNV v4.10.0-164, virt-cdi-operator v4.10.0-21
Reporter: sgott
Assignee: Alexander Wels <awels>
QA Contact: Kevin Alon Goldblatt <kgoldbla>
CC: alitke, awels, cnv-qe-bugs, fdeutsch, ipinto, jgil, kbidarka, mrashish, ycui
Clone Of: 1931519
Last Closed: 2022-03-16 15:50:56 UTC
Bug Depends On: 1931519, 1935218, 1935219

Description sgott 2021-03-04 14:43:56 UTC
+++ This bug was initially created as a clone of Bug #1931519 +++

This is a clone to track items specifically related to storage component

------------

Description of problem:

Most of the deployments and daemonsets in the openshift-cnv namespace don't specify resource requests in their manifests. Only daemonset/kube-cni-linux-bridge-plugin and deployment/kubemacpool-mac-controller-manager have them defined, as follows:


Kind       | Name                               | CPU Req/Limits | Mem Req/Limits
---------- | ---------------------------------- | -------------- | ---------------
daemonset  | kube-cni-linux-bridge-plugin       | 60m/0m         | 30Mi/0Mi
deployment | kubemacpool-mac-controller-manager | 100m/300m      | 300Mi/600Mi
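
For reference, requests and limits like those in the table above are declared in each workload's pod template. A minimal illustrative stanza (using the kubemacpool-mac-controller-manager values above; placement under spec.template.spec.containers[] is standard Kubernetes, not taken from this bug's manifests) would look like:

```yaml
# Illustrative only: a resources stanza inside a container spec
resources:
  requests:
    cpu: 100m
    memory: 300Mi
  limits:
    cpu: 300m
    memory: 600Mi
```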


The following manifests don't define the resource requirements:

Kind       | Name
---------- | ----
daemonset  | bridge-marker
daemonset  | nmstate-handler
daemonset  | ovs-cni-amd64
daemonset  | kubevirt-node-labeller
deployment | cdi-uploadproxy
deployment | cdi-apiserver
deployment | nmstate-webhook
deployment | hostpath-provisioner-operator
deployment | virt-api
deployment | virt-controller
deployment | virt-handler
deployment | virt-operator
deployment | virt-template-validator
deployment | vm-import-controller
deployment | vm-import-operator
deployment | cdi-deployment
deployment | cluster-network-addons-operator
deployment | cdi-operator
deployment | kubevirt-ssp-operator
deployment | hco-operator


Version-Release number of selected component (if applicable):
CNV 2.5.3 and onward.

How reproducible:



Steps to Reproduce:
1. Create CNV namespace
2. Create CNV Operator Group
3. Create HCO subscription and deploy stable
4. Wait for deployment of HCO operator to complete
5. Check for resource requests in deployed manifests
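
Steps 1-3 above can be sketched as manifests. This is a rough sketch: the object names, channel, and catalog source below are assumptions from a typical OpenShift Virtualization install, not taken from this bug; verify them with `oc get packagemanifests -n openshift-marketplace`.

```shell
# Sketch of steps 1-3: namespace, OperatorGroup, and HCO Subscription.
# Names, channel, and source are assumed; adjust for your catalog.
cat > cnv-install.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
  - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  channel: stable
  name: kubevirt-hyperconverged
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
# Apply with: oc apply -f cnv-install.yaml
# Step 4: watch the CSV until its phase is Succeeded: oc get csv -n openshift-cnv
```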

Actual results:
Only 2 deployed manifests define their resource requirements, and only 1 defines resource limits (see list above).

Expected results:
All deployed manifests define the resource requirements.
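
Step 5 of the reproduction can be scripted. The check below is an untested sketch, demonstrated here against an inlined sample manifest; on a live cluster the same filter can be fed from `oc get deployment,daemonset -n openshift-cnv -o yaml`.

```shell
# Sketch: flag a workload whose pod template defines no CPU/memory requests.
# The sample manifest is hypothetical and stands in for live `oc get` output.
cat > sample-manifest.yaml <<'EOF'
kind: Deployment
metadata:
  name: cdi-apiserver
spec:
  template:
    spec:
      containers:
      - name: cdi-apiserver
        resources: {}
EOF
# An empty resources stanza means no requests or limits are set.
if grep -q 'cpu:' sample-manifest.yaml; then
  echo "cdi-apiserver: resource requests defined"
else
  echo "cdi-apiserver: MISSING resource requests"
fi
```

Run in a loop over all deployments and daemonsets in the namespace, this reproduces the list reported above.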

Additional info:
N/A

Comment 6 Adam Litke 2021-09-14 16:24:48 UTC
Alexander.  This missed 4.9.0 (unless you deem it a blocker).  Can you provide an update please?  Are there any open PRs for CDI and HPP?

Comment 7 Alexander Wels 2021-09-15 18:37:20 UTC
There are no open PRs for CDI or HPP.

Comment 8 Alexander Wels 2021-09-15 20:13:38 UTC
Created a PR for main, not going to backport to 4.9 since it is not a blocker.

Comment 9 Maya Rashish 2021-11-08 16:30:51 UTC
Can we kick this out to 4.10? There's no backport for it, and it sounds like it doesn't belong in a z-stream update.

Comment 10 Adam Litke 2021-11-10 12:46:34 UTC
Maya, I'd like to defer to Alexander on whether this is appropriate for z-stream.

Comment 12 Adam Litke 2021-12-03 13:33:17 UTC
Pushing to 4.10.

Comment 13 Kevin Alon Goldblatt 2021-12-07 21:43:00 UTC
Verified with the following build:
------------------------------------------------------
oc version
Client Version: 4.10.0-202111232044.p0.g7642a3a.assembly.stream-7642a3a
Server Version: 4.10.0-0.nightly-2021-10-25-190146
Kubernetes Version: v1.22.1+674f31e

oc get csv -n openshift-cnv
NAME                                       DISPLAY                    VERSION   REPLACES                                  PHASE
kubevirt-hyperconverged-operator.v4.10.0   OpenShift Virtualization   4.10.0    kubevirt-hyperconverged-operator.v4.9.0   Succeeded


Verified with the following scenario:
------------------------------------------------------
oc get deployment cdi-apiserver -n openshift-cnv -oyaml |grep -A 2 requests:
          requests:
            cpu: 10m
            memory: 150Mi
oc get deployment cdi-deployment -n openshift-cnv -oyaml |grep -A 2 requests:
          requests:
            cpu: 10m
            memory: 150Mi
oc get deployment cdi-uploadproxy -n openshift-cnv -oyaml |grep -A 2 requests:
          requests:
            cpu: 10m
            memory: 150Mi
oc get deployment cdi-operator -n openshift-cnv -oyaml |grep -A 2 requests:
          requests:
            cpu: 10m
            memory: 150Mi
oc get deployment hostpath-provisioner-operator -n openshift-cnv -oyaml |grep -A 2 requests:
          requests:
            cpu: 10m
            memory: 150Mi


Moving to VERIFIED!

Comment 18 errata-xmlrpc 2022-03-16 15:50:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Virtualization 4.10.0 Images security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:0947