Bug 1825019 - kube-proxy deployment does not include any memory/cpu requests
Summary: kube-proxy deployment does not include any memory/cpu requests
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.5
Hardware: All
OS: All
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.5.0
Assignee: Alexander Constantinescu
QA Contact: zhaozhanqi
URL:
Whiteboard:
Depends On:
Blocks: 1844136
 
Reported: 2020-04-16 20:50 UTC by Cesar Wong
Modified: 2020-07-13 17:28 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-13 17:27:59 UTC
Target Upstream Version:
Embargoed:




Links:
GitHub: openshift/cluster-network-operator pull 608 (closed) - "Bug 1825019: Setting resource request for kube-proxy deployment" - last updated 2020-08-04 17:09:44 UTC
Red Hat Product Errata: RHBA-2020:2409 - last updated 2020-07-13 17:28:23 UTC

Description Cesar Wong 2020-04-16 20:50:19 UTC
Description of problem:
When using the Calico SDN, the kube-proxy daemonset is deployed. It sets no CPU or memory requests, resulting in pods with a QoSClass of BestEffort. A cluster with this configuration always fails this e2e test: https://github.com/openshift/origin/blob/4d0922fb92f85f566cb22bbaaedf587e8a50aca4/test/extended/operators/qos.go#L20
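
For context, the fix amounts to giving the kube-proxy pods CPU/memory requests (requests without limits put a pod in the Burstable QoS class). A minimal sketch of what that looks like from the CLI, assuming the daemonset is named openshift-kube-proxy in the openshift-kube-proxy namespace; the request values are placeholders rather than the ones chosen in the actual cluster-network-operator change, and a manual edit like this would normally be reconciled away by the operator:

$ # sketch only: placeholder values; the real fix belongs in the operator's rendered manifest
$ oc -n openshift-kube-proxy set resources daemonset/openshift-kube-proxy \
    --requests=cpu=100m,memory=200Mi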

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Run e2e against a cluster running with Calico SDN
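
If it is useful to run only the failing check rather than the whole suite, a sketch using the openshift-tests binary from openshift/origin (assuming the binary is built locally; the exact suite tags appended to the test name may differ):

$ openshift-tests run-test \
    "[sig-arch] Managed cluster should ensure control plane pods do not run in best-effort QoS"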

Actual results:
Test fails:
[sig-arch] Managed cluster should ensure control plane pods do not run in best-effort QoS

Expected results:
Test succeeds

Additional info:

Comment 3 zhaozhanqi 2020-04-29 06:14:22 UTC
Hi Cesar, could you help verify this, or point us to a QE contact for Calico who can verify Calico-related bugs?

Comment 4 Cesar Wong 2020-04-29 21:21:15 UTC
There are instructions here: 
https://docs.projectcalico.org/getting-started/openshift/installation

I verified with 4.5.0-0.nightly-2020-04-29-173148:

$ oc get pods -n openshift-kube-proxy
NAME                         READY   STATUS    RESTARTS   AGE
openshift-kube-proxy-89h2b   1/1     Running   0          9m17s
openshift-kube-proxy-frpzw   1/1     Running   0          9m17s
openshift-kube-proxy-lbskh   1/1     Running   0          9m17s

Then look at the qosClass in the status of one of the pods:
$ oc get pod openshift-kube-proxy-89h2b -o jsonpath='{ .status.qosClass }' -n openshift-kube-proxy
Burstable

This means the bug is fixed. It was previously BestEffort, which was causing the e2e failure.
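
To spot-check all of the kube-proxy pods at once (a convenience sketch, again assuming the openshift-kube-proxy namespace):

$ oc get pods -n openshift-kube-proxy \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.qosClass}{"\n"}{end}'

Every pod should report Burstable rather than BestEffort.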

Comment 5 zhaozhanqi 2020-04-30 02:02:59 UTC
OK, thanks Cesar. Moving this bug to verified.

Comment 6 Richard Theis 2020-05-29 18:58:17 UTC
Will this fix be cherry-picked to 4.3 and 4.4?

Comment 7 errata-xmlrpc 2020-07-13 17:27:59 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

