Bug 1948267 - [kube-descheduler]descheduler operator pod should not run as “BestEffort” qosClass
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-scheduler
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.7.z
Assignee: Jan Chaloupka
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On: 1947771
Blocks:
Reported: 2021-04-11 08:54 UTC by Jan Chaloupka
Modified: 2021-04-26 16:08 UTC
CC List: 2 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-04-26 16:08:25 UTC
Target Upstream Version:
Embargoed:


Links:
GitHub: openshift/cluster-kube-descheduler-operator pull 183 (open) - bug 1948267: descheduler operator: set resource requests - 2021-04-11 08:57:40 UTC
Red Hat Product Errata: RHSA-2021:1225 - 2021-04-26 16:08:39 UTC

Description Jan Chaloupka 2021-04-11 08:54:42 UTC
This bug was initially created as a copy of Bug #1947771

I am copying this bug because: 



Description of problem:
descheduler operator pod should not run as “BestEffort” qosClass

Version-Release number of selected component (if applicable):
[root@localhost verification-tests]# oc get csv  clusterkubedescheduleroperator.4.8.0-202104072252.p0  -n openshift-kube-descheduler-operator
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.8.0-202104072252.p0   Kube Descheduler Operator   4.8.0-202104072252.p0              Succeeded

How reproducible:
Always

Steps to Reproduce:
1) Check the QoS class of the descheduler operator pod (a note on how the class is derived follows the command):
`oc get po descheduler-operator-6c7dc99c6c-n5xhj -o json -n openshift-kube-descheduler-operator | jq .status.qosClass`
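For context, Kubernetes derives .status.qosClass from the containers' resource stanzas: a pod is "BestEffort" when no container sets any requests or limits, "Burstable" when at least one request or limit is set, and "Guaranteed" when every container has requests equal to limits for both cpu and memory. The stanzas can also be inspected directly, for example (same pod name as above; adjust to your deployment):

`oc get po descheduler-operator-6c7dc99c6c-n5xhj -n openshift-kube-descheduler-operator -o jsonpath='{.spec.containers[*].resources}'`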

Actual results:
1) The descheduler operator pod runs with the “BestEffort” qosClass:
[root@localhost verification-tests]# oc get po descheduler-operator-6c7dc99c6c-n5xhj -o json -n openshift-kube-descheduler-operator |jq .status.qosClass 
"BestEffort"


Expected results:
1) CPU/memory requests (and optionally limits) should be set for the descheduler operator pod, so it no longer runs as BestEffort; see the sketch below.
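A minimal sketch of the kind of stanza the fix adds to the operator Deployment's container spec (the request values are taken from the verification output in comment 4; the exact contents of the linked PR may differ):

resources:
  requests:
    cpu: 10m
    memory: 50Mi

Setting requests without matching limits moves the pod from "BestEffort" to the "Burstable" QoS class, which is the state verified below.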

Additional info:

Comment 2 RamaKasturi 2021-04-15 10:18:43 UTC
The build with the latest descheduler operator is not available yet; will verify once it is.

Comment 4 RamaKasturi 2021-04-20 11:05:29 UTC
Verified with the build below; I do see that the descheduler operator pod has memory and CPU requests set.

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2021-04-17-022838]$ ./oc get pod descheduler-operator-56978f7d46-ctktt -o json -n openshift-kube-descheduler-operator |jq .status.qosClass
"Burstable"

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2021-04-17-022838]$ ./oc get pod descheduler-operator-56978f7d46-ctktt -o json -n openshift-kube-descheduler-operator | grep "cpu"
                                        "f:cpu": {},
                        "cpu": "10m",

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2021-04-17-022838]$ ./oc get pod descheduler-operator-56978f7d46-ctktt -o json -n openshift-kube-descheduler-operator | grep "memory"
                                        "f:memory": {}
                        "memory": "50Mi"
                "key": "node.kubernetes.io/memory-pressure",

 "resources": {
                    "requests": {
                        "cpu": "10m",
                        "memory": "50Mi"
                    }
                },
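The same values can be read without grepping through the managed fields, for example (pod name as above; this assumes the operator container is the first in the pod):

`./oc get pod descheduler-operator-56978f7d46-ctktt -n openshift-kube-descheduler-operator -o jsonpath='{.spec.containers[0].resources.requests}'`

which returns just the requests map (cpu 10m, memory 50Mi here).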

Based on the above, moving the bug to the VERIFIED state.

Comment 6 errata-xmlrpc 2021-04-26 16:08:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.8 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:1225

