Description of problem:
The descheduler operator pod should not run with the "BestEffort" QoS class.

Version-Release number of selected component (if applicable):
[root@localhost verification-tests]# oc get csv clusterkubedescheduleroperator.4.8.0-202104072252.p0 -n openshift-kube-descheduler-operator
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.8.0-202104072252.p0   Kube Descheduler Operator   4.8.0-202104072252.p0              Succeeded

How reproducible:
Always

Steps to Reproduce:
1) Check the descheduler-operator pod's QoS class:
`oc get po descheduler-operator-6c7dc99c6c-n5xhj -o json -n openshift-kube-descheduler-operator | jq .status.qosClass`

Actual results:
1) The descheduler operator pod runs with the "BestEffort" QoS class:
[root@localhost verification-tests]# oc get po descheduler-operator-6c7dc99c6c-n5xhj -o json -n openshift-kube-descheduler-operator | jq .status.qosClass
"BestEffort"

Expected results:
1) CPU/memory requests/limits should be set for the descheduler operator pod so that it does not run as "BestEffort".

Additional info:
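For context, Kubernetes derives a pod's QoS class from its containers' resource settings: no requests or limits anywhere yields BestEffort, requests and limits set and equal for CPU and memory in every container yields Guaranteed, and anything in between yields Burstable. A simplified sketch of that classification logic (the `qos_class` helper is illustrative, not part of any Kubernetes client library):

```python
def qos_class(containers):
    """Approximate Kubernetes pod QoS classification.

    Each container is a dict with optional 'requests' and 'limits'
    dicts mapping resource names ('cpu', 'memory') to quantity strings.
    """
    has_requests_or_limits = False
    guaranteed = True
    for c in containers:
        req = c.get("requests", {})
        lim = c.get("limits", {})
        if req or lim:
            has_requests_or_limits = True
        for res in ("cpu", "memory"):
            # Guaranteed needs a CPU and memory limit in every container,
            # with requests (if set) equal to the limits.
            if res not in lim or req.get(res, lim.get(res)) != lim.get(res):
                guaranteed = False
    if not has_requests_or_limits:
        return "BestEffort"  # the buggy state reported here
    return "Guaranteed" if guaranteed else "Burstable"


# The fix in this bug: adding requests (without limits) moves the pod
# from BestEffort to Burstable.
print(qos_class([{}]))  # BestEffort
print(qos_class([{"requests": {"cpu": "10m", "memory": "50Mi"}}]))  # Burstable
```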
PR awaiting review
Verified with the CSV below; the descheduler operator pod now runs with the "Burstable" QoS class.

[knarra@knarra openshift-client-linux-4.8.0-0.nightly-2021-04-13-171608]$ ./oc get csv -n openshift-kube-descheduler-operator
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.8.0-202104121740.p0   Kube Descheduler Operator   4.8.0-202104121740.p0              Succeeded

[knarra@knarra openshift-client-linux-4.8.0-0.nightly-2021-04-13-171608]$ ./oc get pod descheduler-operator-77545bff45-zwg66 -o json -n openshift-kube-descheduler-operator | jq .status.qosClass
"Burstable"

"resources": {
    "requests": {
        "cpu": "10m",
        "memory": "50Mi"
    }
}

For comparison, the kube-scheduler operator pod uses the same requests:

[knarra@knarra openshift-client-linux-4.8.0-0.nightly-2021-04-13-171608]$ ./oc get pod openshift-kube-scheduler-operator-865bd594c7-6826n -o json -n openshift-kube-scheduler-operator | grep "cpu"
        "cpu": "10m",
[knarra@knarra openshift-client-linux-4.8.0-0.nightly-2021-04-13-171608]$ ./oc get pod openshift-kube-scheduler-operator-865bd594c7-6826n -o json -n openshift-kube-scheduler-operator | grep "memory"
        "memory": "50Mi"
        "key": "node.kubernetes.io/memory-pressure",

Based on the above, moving the bug to the verified state.
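The verification above pipes `oc get pod -o json` through `jq`. The same fields can be pulled out programmatically; a minimal sketch operating on a literal JSON snippet modeled on the report's output (the container name is hypothetical, and the values are the ones shown in the verification):

```python
import json

# Pod JSON as returned by `oc get pod ... -o json`, trimmed to the
# fields inspected in the verification (values from this report).
pod_json = '''
{
  "spec": {
    "containers": [
      {
        "name": "descheduler-operator",
        "resources": {
          "requests": {"cpu": "10m", "memory": "50Mi"}
        }
      }
    ]
  },
  "status": {"qosClass": "Burstable"}
}
'''

pod = json.loads(pod_json)
# Equivalent of `jq .status.qosClass`:
print(pod["status"]["qosClass"])  # Burstable
# Requests per container, as checked with grep above:
for c in pod["spec"]["containers"]:
    print(c["name"], c["resources"].get("requests"))
```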
CSV & build details where this was verified:

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2021-04-17-022838]$ ./oc get csv
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.7.0-202104142050.p0   Kube Descheduler Operator   4.7.0-202104142050.p0              Succeeded

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2021-04-17-022838]$ ./oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2021-04-17-022838   True        False         4h52m   Cluster version is 4.7.0-0.nightly-2021-04-17-022838
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:2438