Bug 1869542
Summary: | "targetThreshold" values are not propagated correctly to configmap | |
---|---|---|---
Product: | OpenShift Container Platform | Reporter: | zhou ying <yinzhou>
Component: | kube-scheduler | Assignee: | Mike Dame <mdame>
Status: | CLOSED ERRATA | QA Contact: | RamaKasturi <knarra>
Severity: | medium | Docs Contact: |
Priority: | medium | |
Version: | 4.6 | CC: | alchan, aos-bugs, mfojtik
Target Milestone: | --- | |
Target Release: | 4.6.0 | |
Hardware: | Unspecified | |
OS: | Unspecified | |
Whiteboard: | | |
Fixed In Version: | | Doc Type: | If docs needed, set a value
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2020-10-27 16:28:38 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Bug Depends On: | | |
Bug Blocks: | 1868314 | |
Description
zhou ying
2020-08-18 09:11:05 UTC
Verified the bug in the descheduler operator as below: when the targetThreshold and threshold values are changed in the kubedescheduler cluster object, they are updated in the configmap and the cluster pod gets restarted.

```
[ramakasturinarra@dhcp35-60 ~]$ oc get csv
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.6.0-202008200527.p0   Kube Descheduler Operator   4.6.0-202008200527.p0              Succeeded

[ramakasturinarra@dhcp35-60 ~]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-08-21-011653   True        False         10h     Cluster version is 4.6.0-0.nightly-2020-08-21-011653
```

Initial values:
===============

oc get kubedescheduler cluster -o yaml:

```yaml
- name: LowNodeUtilization
  params:
  - name: cputhreshold
    value: "10"
  - name: memorythreshold
    value: "20"
  - name: podsthreshold
    value: "30"
  - name: memorytargetthreshold
    value: "50"
  - name: cputargetthreshold
    value: "45"
  - name: podstargetthreshold
    value: "30"
  - name: nodes
    value: "3"
```

oc get configmap -o yaml:

```yaml
nodeResourceUtilizationThresholds:
  numberOfNodes: 3
  targetThresholds:
    cpu: 45
    memory: 50
    pods: 30
  thresholds:
    cpu: 10
    memory: 20
    pods: 30
```

Update targetThresholds in the kubedescheduler cluster object:
==============================================================

The cluster pod gets restarted:

```
[ramakasturinarra@dhcp35-60 ~]$ oc edit kubedescheduler cluster
kubedescheduler.operator.openshift.io/cluster edited
[ramakasturinarra@dhcp35-60 ~]$ oc get pods
NAME                                   READY   STATUS              RESTARTS   AGE
cluster-779764bdf4-glcfn               1/1     Running             0          4m29s
cluster-fbf7b4f85-4vn6c                0/1     ContainerCreating   0          4s
descheduler-operator-89c97b754-f6v7d   1/1     Running             0          18m
```

Values in the kubedescheduler cluster object:

```yaml
- name: LowNodeUtilization
  params:
  - name: cputhreshold
    value: "10"
  - name: memorythreshold
    value: "20"
  - name: podsthreshold
    value: "30"
  - name: memorytargetthreshold
    value: "45"
  - name: cputargetthreshold
    value: "40"
  - name: podstargetthreshold
    value: "30"
  - name: nodes
    value: "3"
```

Values in the configmap:

```yaml
nodeResourceUtilizationThresholds:
  numberOfNodes: 3
  targetThresholds:
    cpu: 40
    memory: 45
    pods: 30
  thresholds:
    cpu: 10
    memory: 20
    pods: 30
```

Similar test for thresholds:
============================

```
[ramakasturinarra@dhcp35-60 ~]$ oc edit kubedescheduler cluster
kubedescheduler.operator.openshift.io/cluster edited
[ramakasturinarra@dhcp35-60 ~]$ oc get pods
NAME                                   READY   STATUS              RESTARTS   AGE
cluster-8677c57d87-q5cfm               0/1     ContainerCreating   0          4s
cluster-fbf7b4f85-4vn6c                1/1     Running             0          2m33s
descheduler-operator-89c97b754-f6v7d   1/1     Running             0          21m
```

Values in the kubedescheduler cluster object:

```yaml
strategies:
- name: LowNodeUtilization
  params:
  - name: cputhreshold
    value: "10"
  - name: memorythreshold
    value: "25"
  - name: podsthreshold
    value: "20"
  - name: memorytargetthreshold
    value: "45"
  - name: cputargetthreshold
    value: "40"
  - name: podstargetthreshold
    value: "30"
  - name: nodes
    value: "2"
```

Values in the configmap:

```yaml
nodeResourceUtilizationThresholds:
  numberOfNodes: 2
  targetThresholds:
    cpu: 40
    memory: 45
    pods: 30
  thresholds:
    cpu: 10
    memory: 25
    pods: 20
```

Based on the above, moving the bug to Verified state.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196
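The propagation this bug verifies — flat strategy params in the KubeDescheduler CR being translated into the nested `nodeResourceUtilizationThresholds` block of the policy configmap — can be sketched as below. This is a minimal illustration of the mapping, not the operator's actual code; the param names and the output structure are taken from the verification output in this bug.

```python
# Hypothetical sketch: translate the flat LowNodeUtilization params of the
# KubeDescheduler CR into the nodeResourceUtilizationThresholds structure
# that appears in the descheduler policy configmap.

def build_thresholds(params):
    """params is a list of {"name": ..., "value": ...} dicts, as in the CR."""
    values = {p["name"]: int(p["value"]) for p in params}
    return {
        "numberOfNodes": values["nodes"],
        "thresholds": {
            "cpu": values["cputhreshold"],
            "memory": values["memorythreshold"],
            "pods": values["podsthreshold"],
        },
        "targetThresholds": {
            "cpu": values["cputargetthreshold"],
            "memory": values["memorytargetthreshold"],
            "pods": values["podstargetthreshold"],
        },
    }

# The "Initial values" case from the verification comment above:
params = [
    {"name": "cputhreshold", "value": "10"},
    {"name": "memorythreshold", "value": "20"},
    {"name": "podsthreshold", "value": "30"},
    {"name": "memorytargetthreshold", "value": "50"},
    {"name": "cputargetthreshold", "value": "45"},
    {"name": "podstargetthreshold", "value": "30"},
    {"name": "nodes", "value": "3"},
]
print(build_thresholds(params)["targetThresholds"])
# {'cpu': 45, 'memory': 50, 'pods': 30}
```

The bug was that edits to the `*targetthreshold` params were not reflected in the configmap; with the fix, re-running this mapping after an `oc edit kubedescheduler cluster` yields the updated `targetThresholds` values, as the verification output shows.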