Bug 1869542 - "targetThreshold" values are not propagated correctly to configmap
Summary: "targetThreshold" values are not propagated correctly to configmap
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-scheduler
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: Mike Dame
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks: 1868314
 
Reported: 2020-08-18 09:11 UTC by zhou ying
Modified: 2020-10-27 16:28 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 16:28:38 UTC
Target Upstream Version:
Embargoed:




Links:
GitHub openshift/cluster-kube-descheduler-operator pull 121 (closed): Bug 1869542: Fix targetThreshold propagation for LowNodeUtilization - last updated 2020-11-17 23:10:39 UTC
Red Hat Product Errata RHBA-2020:4196 - last updated 2020-10-27 16:28:53 UTC

Description zhou ying 2020-08-18 09:11:05 UTC
Description of problem:
The threshold and targetThreshold values are the same in the configmap, whereas they should be different.

Version-Release number of selected component (if applicable):
clusterkubedescheduleroperator.4.6.0-202008111711.p0

How reproducible:
always

Steps to Reproduce:
1. Install the descheduler operator from the web console.
2. Update the strategies as follows:
  strategies:
  - name: LowNodeUtilization
    params:
    - name: CPUThreshold
      value: "30"
    - name: MemoryThreshold
      value: "35"
    - name: PodsThreshold
      value: "30"
    - name: MemoryTargetThreshold
      value: "40"
    - name: CPUTargetThreshold
      value: "70"
    - name: PodsTargetThreshold
      value: "60"
    - name: NumberOfNodes
      value: "3"

3. Check the configmap for the descheduler operator:
  `oc get cm cluster -o yaml`
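   For convenience, just the rendered policy can be extracted from the ConfigMap with a jsonpath query (a sketch assuming the same `cluster` ConfigMap as above; the backslash escapes the literal dot in the `policy.yaml` data key):
   `oc get cm cluster -o jsonpath='{.data.policy\.yaml}'`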

Actual results:
3. The targetThresholds values in the configmap are not the same as the ones in the kubedescheduler object:
[root@dhcp-140-138 ~]# oc get cm cluster -o yaml
apiVersion: v1
data:
  policy.yaml: |
    strategies:
      LowNodeUtilization:
        enabled: true
        params:
          namespaces:
            exclude: null
            include: null
          nodeResourceUtilizationThresholds:
            targetThresholds:
              cpu: 30
              memory: 35
              pods: 30
            thresholds:
              cpu: 30
              memory: 35
              pods: 30
kind: ConfigMap
...

Expected results:
3. The targetThresholds values should be set in the configmap exactly as specified in the kubedescheduler object.
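
Given the strategy parameters from step 2, the rendered policy would be expected to look roughly like this (a sketch derived from the values above, abbreviated; thresholds come from the *Threshold params and targetThresholds from the *TargetThreshold params):

    strategies:
      LowNodeUtilization:
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            numberOfNodes: 3
            targetThresholds:
              cpu: 70
              memory: 40
              pods: 60
            thresholds:
              cpu: 30
              memory: 35
              pods: 30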

Additional info:

Comment 4 RamaKasturi 2020-08-21 16:29:18 UTC
Verified the bug in the descheduler operator build below; when the targetThreshold and threshold values are changed in the kubedescheduler cluster object, they are propagated to the configmap and the cluster pod is restarted.

[ramakasturinarra@dhcp35-60 ~]$ oc get csv
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.6.0-202008200527.p0   Kube Descheduler Operator   4.6.0-202008200527.p0              Succeeded
[ramakasturinarra@dhcp35-60 ~]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-08-21-011653   True        False         10h     Cluster version is 4.6.0-0.nightly-2020-08-21-011653


Initial values:
==============
oc get kubedescheduler cluster -o yaml:
========================================
- name: LowNodeUtilization
    params:
    - name: cputhreshold
      value: "10"
    - name: memorythreshold
      value: "20"
    - name: podsthreshold
      value: "30"
    - name: memorytargetthreshold
      value: "50"
    - name: cputargetthreshold
      value: "45"
    - name: podstargetthreshold
      value: "30"
    - name: nodes
      value: "3"
oc get configmap -o yaml:
=============================
nodeResourceUtilizationThresholds:
            numberOfNodes: 3
            targetThresholds:
              cpu: 45
              memory: 50
              pods: 30
            thresholds:
              cpu: 10
              memory: 20
              pods: 30

Update targetThresholds in the kubedescheduler cluster object:
============================================================
cluster pod gets restarted
+++++++++++++++++++++++++++++++

[ramakasturinarra@dhcp35-60 ~]$ oc edit kubedescheduler cluster
kubedescheduler.operator.openshift.io/cluster edited
[ramakasturinarra@dhcp35-60 ~]$ oc get pods
NAME                                   READY   STATUS              RESTARTS   AGE
cluster-779764bdf4-glcfn               1/1     Running             0          4m29s
cluster-fbf7b4f85-4vn6c                0/1     ContainerCreating   0          4s
descheduler-operator-89c97b754-f6v7d   1/1     Running             0          18m
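
(The new pod rollout can also be followed live with `oc get pods -w`; the watch flag is standard oc/kubectl behavior, noted here as a convenience.)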

values in kubedescheduler cluster object:
++++++++++++++++++++++++++++++++++++++++++
- name: LowNodeUtilization
    params:
    - name: cputhreshold
      value: "10"
    - name: memorythreshold
      value: "20"
    - name: podsthreshold
      value: "30"
    - name: memorytargetthreshold
      value: "45"
    - name: cputargetthreshold
      value: "40"
    - name: podstargetthreshold
      value: "30"
    - name: nodes
      value: "3"

values in configmap:
++++++++++++++++++++++++
nodeResourceUtilizationThresholds:
              numberOfNodes: 3
              targetThresholds:
                cpu: 40
                memory: 45
                pods: 30
              thresholds:
                cpu: 10
                memory: 20
                pods: 30

Similar test for thresholds:
==============================
[ramakasturinarra@dhcp35-60 ~]$ oc edit kubedescheduler cluster
kubedescheduler.operator.openshift.io/cluster edited
[ramakasturinarra@dhcp35-60 ~]$ oc get pods
NAME                                   READY   STATUS              RESTARTS   AGE
cluster-8677c57d87-q5cfm               0/1     ContainerCreating   0          4s
cluster-fbf7b4f85-4vn6c                1/1     Running             0          2m33s
descheduler-operator-89c97b754-f6v7d   1/1     Running             0          21m

strategies:
  - name: LowNodeUtilization
    params:
    - name: cputhreshold
      value: "10"
    - name: memorythreshold
      value: "25"
    - name: podsthreshold
      value: "20"
    - name: memorytargetthreshold
      value: "45"
    - name: cputargetthreshold
      value: "40"
    - name: podstargetthreshold
      value: "30"
    - name: nodes
      value: "2"
nodeResourceUtilizationThresholds:
              numberOfNodes: 2
              targetThresholds:
                cpu: 40
                memory: 45
                pods: 30
              thresholds:
                cpu: 10
                memory: 25
                pods: 20

Based on the above, moving the bug to the verified state.

Comment 6 errata-xmlrpc 2020-10-27 16:28:38 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

