Bug 1877892 - [Descheduler] values in the configmap are still shown as null for include & exclude even after setting the right value
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-scheduler
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.6.0
Assignee: Mike Dame
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-10 16:49 UTC by RamaKasturi
Modified: 2023-09-14 06:08 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 16:39:40 UTC
Target Upstream Version:
Embargoed:




Links:
- GitHub: openshift/cluster-kube-descheduler-operator pull 131 (closed), "Bug 1877892: Fix lowercasing in namespace & priorityThreshold parameters", last updated 2021-01-01 16:04:58 UTC
- Red Hat Knowledge Base (Solution) 5449901, last updated 2020-09-30 21:14:40 UTC
- Red Hat Product Errata RHBA-2020:4196, last updated 2020-10-27 16:39:52 UTC

Description RamaKasturi 2020-09-10 16:49:58 UTC
Description of problem:
Values in the configmap are still shown as null for the include & exclude params even after setting the right values.

Version-Release number of selected component (if applicable):
[ramakasturinarra@dhcp35-60 openshift-client-linux-4.6.0-0.nightly-2020-09-10-031249]$ ./oc version 
Client Version: 4.6.0-0.nightly-2020-09-10-031249
Server Version: 4.6.0-0.nightly-2020-09-09-040238
Kubernetes Version: v1.19.0-rc.2+068702d


How reproducible:
Always

Steps to Reproduce:
1. Install the latest 4.6 cluster
2. Add the PodLifeTime strategy as below:
strategies:
  - name: PodLifeTime
    params:
    - name: maxPodLifeTimeSeconds
      value: "3600"

3. Now edit the kubedescheduler object and add the other params as below (a consolidated sketch of the full object follows these steps):
    - name: includeNamespaces
      value: my-project
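
For reference, a minimal sketch of the full KubeDescheduler object after steps 2 and 3. The operator.openshift.io API group is an assumption here (the ownerReference in the configmap below records only v1beta1); name and namespace are taken from the configmap metadata, and the image field is omitted:

apiVersion: operator.openshift.io/v1beta1  # API group assumed; ownerReference shows only v1beta1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  deschedulingIntervalSeconds: 3600
  strategies:
  - name: PodLifeTime
    params:
    - name: maxPodLifeTimeSeconds
      value: "3600"
    - name: includeNamespaces
      value: my-project

The rendered policy can then be re-checked with:

oc get configmap cluster -n openshift-kube-descheduler-operator -o yaml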


Actual results:
I see that the descheduler cluster pod does not get respun, and the values in the configmap do not change.
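
One way to confirm that no new descheduler pod was rolled out (namespace as seen in the configmap metadata below):

oc get pods -n openshift-kube-descheduler-operator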

oc get kubedescheduler cluster -o yaml
===========================================
spec:
  deschedulingIntervalSeconds: 3600
  image: registry.redhat.io/openshift4/ose-descheduler@sha256:ac21a65ec072db9b9c66c1c6aed940428c9313cc7870cc3976bebf3c5772cde7
  strategies:
  - name: PodLifeTime
    params:
    - name: maxPodLifeTimeSeconds
      value: "3600"
    - name: includeNamespaces
      value: my-project

oc get configmap cluster -o yaml
=================================
[ramakasturinarra@dhcp35-60 openshift-client-linux-4.6.0-0.nightly-2020-09-10-031249]$ oc get configmap cluster -o yaml
apiVersion: v1
data:
  policy.yaml: |
    strategies:
      PodLifeTime:
        enabled: true
        params:
          maxPodLifeTimeSeconds: 3600
          namespaces:
            exclude: null
            include: null
          thresholdPriority: null
          thresholdPriorityClassName: ""
kind: ConfigMap
metadata:
  creationTimestamp: "2020-09-10T16:37:56Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:policy.yaml: {}
      f:metadata:
        f:ownerReferences:
          .: {}
          k:{"uid":"d75acd99-551c-40c7-989d-48b08d2bda90"}:
            .: {}
            f:apiVersion: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
    manager: cluster-kube-descheduler-operator
    operation: Update
    time: "2020-09-10T16:37:56Z"
  name: cluster
  namespace: openshift-kube-descheduler-operator
  ownerReferences:
  - apiVersion: v1beta1
    kind: KubeDescheduler
    name: cluster
    uid: d75acd99-551c-40c7-989d-48b08d2bda90
  resourceVersion: "58283"
  selfLink: /api/v1/namespaces/openshift-kube-descheduler-operator/configmaps/cluster
  uid: 2afd4f20-fb01-4c7f-90f7-094afa866ef2


Expected results:
Values of the params should be set correctly and should not show null.
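
Concretely, for the spec above, the rendered policy.yaml would be expected to contain something like this (matching the shape of the verified output in comment 5 below):

strategies:
  PodLifeTime:
    enabled: true
    params:
      maxPodLifeTimeSeconds: 3600
      namespaces:
        include:
        - my-project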

Additional info:
The same thing happens for thresholdPriority & thresholdPriorityClassName as well:

spec:
  deschedulingIntervalSeconds: 3600
  image: registry.redhat.io/openshift4/ose-descheduler@sha256:ac21a65ec072db9b9c66c1c6aed940428c9313cc7870cc3976bebf3c5772cde7
  strategies:
  - name: PodLifeTime
    params:
    - name: maxPodLifeTimeSeconds
      value: "3600"
    - name: includeNamespaces
      value: my-project
    - name: thresholdPriorityClassName
      value: system-cluster-critical

[ramakasturinarra@dhcp35-60 openshift-client-linux-4.6.0-0.nightly-2020-09-10-031249]$ oc get configmap cluster -o yaml
apiVersion: v1
data:
  policy.yaml: |
    strategies:
      PodLifeTime:
        enabled: true
        params:
          maxPodLifeTimeSeconds: 3600
          namespaces:
            exclude: null
            include: null
          thresholdPriority: null
          thresholdPriorityClassName: ""
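
For comparison, once fixed, the same params would be expected to render with the class name carried through, e.g.:

    params:
      maxPodLifeTimeSeconds: 3600
      namespaces:
        include:
        - my-project
      thresholdPriorityClassName: system-cluster-critical

(as the verified output in comment 5 below confirms).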

Comment 5 RamaKasturi 2020-09-16 12:32:12 UTC
Verified with the payload below, and I see that include & exclude have the right values after setting them in the kubedescheduler object.

[ramakasturinarra@dhcp35-60 openshift-client-linux-4.6.0-0.nightly-2020-09-16-000734]$ oc get csv
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.6.0-202009152100.p0   Kube Descheduler Operator   4.6.0-202009152100.p0              Succeeded

[ramakasturinarra@dhcp35-60 openshift-client-linux-4.6.0-0.nightly-2020-09-16-000734]$ oc get configmap cluster -o yaml
apiVersion: v1
data:
  policy.yaml: |
    strategies:
      LowNodeUtilization:
        enabled: true
        params:
          namespaces: null
          nodeResourceUtilizationThresholds:
            targetThresholds:
              cpu: 50
              memory: 40
              pods: 60
            thresholds:
              cpu: 10
              memory: 20
              pods: 30
          thresholdPriority: null
          thresholdPriorityClassName: ""
      PodLifeTime:
        enabled: true
        params:
          maxPodLifeTimeSeconds: 3600
          namespaces:
            exclude:
            - my-project1
            include:
            - my-project
          thresholdPriority: null
          thresholdPriorityClassName: system-cluster-critical
      RemoveDuplicates:
        enabled: true
        params:
          namespaces: null
          removeDuplicates: {}
          thresholdPriority: null
          thresholdPriorityClassName: ""
      RemovePodsHavingTooManyRestarts:
        enabled: true
        params:
          namespaces:
            exclude: null
            include: null
          podsHavingTooManyRestarts:
            podRestartThreshold: 10
          thresholdPriority: null
          thresholdPriorityClassName: ""
      RemovePodsViolatingInterPodAntiAffinity:
        enabled: true
        params:
          namespaces:
            exclude: null
            include: null
          thresholdPriority: null
          thresholdPriorityClassName: ""

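For reference, a sketch of the KubeDescheduler strategy entry that would produce the verified PodLifeTime section above (excludeNamespaces is an assumed param name, inferred as the exclude-side counterpart of includeNamespaces per the fix PR's title):

  - name: PodLifeTime
    params:
    - name: maxPodLifeTimeSeconds
      value: "3600"
    - name: includeNamespaces
      value: my-project
    - name: excludeNamespaces  # assumed param name, inferred from the rendered exclude list
      value: my-project1
    - name: thresholdPriorityClassName
      value: system-cluster-critical
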
@jaspreet, also tested comment 2 and it works fine without any issues.

Based on the above, moving the bug to the verified state.

Comment 8 errata-xmlrpc 2020-10-27 16:39:40 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

Comment 9 Red Hat Bugzilla 2023-09-14 06:08:20 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days

