Bug 1801692 - [DeScheduler] nodeResourceUtilizationThresholds: {} should not be added as a param for every strategy
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-scheduler
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.5.0
Assignee: Jan Chaloupka
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2020-02-11 13:45 UTC by RamaKasturi
Modified: 2020-07-13 17:15 UTC
CC: 6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-13 17:14:44 UTC
Target Upstream Version:
Embargoed:




Links:
- GitHub: openshift/descheduler pull 29 (closed): "bug 1801692: Sync with upstream", last updated 2020-06-23 08:27:05 UTC
- Red Hat Product Errata: RHBA-2020:2409, last updated 2020-07-13 17:15:25 UTC

Description RamaKasturi 2020-02-11 13:45:39 UTC
Description of problem:
nodeResourceUtilizationThresholds: {} gets added as a param to every strategy listed in the cluster configmap. It should not be added, since it is not actually a required param and it can confuse the user or customer reading the file.

Version-Release number of selected component (if applicable):


How reproducible:
Always

Steps to Reproduce:
1. Clone the descheduler operator github repo.
2. Run oc create -f manifests/ to create the resources.
3. Now edit the file 02_kube-descheduler-operator.cr.yaml and add the strategies below:

strategies:
    - name: duplicates
    - name: interpodantiaffinity
    - name: nodeaffinity
    - name: nodetaints
    - name: lownodeutilization
      params:
      - name: cputhreshold
        value: "10"
      - name: memorythreshold
        value: "20"
      - name: podsthreshold
        value: "30"
      - name: memorytargetthreshold
        value: "40"
      - name: cputargetthreshold
        value: "50"
      - name: podstargetthreshold
        value: "60"
      - name: nodes
        value: "3"
4. Now check whether all of them got populated in the cluster configmap.

Actual results:
All the strategies get populated in the configmap, but an extra param called nodeResourceUtilizationThresholds: {} is also added to every strategy, although this does not impact any functionality as such.

Expected results:
The extra param nodeResourceUtilizationThresholds: {} should not be added to all strategies, as it is confusing to the customer or user reading the file.

Additional info:

[ramakasturinarra@dhcp35-60 ocp_files]$ oc get configmap cluster -o yaml
apiVersion: v1
data:
  policy.yaml: |
    strategies:
      LowNodeUtilization:
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            numberOfNodes: 1
            targetThresholds:
              cpu: 50
              memory: 50
              pods: 20
            thresholds:
              cpu: 50
              memory: 50
              pods: 20
      RemoveDuplicates:
        enabled: true
        params:
          nodeResourceUtilizationThresholds: {}
      RemovePodsViolatingInterPodAntiAffinity:
        enabled: true
        params:
          nodeResourceUtilizationThresholds: {}
      RemovePodsViolatingNodeAffinity:
        enabled: true
        params:
          nodeAffinityType:
          - requiredDuringSchedulingIgnoredDuringExecution
          nodeResourceUtilizationThresholds: {}
      RemovePodsViolatingNodeTaints:
        enabled: true
        params:
          nodeResourceUtilizationThresholds: {}

Comment 1 Jan Chaloupka 2020-02-11 13:53:12 UTC
It happens because params.nodeResourceUtilizationThresholds is a struct, not a pointer to a struct, so the omitempty tag on the field definition has no effect. In order to fix it, we need to update the field in https://github.com/kubernetes-sigs/descheduler first. Was there a particular reason for the decision to have NodeResourceUtilizationThresholds [1] not be a pointer to the NodeResourceUtilizationThresholds data type?

[1] https://github.com/kubernetes-sigs/descheduler/blob/master/pkg/api/types.go#L49
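
For illustration, a minimal self-contained Go sketch of this behavior (hypothetical simplified types, not the actual descheduler API): encoding/json never considers a struct value "empty", so omitempty is ignored on a value field but honored on a nil pointer. Assuming the policy is marshaled through these JSON tags (sigs.k8s.io/yaml converts via JSON), the empty object then ends up in the YAML configmap as well:

package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical simplified stand-in for the upstream thresholds type.
type NodeResourceUtilizationThresholds struct {
	NumberOfNodes int `json:"numberOfNodes,omitempty"`
}

// Struct value: omitempty has no effect, the key is always serialized.
type ParamsValue struct {
	NodeResourceUtilizationThresholds NodeResourceUtilizationThresholds `json:"nodeResourceUtilizationThresholds,omitempty"`
}

// Pointer: a nil pointer counts as empty, so the key is omitted.
type ParamsPointer struct {
	NodeResourceUtilizationThresholds *NodeResourceUtilizationThresholds `json:"nodeResourceUtilizationThresholds,omitempty"`
}

func main() {
	v, _ := json.Marshal(ParamsValue{})
	p, _ := json.Marshal(ParamsPointer{})
	fmt.Println(string(v)) // {"nodeResourceUtilizationThresholds":{}}
	fmt.Println(string(p)) // {}
}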

Comment 2 Mike Dame 2020-02-11 14:43:49 UTC
It seems like a bug, since that field is intended to be omitempty: https://github.com/kubernetes-sigs/descheduler/blob/master/pkg/api/v1alpha1/types.go#L49. I don't know if the internal type also needs to be a pointer, but I don't see why we couldn't make the versioned type a pointer.

Comment 3 Jan Chaloupka 2020-02-13 11:45:19 UTC
Upstream PR: https://github.com/kubernetes-sigs/descheduler/pull/240

> I don't know if the internal type also needs to be a pointer, but I don't see why we couldn't make the versioned type a pointer.

AFAIK, the internal type can be changed without worrying about backward compatibility; that is its primary purpose. I prefer to change the internal type as well, to keep it as close to the versioned one as possible.
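
A minimal sketch of the resulting field shape (simplified, hypothetical definitions; the real types live in pkg/api and pkg/api/v1alpha1 upstream, and the omitempty tag matters on the versioned type):

// After the change, the field is a pointer in both the internal and the
// versioned StrategyParameters, nil when a strategy carries no such params:
type StrategyParameters struct {
	NodeResourceUtilizationThresholds *NodeResourceUtilizationThresholds `json:"nodeResourceUtilizationThresholds,omitempty"`
}

With a pointer on both sides, conversion between the versioned and internal types reduces to a nil check rather than copying an empty struct, and the serialized policy drops the key entirely.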

Comment 4 Maciej Szulik 2020-02-14 15:28:31 UTC
This might slip 4.4 since we're waiting for upstream; moving this to 4.5 for now.

Comment 7 RamaKasturi 2020-05-18 13:07:21 UTC
Moving the bug to VERIFIED, as I do not see the issue reported here. However, I do see that a param with empty braces (params: {}) is still present; I will report a separate bug to see if that can be fixed.

apiVersion: v1
data:
  policy.yaml: |
    strategies:
      LowNodeUtilization:
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            numberOfNodes: 3
            targetThresholds:
              cpu: 10
              memory: 20
              pods: 30
            thresholds:
              cpu: 10
              memory: 20
              pods: 30
      RemoveDuplicates:
        enabled: true
        params: {}
      RemovePodsHavingTooManyRestarts:
        enabled: true
        params:
          podsHavingTooManyRestarts:
            podRestartThreshold: 10
      RemovePodsViolatingInterPodAntiAffinity:
        enabled: true
        params: {}
      RemovePodsViolatingNodeAffinity:
        enabled: true
        params:
          nodeAffinityType:
          - requiredDuringSchedulingIgnoredDuringExecution
      RemovePodsViolatingNodeTaints:
        enabled: true
        params: {}

Verified in the payload below:
===============================
[ramakasturinarra@dhcp35-60 ocp_files]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-05-17-235851   True        False         108m    Cluster version is 4.5.0-0.nightly-2020-05-17-235851

Comment 9 errata-xmlrpc 2020-07-13 17:14:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

