Bug 1836961 - [Descheduler] - Prevent empty value from showing up when a struct is printed
Summary: [Descheduler] - Prevent empty value from showing up when a struct is printed
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-scheduler
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 4.6.0
Assignee: Maciej Szulik
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-05-18 14:49 UTC by RamaKasturi
Modified: 2020-10-27 16:00 UTC
CC: 4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 16:00:21 UTC
Target Upstream Version:
Embargoed:




Links
GitHub openshift/cluster-kube-descheduler-operator pull 111 (closed): Bug 1836961: update tags on descheduler types and improve field descriptions (last updated 2020-10-15 06:42:17 UTC)
GitHub openshift/cluster-kube-descheduler-operator pull 114 (closed): Bug 1836961: bump(descheduler): Include api changes for StrategyParams (last updated 2020-10-15 06:42:17 UTC)
Red Hat Product Errata RHBA-2020:4196 (last updated 2020-10-27 16:00:50 UTC)

Description RamaKasturi 2020-05-18 14:49:38 UTC
Description of problem:
I see that when there are no parameters present for a strategy, it just prints "params: {}". I think we could make the struct not print anything if there is no value.
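
For context, this is standard Go serialization behavior: a struct-valued field is always emitted, even when all of its members are empty. A minimal, self-contained sketch of the symptom, using simplified, hypothetical versions of the descheduler policy types (not the real API) and encoding/json (the descheduler renders YAML through the same JSON tags):

package main

import (
    "encoding/json"
    "fmt"
)

// Simplified, assumed stand-ins for the descheduler policy types.
type StrategyParameters struct {
    NodeAffinityType []string `json:"nodeAffinityType,omitempty"`
}

type DeschedulerStrategy struct {
    Enabled bool `json:"enabled"`
    // A non-pointer struct field is always serialized, even when every
    // member inside it is omitted, which is what yields "params: {}".
    Params StrategyParameters `json:"params"`
}

func main() {
    s := DeschedulerStrategy{Enabled: true}
    out, _ := json.Marshal(s)
    fmt.Println(string(out)) // {"enabled":true,"params":{}}
}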

Version-Release number of selected component (if applicable):
[ramakasturinarra@dhcp35-60 ocp_files]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-05-17-235851   True        False         3h27m   Cluster version is 4.5.0-0.nightly-2020-05-17-235851


How reproducible:
Always

Steps to Reproduce:
1. Install 4.5 cluster
2. Enable all the strategies for the descheduler by editing the kubedescheduler cluster object
3. Now run the command "oc get configmap cluster -o yaml"

Actual results:
The user can see that the struct is printed with "params: {}":
      RemoveDuplicates:
        enabled: true
        params: {}
      RemovePodsHavingTooManyRestarts:
        enabled: true
        params:
          podsHavingTooManyRestarts: {}
      RemovePodsViolatingInterPodAntiAffinity:
        enabled: true
        params: {}
      RemovePodsViolatingNodeAffinity:
        enabled: true
        params:
          nodeAffinityType:
          - requiredDuringSchedulingIgnoredDuringExecution
      RemovePodsViolatingNodeTaints:
        enabled: true
        params: {}

Expected results:
If params is empty, we could simply not show "params: {}" at all.
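
A minimal sketch of the kind of change the linked PRs appear to make; this assumes the common Kubernetes pattern of declaring optional fields as pointers with an omitempty JSON tag, and again uses simplified, hypothetical types rather than the exact upstream diff:

package main

import (
    "encoding/json"
    "fmt"
)

type StrategyParameters struct {
    NodeAffinityType []string `json:"nodeAffinityType,omitempty"`
}

type DeschedulerStrategy struct {
    Enabled bool `json:"enabled"`
    // Pointer plus omitempty: a nil Params is dropped from the output
    // entirely instead of rendering as "params: {}".
    Params *StrategyParameters `json:"params,omitempty"`
}

func main() {
    s := DeschedulerStrategy{Enabled: true} // Params left nil
    out, _ := json.Marshal(s)
    fmt.Println(string(out)) // {"enabled":true}
}

Note that omitempty alone is not enough for a non-pointer struct field (an empty struct is not "empty" to encoding/json), and a non-nil pointer to an empty struct would still print {}; the field has to be a pointer and be left unset.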

Additional info:

Comment 1 Maciej Szulik 2020-05-20 08:28:52 UTC
PRs in the queue.

Comment 5 Jan Chaloupka 2020-05-27 08:48:42 UTC
The code needs to be changed in descheduler as well. Upstream PR https://github.com/kubernetes-sigs/descheduler/pull/296

Comment 6 RamaKasturi 2020-05-27 15:05:51 UTC
Moving the bug to assigned state based on comment 4.

Comment 7 Jan Chaloupka 2020-05-28 08:13:26 UTC
Not a blocker, moving to 4.6. If the upstream PR gets merged before Friday, it can still be fixed in 4.5.

Comment 8 Maciej Szulik 2020-06-18 10:13:26 UTC
This was fixed with the recent sync Mike did in https://github.com/openshift/descheduler/pull/33; moving to verification.

Comment 11 Mike Dame 2020-07-13 15:36:39 UTC
Moving this back to assigned; the change wasn't updated in the operator (bumping with https://github.com/openshift/cluster-kube-descheduler-operator/pull/114).

Comment 14 RamaKasturi 2020-07-17 12:12:56 UTC
Verified the bug in the latest master and I no longer see the empty value showing up when a struct is printed. I will change the bug state once I verify it in the downstream operator.

[ramakasturinarra@dhcp35-60 ~]$ oc get configmap cluster -o yaml
apiVersion: v1
data:
  policy.yaml: |
    strategies:
      LowNodeUtilization:
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            numberOfNodes: 3
            targetThresholds:
              cpu: 10
              memory: 20
              pods: 30
            thresholds:
              cpu: 10
              memory: 20
              pods: 30
      RemoveDuplicates:
        enabled: true
      RemovePodsHavingTooManyRestarts:
        enabled: true
        params:
          podsHavingTooManyRestarts:
            includingInitContainers: true
            podRestartThreshold: 10
      RemovePodsViolatingInterPodAntiAffinity:
        enabled: true
      RemovePodsViolatingNodeAffinity:
        enabled: true
        params:
          nodeAffinityType:
          - requiredDuringSchedulingIgnoredDuringExecution
      RemovePodsViolatingNodeTaints:
        enabled: true
kind: ConfigMap
metadata:
  creationTimestamp: "2020-07-17T11:20:34Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:policy.yaml: {}
      f:metadata:
        f:ownerReferences:
          .: {}
          k:{"uid":"e45629c6-3ec1-443d-98bc-262e241cad54"}:
            .: {}
            f:apiVersion: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
    manager: cluster-kube-descheduler-operator
    operation: Update
    time: "2020-07-17T11:33:10Z"
  name: cluster
  namespace: openshift-kube-descheduler-operator
  ownerReferences:
  - apiVersion: v1beta1
    kind: KubeDescheduler
    name: cluster
    uid: e45629c6-3ec1-443d-98bc-262e241cad54
  resourceVersion: "149566"
  selfLink: /api/v1/namespaces/openshift-kube-descheduler-operator/configmaps/cluster
  uid: ed98b8d1-f80f-49e1-af84-706e620c5775

Comment 15 RamaKasturi 2020-07-29 12:40:05 UTC
Tried verifying the same in the downstream build and it works fine.

[ramakasturinarra@dhcp35-60 verification-tests]$ oc get configmap cluster -o yaml
apiVersion: v1
data:
  policy.yaml: |
    strategies:
      LowNodeUtilization:
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            numberOfNodes: 3
            targetThresholds:
              cpu: 10
              memory: 20
              pods: 30
            thresholds:
              cpu: 10
              memory: 20
              pods: 30
      RemoveDuplicates:
        enabled: true
      RemovePodsHavingTooManyRestarts:
        enabled: true
        params:
          podsHavingTooManyRestarts:
            includingInitContainers: true
            podRestartThreshold: 10
      RemovePodsViolatingInterPodAntiAffinity:
        enabled: true
      RemovePodsViolatingNodeAffinity:
        enabled: true
        params:
          nodeAffinityType:
          - requiredDuringSchedulingIgnoredDuringExecution
      RemovePodsViolatingNodeTaints:
        enabled: true
kind: ConfigMap
metadata:
  creationTimestamp: "2020-07-29T12:27:26Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:policy.yaml: {}
      f:metadata:
        f:ownerReferences:
          .: {}
          k:{"uid":"b5b6d945-d176-42cf-b818-e7f3df095305"}:
            .: {}
            f:apiVersion: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
    manager: cluster-kube-descheduler-operator
    operation: Update
    time: "2020-07-29T12:27:26Z"
  name: cluster
  namespace: openshift-kube-descheduler-operator
  ownerReferences:
  - apiVersion: v1beta1
    kind: KubeDescheduler
    name: cluster
    uid: b5b6d945-d176-42cf-b818-e7f3df095305
  resourceVersion: "407253"
  selfLink: /api/v1/namespaces/openshift-kube-descheduler-operator/configmaps/cluster
  uid: 63fec6db-8d6e-4e78-be1d-e04f615e754c


[ramakasturinarra@dhcp35-60 verification-tests]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-07-25-091217   True        False         6h24m   Cluster version is 4.6.0-0.nightly-2020-07-25-091217
[ramakasturinarra@dhcp35-60 verification-tests]$ oc get csv
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.6.0-202007271331.p0   Kube Descheduler Operator   4.6.0-202007271331.p0              Succeeded

Based on the above, moving the bug to verified state.

Comment 17 errata-xmlrpc 2020-10-27 16:00:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

