Description of problem:
I see that when there are no parameters present for a strategy it just prints "params: {}". I think we could make the struct not print anything when there is no value.

Version-Release number of selected component (if applicable):
[ramakasturinarra@dhcp35-60 ocp_files]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-05-17-235851   True        False         3h27m   Cluster version is 4.5.0-0.nightly-2020-05-17-235851

How reproducible:
Always

Steps to Reproduce:
1. Install a 4.5 cluster
2. Enable all the strategies for the descheduler by editing the kubedescheduler cluster object
3. Run the command "oc get configmap cluster -o yaml"

Actual results:
The user can see that the struct is printed with "params: {}":

RemoveDuplicates:
  enabled: true
  params: {}
RemovePodsHavingTooManyRestarts:
  enabled: true
  params:
    podsHavingTooManyRestarts: {}
RemovePodsViolatingInterPodAntiAffinity:
  enabled: true
  params: {}
RemovePodsViolatingNodeAffinity:
  enabled: true
  params:
    nodeAffinityType:
    - requiredDuringSchedulingIgnoredDuringExecution
RemovePodsViolatingNodeTaints:
  enabled: true
  params: {}

Expected results:
If "params" is empty, we could simply not show it.

Additional info:
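For reference, a minimal Go sketch of why the braces show up (the type and field names below are placeholders, not necessarily the actual descheduler API types): encoding/json's omitempty does not treat an empty struct value as "empty", so a strategy with no parameters still marshals with "params: {}".

package main

import (
	"fmt"

	"sigs.k8s.io/yaml" // marshals via JSON, so the json tags apply
)

// Placeholder stand-ins for the strategy types.
type StrategyParameters struct {
	NodeAffinityType []string `json:"nodeAffinityType,omitempty"`
}

type DeschedulerStrategy struct {
	Enabled bool `json:"enabled"`
	// Value field: omitempty has no effect on an empty struct value.
	Params StrategyParameters `json:"params,omitempty"`
}

func main() {
	out, err := yaml.Marshal(DeschedulerStrategy{Enabled: true})
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
	// Prints:
	// enabled: true
	// params: {}
}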
PRs in the queue.
The code needs to be changed in the descheduler as well. Upstream PR: https://github.com/kubernetes-sigs/descheduler/pull/296
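A hedged sketch of the kind of change involved (the real diff is in the linked PR; the names below are the same placeholders as above): making the params field a pointer lets omitempty drop the key entirely when no parameters are set.

package main

import (
	"fmt"

	"sigs.k8s.io/yaml"
)

type StrategyParameters struct {
	NodeAffinityType []string `json:"nodeAffinityType,omitempty"`
}

type DeschedulerStrategy struct {
	Enabled bool `json:"enabled"`
	// Pointer field: a nil pointer is omitted by omitempty.
	Params *StrategyParameters `json:"params,omitempty"`
}

func main() {
	out, _ := yaml.Marshal(DeschedulerStrategy{Enabled: true})
	fmt.Print(string(out))
	// Prints:
	// enabled: true
}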
Moving the bug to assigned state based on comment 4
Not a blocker, moving to 4.6. If the upstream PR gets merged before Fr, it can still be fixed in 4.5.
This was fixed with the recent sync Mike did in https://github.com/openshift/descheduler/pull/33; moving to verification.
Moving this back to assigned; the change hasn't been picked up in the operator yet (bumping with https://github.com/openshift/cluster-kube-descheduler-operator/pull/114).
Verified the bug on the latest master and I no longer see the empty value when the struct is printed. I will change the bug state once I verify it with the downstream operator.

[ramakasturinarra@dhcp35-60 ~]$ oc get configmap cluster -o yaml
apiVersion: v1
data:
  policy.yaml: |
    strategies:
      LowNodeUtilization:
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            numberOfNodes: 3
            targetThresholds:
              cpu: 10
              memory: 20
              pods: 30
            thresholds:
              cpu: 10
              memory: 20
              pods: 30
      RemoveDuplicates:
        enabled: true
      RemovePodsHavingTooManyRestarts:
        enabled: true
        params:
          podsHavingTooManyRestarts:
            includingInitContainers: true
            podRestartThreshold: 10
      RemovePodsViolatingInterPodAntiAffinity:
        enabled: true
      RemovePodsViolatingNodeAffinity:
        enabled: true
        params:
          nodeAffinityType:
          - requiredDuringSchedulingIgnoredDuringExecution
      RemovePodsViolatingNodeTaints:
        enabled: true
kind: ConfigMap
metadata:
  creationTimestamp: "2020-07-17T11:20:34Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:policy.yaml: {}
      f:metadata:
        f:ownerReferences:
          .: {}
          k:{"uid":"e45629c6-3ec1-443d-98bc-262e241cad54"}:
            .: {}
            f:apiVersion: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
    manager: cluster-kube-descheduler-operator
    operation: Update
    time: "2020-07-17T11:33:10Z"
  name: cluster
  namespace: openshift-kube-descheduler-operator
  ownerReferences:
  - apiVersion: v1beta1
    kind: KubeDescheduler
    name: cluster
    uid: e45629c6-3ec1-443d-98bc-262e241cad54
  resourceVersion: "149566"
  selfLink: /api/v1/namespaces/openshift-kube-descheduler-operator/configmaps/cluster
  uid: ed98b8d1-f80f-49e1-af84-706e620c5775
Tried verifying the same in the downstream build and it works fine.

[ramakasturinarra@dhcp35-60 verification-tests]$ oc get configmap cluster -o yaml
apiVersion: v1
data:
  policy.yaml: |
    strategies:
      LowNodeUtilization:
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            numberOfNodes: 3
            targetThresholds:
              cpu: 10
              memory: 20
              pods: 30
            thresholds:
              cpu: 10
              memory: 20
              pods: 30
      RemoveDuplicates:
        enabled: true
      RemovePodsHavingTooManyRestarts:
        enabled: true
        params:
          podsHavingTooManyRestarts:
            includingInitContainers: true
            podRestartThreshold: 10
      RemovePodsViolatingInterPodAntiAffinity:
        enabled: true
      RemovePodsViolatingNodeAffinity:
        enabled: true
        params:
          nodeAffinityType:
          - requiredDuringSchedulingIgnoredDuringExecution
      RemovePodsViolatingNodeTaints:
        enabled: true
kind: ConfigMap
metadata:
  creationTimestamp: "2020-07-29T12:27:26Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:policy.yaml: {}
      f:metadata:
        f:ownerReferences:
          .: {}
          k:{"uid":"b5b6d945-d176-42cf-b818-e7f3df095305"}:
            .: {}
            f:apiVersion: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
    manager: cluster-kube-descheduler-operator
    operation: Update
    time: "2020-07-29T12:27:26Z"
  name: cluster
  namespace: openshift-kube-descheduler-operator
  ownerReferences:
  - apiVersion: v1beta1
    kind: KubeDescheduler
    name: cluster
    uid: b5b6d945-d176-42cf-b818-e7f3df095305
  resourceVersion: "407253"
  selfLink: /api/v1/namespaces/openshift-kube-descheduler-operator/configmaps/cluster
  uid: 63fec6db-8d6e-4e78-be1d-e04f615e754c

[ramakasturinarra@dhcp35-60 verification-tests]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-07-25-091217   True        False         6h24m   Cluster version is 4.6.0-0.nightly-2020-07-25-091217

[ramakasturinarra@dhcp35-60 verification-tests]$ oc get csv
NAME                                                    DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.6.0-202007271331.p0    Kube Descheduler Operator   4.6.0-202007271331.p0              Succeeded

Based on the above, moving the bug to verified state.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196