Description of problem:
nodeResourceUtilizationThresholds: {} gets added as a param to every strategy listed in the cluster configmap. It should not be added, since it is not a required param and it can confuse the user/customer reading the file.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Clone the descheduler operator GitHub repo.
2. Run `oc create -f manifests/` to create the resources.
3. Edit the file 02_kube-descheduler-operator.cr.yaml and add the strategies below:

     strategies:
       - name: duplicates
       - name: interpodantiaffinity
       - name: nodeaffinity
       - name: nodetaints
       - name: lownodeutilization
         params:
           - name: cputhreshold
             value: "10"
           - name: memorythreshold
             value: "20"
           - name: podsthreshold
             value: "30"
           - name: memorytargetthreshold
             value: "40"
           - name: cputargetthreshold
             value: "50"
           - name: podstargetthreshold
             value: "60"
           - name: nodes
             value: "3"

4. Check whether all of them got populated in the cluster configmap.

Actual results:
All the strategies get populated in the configmap, but an extra param, nodeResourceUtilizationThresholds: {}, is also added to every strategy. This does not impact any functionality as such.

Expected results:
The extra param nodeResourceUtilizationThresholds: {} should not be added to all strategies, as it is confusing to the customer or user reading the file.
Additional info:
[ramakasturinarra@dhcp35-60 ocp_files]$ oc get configmap cluster -o yaml
apiVersion: v1
data:
  policy.yaml: |
    strategies:
      LowNodeUtilization:
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            numberOfNodes: 1
            targetThresholds:
              cpu: 50
              memory: 50
              pods: 20
            thresholds:
              cpu: 50
              memory: 50
              pods: 20
      RemoveDuplicates:
        enabled: true
        params:
          nodeResourceUtilizationThresholds: {}
      RemovePodsViolatingInterPodAntiAffinity:
        enabled: true
        params:
          nodeResourceUtilizationThresholds: {}
      RemovePodsViolatingNodeAffinity:
        enabled: true
        params:
          nodeAffinityType:
          - requiredDuringSchedulingIgnoredDuringExecution
          nodeResourceUtilizationThresholds: {}
      RemovePodsViolatingNodeTaints:
        enabled: true
        params:
          nodeResourceUtilizationThresholds: {}
It happens because params.nodeResourceUtilizationThresholds is a struct, not a pointer to a struct, so the omitempty option in the field's tag has no effect. To fix it, we need to update the field in https://github.com/kubernetes-sigs/descheduler first.

Was there any particular decision to have NodeResourceUtilizationThresholds [1] be a struct value rather than a pointer to the NodeResourceUtilizationThresholds type?

[1] https://github.com/kubernetes-sigs/descheduler/blob/master/pkg/api/types.go#L49
It seems like a bug, since that field is intended to be omitempty: https://github.com/kubernetes-sigs/descheduler/blob/master/pkg/api/v1alpha1/types.go#L49. I don't know if the internal type also needs to be a pointer, but I don't see why not make the versioned type a pointer
Upstream issue: https://github.com/kubernetes-sigs/descheduler/pull/240

> I don't know if the internal type also needs to be a pointer, but I don't see why not make the versioned type a pointer

AFAIK, the internal type can be changed without worrying about backward compatibility; that is its primary purpose. I prefer to change the internal type as well, to keep it as close to the versioned one as possible.
This might slip 4.4, since we're waiting for upstream, moving this to 4.5 for now.
Moving the bug to VERIFIED as I do not see the issue reported here anymore. However, I do see a param with empty braces ({}) still present; I will report a separate bug and see if that can be fixed.

apiVersion: v1
data:
  policy.yaml: |
    strategies:
      LowNodeUtilization:
        enabled: true
        params:
          nodeResourceUtilizationThresholds:
            numberOfNodes: 3
            targetThresholds:
              cpu: 10
              memory: 20
              pods: 30
            thresholds:
              cpu: 10
              memory: 20
              pods: 30
      RemoveDuplicates:
        enabled: true
        params: {}
      RemovePodsHavingTooManyRestarts:
        enabled: true
        params:
          podsHavingTooManyRestarts:
            podRestartThreshold: 10
      RemovePodsViolatingInterPodAntiAffinity:
        enabled: true
        params: {}
      RemovePodsViolatingNodeAffinity:
        enabled: true
        params:
          nodeAffinityType:
          - requiredDuringSchedulingIgnoredDuringExecution
      RemovePodsViolatingNodeTaints:
        enabled: true
        params: {}

Verified in the payload below:
===============================
[ramakasturinarra@dhcp35-60 ocp_files]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.5.0-0.nightly-2020-05-17-235851   True        False         108m    Cluster version is 4.5.0-0.nightly-2020-05-17-235851
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409