Bug 1877892
| Summary: | [Descheduler] values in the configmap is still shown as null for include & exclude even after setting the right value | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | RamaKasturi <knarra> |
| Component: | kube-scheduler | Assignee: | Mike Dame <mdame> |
| Status: | CLOSED ERRATA | QA Contact: | RamaKasturi <knarra> |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 4.6 | CC: | aos-bugs, jkaur, mdame, mfojtik, mvardhan, rekhan |
| Target Milestone: | --- | | |
| Target Release: | 4.6.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-10-27 16:39:40 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Verified with the payload below, and I see that include & exclude have the right values after setting them in the KubeDescheduler object.
[ramakasturinarra@dhcp35-60 openshift-client-linux-4.6.0-0.nightly-2020-09-16-000734]$ oc get csv
NAME                                                    DISPLAY                     VERSION                  REPLACES   PHASE
clusterkubedescheduleroperator.4.6.0-202009152100.p0    Kube Descheduler Operator   4.6.0-202009152100.p0               Succeeded
[ramakasturinarra@dhcp35-60 openshift-client-linux-4.6.0-0.nightly-2020-09-16-000734]$ oc get configmap cluster -o yaml
apiVersion: v1
data:
  policy.yaml: |
    strategies:
      LowNodeUtilization:
        enabled: true
        params:
          namespaces: null
          nodeResourceUtilizationThresholds:
            targetThresholds:
              cpu: 50
              memory: 40
              pods: 60
            thresholds:
              cpu: 10
              memory: 20
              pods: 30
          thresholdPriority: null
          thresholdPriorityClassName: ""
      PodLifeTime:
        enabled: true
        params:
          maxPodLifeTimeSeconds: 3600
          namespaces:
            exclude:
            - my-project1
            include:
            - my-project
          thresholdPriority: null
          thresholdPriorityClassName: system-cluster-critical
      RemoveDuplicates:
        enabled: true
        params:
          namespaces: null
          removeDuplicates: {}
          thresholdPriority: null
          thresholdPriorityClassName: ""
      RemovePodsHavingTooManyRestarts:
        enabled: true
        params:
          namespaces:
            exclude: null
            include: null
          podsHavingTooManyRestarts:
            podRestartThreshold: 10
          thresholdPriority: null
          thresholdPriorityClassName: ""
      RemovePodsViolatingInterPodAntiAffinity:
        enabled: true
        params:
          namespaces:
            exclude: null
            include: null
          thresholdPriority: null
          thresholdPriorityClassName: ""
@jaspreet, I also tested comment 2 and it works fine without any issues.
Based on the above, moving the bug to the verified state.
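For context, here is a minimal sketch of the kind of KubeDescheduler resource that would drive the PodLifeTime entries in the configmap above, using the name/value params format from the original description below. The apiVersion group (operator.openshift.io/v1beta1) and the excludeNamespaces parameter name are assumptions rather than values quoted from this report, and only the PodLifeTime strategy is shown.

apiVersion: operator.openshift.io/v1beta1   # assumed group/version; the report itself only shows "apiVersion: v1beta1" in ownerReferences
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  deschedulingIntervalSeconds: 3600
  strategies:
  - name: PodLifeTime
    params:
    - name: maxPodLifeTimeSeconds
      value: "3600"
    - name: includeNamespaces          # rendered as namespaces.include in the generated policy.yaml
      value: my-project
    - name: excludeNamespaces          # assumed counterpart of includeNamespaces; rendered as namespaces.exclude
      value: my-project1
    - name: thresholdPriorityClassName
      value: system-cluster-critical

With the fix in place, applying a spec like this should cause the operator to regenerate the cluster configmap with include/exclude populated as in the verified output above.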
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.
Description of problem:
Values in the configmap are still seen as null for the include & exclude params even after setting the right value.

Version-Release number of selected component (if applicable):
[ramakasturinarra@dhcp35-60 openshift-client-linux-4.6.0-0.nightly-2020-09-10-031249]$ ./oc version
Client Version: 4.6.0-0.nightly-2020-09-10-031249
Server Version: 4.6.0-0.nightly-2020-09-09-040238
Kubernetes Version: v1.19.0-rc.2+068702d

How reproducible:
Always

Steps to Reproduce:
1. Install the latest 4.6 cluster.
2. Add the PodLifeTime strategy as below:
   strategies:
   - name: PodLifeTime
     params:
     - name: maxPodLifeTimeSeconds
       value: "3600"
3. Now edit the kubedescheduler object and add the other params as below:
   - name: includeNamespaces
     value: my-project

Actual results:
I see that the cluster pod does not get respun, and the values in the configmap do not change.

oc get kubedescheduler cluster -o yaml
===========================================
spec:
  deschedulingIntervalSeconds: 3600
  image: registry.redhat.io/openshift4/ose-descheduler@sha256:ac21a65ec072db9b9c66c1c6aed940428c9313cc7870cc3976bebf3c5772cde7
  strategies:
  - name: PodLifeTime
    params:
    - name: maxPodLifeTimeSeconds
      value: "3600"
    - name: includeNamespaces
      value: my-project

oc get configmap cluster -o yaml
=================================
[ramakasturinarra@dhcp35-60 openshift-client-linux-4.6.0-0.nightly-2020-09-10-031249]$ oc get configmap cluster -o yaml
apiVersion: v1
data:
  policy.yaml: |
    strategies:
      PodLifeTime:
        enabled: true
        params:
          maxPodLifeTimeSeconds: 3600
          namespaces:
            exclude: null
            include: null
          thresholdPriority: null
          thresholdPriorityClassName: ""
kind: ConfigMap
metadata:
  creationTimestamp: "2020-09-10T16:37:56Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:policy.yaml: {}
      f:metadata:
        f:ownerReferences:
          .: {}
          k:{"uid":"d75acd99-551c-40c7-989d-48b08d2bda90"}:
            .: {}
            f:apiVersion: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
    manager: cluster-kube-descheduler-operator
    operation: Update
    time: "2020-09-10T16:37:56Z"
  name: cluster
  namespace: openshift-kube-descheduler-operator
  ownerReferences:
  - apiVersion: v1beta1
    kind: KubeDescheduler
    name: cluster
    uid: d75acd99-551c-40c7-989d-48b08d2bda90
  resourceVersion: "58283"
  selfLink: /api/v1/namespaces/openshift-kube-descheduler-operator/configmaps/cluster
  uid: 2afd4f20-fb01-4c7f-90f7-094afa866ef2

Expected results:
Values of the params should be set correctly and should not show null.

Additional info:
The same thing happens for thresholdPriority & thresholdPriorityClassName as well.

spec:
  deschedulingIntervalSeconds: 3600
  image: registry.redhat.io/openshift4/ose-descheduler@sha256:ac21a65ec072db9b9c66c1c6aed940428c9313cc7870cc3976bebf3c5772cde7
  strategies:
  - name: PodLifeTime
    params:
    - name: maxPodLifeTimeSeconds
      value: "3600"
    - name: includeNamespaces
      value: my-project
    - name: thresholdPriorityClassName
      value: system-cluster-critical

[ramakasturinarra@dhcp35-60 openshift-client-linux-4.6.0-0.nightly-2020-09-10-031249]$ oc get configmap cluster -o yaml
apiVersion: v1
data:
  policy.yaml: |
    strategies:
      PodLifeTime:
        enabled: true
        params:
          maxPodLifeTimeSeconds: 3600
          namespaces:
            exclude: null
            include: null
          thresholdPriority: null
          thresholdPriorityClassName: ""
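For comparison, here is a sketch of how the PodLifeTime entry in the generated policy.yaml would be expected to render once the includeNamespaces and thresholdPriorityClassName params are honored. It is modeled on the verified output earlier in this report rather than captured from a cluster, and only the fields relevant to this bug are shown.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
data:
  policy.yaml: |
    strategies:
      PodLifeTime:
        enabled: true
        params:
          maxPodLifeTimeSeconds: 3600
          namespaces:
            exclude: null
            include:
            - my-project                                        # populated from includeNamespaces instead of null
          thresholdPriority: null
          thresholdPriorityClassName: system-cluster-critical   # populated from thresholdPriorityClassName instead of ""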