Bug 1896697
| Summary: | [Descheduler] policy.yaml param in cluster configmap is empty | ||
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | RamaKasturi <knarra> |
| Component: | kube-scheduler | Assignee: | Mike Dame <mdame> |
| Status: | CLOSED ERRATA | QA Contact: | RamaKasturi <knarra> |
| Severity: | high | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 4.7 | CC: | aos-bugs, mdame, mfojtik |
| Target Milestone: | --- | ||
| Target Release: | 4.7.0 | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2021-02-24 15:32:36 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
The following errors are seen in the descheduler operator pod logs:
I1111 09:18:38.921788 1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-descheduler-operator", Name:"descheduler-operator", UID:"c46ec26f-b8ec-41a7-ac93-de8e3ad784c7", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ObservedConfigWriteError' Failed to write observed config: KubeDescheduler.operator.openshift.io "cluster" is invalid: spec.managementState: Invalid value: "": spec.managementState in body should match '^(Managed|Unmanaged|Force|Removed)$'
E1111 09:18:39.118726 1 base_controller.go:250] "ConfigObserver" controller failed to sync "key", err: error writing updated observed config: KubeDescheduler.operator.openshift.io "cluster" is invalid: spec.managementState: Invalid value: "": spec.managementState in body should match '^(Managed|Unmanaged|Force|Removed)$'
The policy file being empty is a symptom of the error you reported:

> base_controller.go:250] "ConfigObserver" controller failed to sync "key", err: error writing updated observed config: KubeDescheduler.operator.openshift.io "cluster" is invalid: spec.managementState: Invalid value: "": spec.managementState in body should match '^(Managed|Unmanaged|Force|Removed)$'

For future reference, errors containing "failed to sync 'key'" indicate that the operator is failing at some point, likely early in its sync cycle. They are the key indicators when debugging broken functionality such as the empty policy configmap you described.

In this case I think the cause is that the managementState field is not set in the example CR we provide to OperatorHub (https://github.com/openshift/cluster-kube-descheduler-operator/blob/master/manifests/4.7/cluster-kube-descheduler-operator.v4.7.0.clusterserviceversion.yaml#L8-L22). The errors look like OpenAPI validation of the regex allowed for that field. What is odd is that the validation fails in the client code, and not when the kubedescheduler/cluster custom resource itself is created.
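For illustration, a minimal sketch of what the example CR would look like with the field set explicitly (the apiVersion and surrounding fields here are assumptions drawn from the error message above, not the exact contents of the CSV):

apiVersion: operator.openshift.io/v1        # group taken from the validation error; the exact version may differ
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  managementState: Managed                  # must match '^(Managed|Unmanaged|Force|Removed)$'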
Verified in the payload below, and I see that managementState is now set to Managed and no errors are seen in the descheduler operator pod logs.

[knarra@knarra openshift-tests-private]$ oc get clusterversion
NAME VERSION AVAILABLE PROGRESSING SINCE STATUS
version 4.7.0-0.nightly-2020-11-12-022659 True False 5h40m Cluster version is 4.7.0-0.nightly-2020-11-12-022659
[knarra@knarra openshift-tests-private]$ oc get csv
NAME DISPLAY VERSION REPLACES PHASE
clusterkubedescheduleroperator.4.7.0-202011120342.p0 Kube Descheduler Operator 4.7.0-202011120342.p0 Succeeded
PIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ObservedConfigChanged' Writing updated observed config: map[string]interface{}{
+ "apiVersion": string("descheduler/v1alpha1"),
+ "kind": string("DeschedulerPolicy"),
+ "strategies": map[string]interface{}{
+ "RemoveDuplicates": map[string]interface{}{
+ "enabled": bool(true),
+ "params": map[string]interface{}{
+ "removeDuplicates": map[string]interface{}{"excludeOwnerKinds": []interface{}{string("ReplicaSet")}},
+ },
+ },
+ },
}
I1112 09:39:59.253038 1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-descheduler-operator", Name:"descheduler-operator", UID:"46ce1100-3694-4b97-b798-8df79e52fb24", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/cluster -n openshift-kube-descheduler-operator because it was missing
I1112 09:39:59.283281 1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-descheduler-operator", Name:"descheduler-operator", UID:"46ce1100-3694-4b97-b798-8df79e52fb24", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentCreated' Created Deployment.apps/cluster -n openshift-kube-descheduler-operator because it was missing
I1112 09:40:00.426408 1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-descheduler-operator", Name:"descheduler-operator", UID:"46ce1100-3694-4b97-b798-8df79e52fb24", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/cluster -n openshift-kube-descheduler-operator:
cause by changes in data.policy.yaml
I1112 09:40:00.438875 1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-descheduler-operator", Name:"descheduler-operator", UID:"46ce1100-3694-4b97-b798-8df79e52fb24", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/cluster -n openshift-kube-descheduler-operator because it changed
Based on the above, moving the bug to the verified state.
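For reference, a quick way to spot-check the rendered policy from the CLI is to read it back from the configmap the operator creates (the configmap name and namespace are taken from the events above; the jsonpath below assumes the data key is stored as "policy.yaml"):

$ oc get configmap cluster -n openshift-kube-descheduler-operator -o jsonpath='{.data.policy\.yaml}'

With the fix in place this should print the DeschedulerPolicy document instead of an empty string.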
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633
Description of problem:
The policy.yaml param in the cluster configmap is empty, and the kubeDescheduler object reports an error saying that managementState should be one of Managed, Unmanaged, Force, or Removed.

Version-Release number of selected component (if applicable):
[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-11-10-023606]$ oc get csv
NAME                                                    DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.7.0-202011031553.p0    Kube Descheduler Operator   4.7.0-202011031553.p0              Succeeded

How reproducible:
Always

Steps to Reproduce:
1. Install a 4.7 cluster.
2. From OperatorHub, install the descheduler operator.
3. Create a policy.cfg file with the contents below:

[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-11-10-023606]$ cat policy.cfg
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemoveDuplicates":
    enabled: true
    params:
      removeDuplicates:
        excludeOwnerKinds:
          - "ReplicaSet"

4. Create a configmap from the above policy.cfg file:

oc create configmap --from-file=policy.cfg descheduler-policy

5. Create the kubedescheduler object from the console, specifying the params below:

policy:
  name: descheduler-policy

Actual results:
The descheduler cluster pod gets created, but the policy.yaml file in the cluster configmap is empty; the kubedescheduler object also reports an error in the UI saying that managementState should be either Managed or Force.

Expected results:
The policy.yaml file in the cluster configmap should not be empty, and the kubedescheduler object should not report any error in the UI.

Additional info:
Jan set kubedescheduler/cluster to have .spec.managementState: Managed and things started working fine.
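For anyone hitting this before the fix lands, a sketch of that workaround applied with oc patch (this assumes the kubedescheduler/cluster CR lives in the operator's namespace, as in the install steps above):

$ oc patch kubedescheduler cluster -n openshift-kube-descheduler-operator --type=merge -p '{"spec":{"managementState":"Managed"}}'

Once the field is set to Managed, the operator should render policy.yaml into the cluster configmap as expected.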