Bug 1896697 - [Descheduler] policy.yaml param in cluster configmap is empty
Summary: [Descheduler] policy.yaml param in cluster configmap is empty
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-scheduler
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.7.0
Assignee: Mike Dame
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-11-11 10:15 UTC by RamaKasturi
Modified: 2021-02-24 15:33 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-24 15:32:36 UTC
Target Upstream Version:
Embargoed:




Links
Github openshift/cluster-kube-descheduler-operator pull 151 (open): Bug 1896697: Add ManagementState to sample CR (last updated 2020-11-11 17:01:40 UTC)
Red Hat Product Errata RHSA-2020:5633 (last updated 2021-02-24 15:33:22 UTC)

Description RamaKasturi 2020-11-11 10:15:22 UTC
Description of problem:
The policy.yaml param in the cluster configmap is empty, and the kubeDescheduler object reports an error saying that managementState should be either Managed or Force.

Version-Release number of selected component (if applicable):
[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-11-10-023606]$ oc get csv
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.7.0-202011031553.p0   Kube Descheduler Operator   4.7.0-202011031553.p0              Succeeded


How reproducible:
Always

Steps to Reproduce:
1. Install 4.7 cluster
2. From OperatorHub install descheduler operator
3. Create a policy.cfg file with the contents below:
[knarra@knarra openshift-client-linux-4.7.0-0.nightly-2020-11-10-023606]$ cat policy.cfg 
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemoveDuplicates":
     enabled: true
     params:
       removeDuplicates:
         excludeOwnerKinds:
         - "ReplicaSet"

4. Create a configmap from the above policy.cfg file:
oc create configmap --from-file=policy.cfg descheduler-policy
5. Create the kubedescheduler object from the console, specifying the params below (the resulting object is sketched after these steps):
policy:
  name: descheduler-policy
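
For reference, the full kubedescheduler object created in step 5 would look roughly like the sketch below (the apiVersion, namespace, and interval value are assumed from the 4.7 operand defaults and may differ):

apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  deschedulingIntervalSeconds: 3600
  policy:
    name: descheduler-policy

Note that managementState is not set here; that omission is what triggers the error described below.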


Actual results:
The descheduler cluster pod gets created, but the policy.yaml file in the cluster configmap is empty; the kubedescheduler object also reports an error in the UI saying that managementState should be either Managed or Force.

Expected results:
The policy.yaml file in the cluster configmap should not be empty, and the kubedescheduler object should not report any error in the UI.

Additional info:
Jan updated kubedescheduler/cluster to have .spec.managementState: Managed, and things started working fine.
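
A minimal sketch of that workaround, assuming the default object name and namespace used elsewhere in this bug:

# set .spec.managementState on the existing kubedescheduler CR
oc patch kubedescheduler cluster -n openshift-kube-descheduler-operator \
  --type merge -p '{"spec":{"managementState":"Managed"}}'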

Comment 1 RamaKasturi 2020-11-11 13:05:26 UTC
Below errors are seen in the descheduler operator pod logs:

I1111 09:18:38.921788       1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-descheduler-operator", Name:"descheduler-operator", UID:"c46ec26f-b8ec-41a7-ac93-de8e3ad784c7", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Warning' reason: 'ObservedConfigWriteError' Failed to write observed config: KubeDescheduler.operator.openshift.io "cluster" is invalid: spec.managementState: Invalid value: "": spec.managementState in body should match '^(Managed|Unmanaged|Force|Removed)$'
E1111 09:18:39.118726       1 base_controller.go:250] "ConfigObserver" controller failed to sync "key", err: error writing updated observed config: KubeDescheduler.operator.openshift.io "cluster" is invalid: spec.managementState: Invalid value: "": spec.managementState in body should match '^(Managed|Unmanaged|Force|Removed)$'
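
The field the validation is complaining about can be inspected with something like this (a sketch, assuming the default object name and namespace):

# prints nothing when spec.managementState is unset, matching the Invalid value: "" errors above
oc get kubedescheduler cluster -n openshift-kube-descheduler-operator \
  -o jsonpath='{.spec.managementState}'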

Comment 2 Mike Dame 2020-11-11 16:55:57 UTC
The policy file being empty is a symptom of the error you reported:

> base_controller.go:250] "ConfigObserver" controller failed to sync "key", err: error writing updated observed config: KubeDescheduler.operator.openshift.io "cluster" is invalid: spec.managementState: Invalid value: "": spec.managementState in body should match '^(Managed|Unmanaged|Force|Removed)$'

For future reference, errors with "failed to sync 'key'" indicate that the operator is failing to run at a certain point, likely early in its sync cycle. These are the key indicators when debugging broken functionality such as what you described with the policy configmap.
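
For example, such errors can be pulled out of the operator logs with something like this (a sketch; the deployment name is taken from the events quoted above):

oc logs deployment/descheduler-operator -n openshift-kube-descheduler-operator | grep 'failed to sync'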

In this case I think it is the fact that the managementState field is not set in the example CR we provide to operator hub (https://github.com/openshift/cluster-kube-descheduler-operator/blob/master/manifests/4.7/cluster-kube-descheduler-operator.v4.7.0.clusterserviceversion.yaml#L8-L22). The errors look like OpenAPI validation for the valid regex in that field. What's weird is that this is failing in the client code, and not when creating the kubedescheduler/cluster custom resource itself.
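
Presumably the fix just adds that field to the sample CR, along the lines of this sketch (the exact change is in the linked PR 151; the remaining fields here are assumed):

apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  managementState: Managed
  deschedulingIntervalSeconds: 3600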

Comment 4 RamaKasturi 2020-11-12 11:53:23 UTC
Verified in the payload below; I see that managementState is now set to Managed, and no errors are seen in the descheduler operator pod logs.

[knarra@knarra openshift-tests-private]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2020-11-12-022659   True        False         5h40m   Cluster version is 4.7.0-0.nightly-2020-11-12-022659
[knarra@knarra openshift-tests-private]$ oc get csv
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.7.0-202011120342.p0   Kube Descheduler Operator   4.7.0-202011120342.p0              Succeeded

PIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ObservedConfigChanged' Writing updated observed config:   map[string]interface{}{
+ 	"apiVersion": string("descheduler/v1alpha1"),
+ 	"kind":       string("DeschedulerPolicy"),
+ 	"strategies": map[string]interface{}{
+ 		"RemoveDuplicates": map[string]interface{}{
+ 			"enabled": bool(true),
+ 			"params": map[string]interface{}{
+ 				"removeDuplicates": map[string]interface{}{"excludeOwnerKinds": []interface{}{string("ReplicaSet")}},
+ 			},
+ 		},
+ 	},
  }
I1112 09:39:59.253038       1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-descheduler-operator", Name:"descheduler-operator", UID:"46ce1100-3694-4b97-b798-8df79e52fb24", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapCreated' Created ConfigMap/cluster -n openshift-kube-descheduler-operator because it was missing
I1112 09:39:59.283281       1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-descheduler-operator", Name:"descheduler-operator", UID:"46ce1100-3694-4b97-b798-8df79e52fb24", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentCreated' Created Deployment.apps/cluster -n openshift-kube-descheduler-operator because it was missing
I1112 09:40:00.426408       1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-descheduler-operator", Name:"descheduler-operator", UID:"46ce1100-3694-4b97-b798-8df79e52fb24", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'ConfigMapUpdated' Updated ConfigMap/cluster -n openshift-kube-descheduler-operator:
cause by changes in data.policy.yaml
I1112 09:40:00.438875       1 event.go:282] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-descheduler-operator", Name:"descheduler-operator", UID:"46ce1100-3694-4b97-b798-8df79e52fb24", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'DeploymentUpdated' Updated Deployment.apps/cluster -n openshift-kube-descheduler-operator because it changed

Based on the above, moving the bug to verified state.
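
For completeness, the populated policy can also be checked directly in the cluster configmap with something like this (a sketch, assuming the configmap name shown in the events above):

oc get configmap cluster -n openshift-kube-descheduler-operator \
  -o jsonpath='{.data.policy\.yaml}'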

Comment 7 errata-xmlrpc 2021-02-24 15:32:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

