Bug 1957012 - Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap
Summary: Deleting the KubeDescheduler CR does not remove the corresponding deployment or configmap
Keywords:
Status: ASSIGNED
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-scheduler
Version: 4.7
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.8.0
Assignee: Mike Dame
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-05-04 20:39 UTC by Chad Scribner
Modified: 2021-05-06 07:02 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:



Description Chad Scribner 2021-05-04 20:39:16 UTC
Description of problem:
When the KubeDescheduler CR is deleted, the corresponding Deployment and ConfigMap are not removed with it, so the configuration from the CR continues to run and rebalance pods.

Version-Release number of selected component (if applicable):
OpenShift: 4.7.8
Kube Descheduler Operator: 4.7.0-202104142050.p0


How reproducible:
Always

Steps to Reproduce:
1. oc new-project test
2. oc create deployment testapp --image=registry.redhat.io/rhel8/httpd-24 --replicas=3
3. oc edit deploy/testapp (add the following block to the pod template spec, under spec.template.spec:)

```
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - testapp
              topologyKey: kubernetes.io/hostname
            weight: 100
```

4. Install the Descheduler: https://docs.openshift.com/container-platform/4.7/nodes/scheduling/nodes-descheduler.html#nodes-descheduler-installing_nodes-descheduler
5. Create a KubeDescheduler CR with the TopologyAndDuplicates profile and an interval of 30 seconds (see the example CR below). At this point, the pods will get rebalanced if need be.
6. Delete the KubeDescheduler CR
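
For reference, a minimal CR for step 5 might look like the following sketch; the apiVersion, kind, name, and namespace are copied from the CR dump in comment 1, while the 30-second interval and TopologyAndDuplicates profile are the values this reproducer uses:

```
apiVersion: operator.openshift.io/v1
kind: KubeDescheduler
metadata:
  name: cluster
  namespace: openshift-kube-descheduler-operator
spec:
  deschedulingIntervalSeconds: 30
  profiles:
  - TopologyAndDuplicates
```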

Actual results:
The pods will continue to get rebalanced even though the CR has been deleted.
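
The leftovers can be confirmed directly; a quick check (assuming the operator namespace and resource names shown in comment 1) is:

```
$ oc get kubedescheduler,deployment,configmap -n openshift-kube-descheduler-operator
```

After the CR is gone, the descheduler Deployment (and its ConfigMap) still show up in this listing.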

Expected results:
Deleting the CR should also remove the Deployment and ConfigMap, so that pods stop being rebalanced once the CR has been removed.

Additional info:

Comment 1 Mike Dame 2021-05-05 20:52:58 UTC
In the operator, we do set the Descheduler cluster deployment to have an ownerref referring to the CR: https://github.com/openshift/cluster-kube-descheduler-operator/blob/8903d09/pkg/operator/target_config_reconciler.go#L270-L276

And in my own testing, the OwnerRef UID set on the Deployment does match the UID of the CR (see output below). But then, deleting the CR does not delete the deployment. I wonder if this is a limitation of ownerrefs with CRDs, or if there's some other way we need to configure it.
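
One detail that stands out in the dumps below: the Deployment's ownerReference carries apiVersion: v1, while the CR itself is served as operator.openshift.io/v1. The garbage collector resolves owners by apiVersion/kind as well as UID, so a hypothetical reference it could resolve would look like this sketch (values copied from the dumps below; whether this is the actual fix is unverified here):

```
ownerReferences:
- apiVersion: operator.openshift.io/v1
  kind: KubeDescheduler
  name: cluster
  uid: b2fe822a-736b-4a9f-9f73-5552087433a3
```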

I also noticed that after deleting the CR, if you try to recreate it, the existing Deployment's ownerrefs do not get updated to the new CR's UID. However, this would be irrelevant if the Deployment was actually deleted when the CR was.
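
A quick way to compare the two UIDs after recreating the CR (a minimal check, assuming the resource names from the dumps below):

```
$ oc get kubedescheduler cluster -n openshift-kube-descheduler-operator -o jsonpath='{.metadata.uid}'
$ oc get deployment cluster -n openshift-kube-descheduler-operator -o jsonpath='{.metadata.ownerReferences[0].uid}'
```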

(Note: We actually first discovered this in https://bugzilla.redhat.com/show_bug.cgi?id=1913821, but it got forgotten when focus shifted to documenting how to fully uninstall the Descheduler)


> $ oc get -o yaml kubedescheduler/cluster
> apiVersion: operator.openshift.io/v1
> kind: KubeDescheduler
> metadata:
>   creationTimestamp: "2021-05-05T13:59:54Z"
>   generation: 1
>   name: cluster
>   namespace: openshift-kube-descheduler-operator
>   resourceVersion: "38161"
>   uid: b2fe822a-736b-4a9f-9f73-5552087433a3
> spec:
>   deschedulingIntervalSeconds: 3600
>   logLevel: Normal
>   managementState: Managed
>   operatorLogLevel: Normal
>   profiles:
>   - AffinityAndTaints
> status:
>   generations:
>   - group: apps
>     hash: ""
>     lastGeneration: 1
>     name: cluster
>     namespace: openshift-kube-descheduler-operator
>     resource: deployments
>   readyReplicas: 0
> 
> $ oc get -o yaml deployment.apps/cluster
> apiVersion: apps/v1
> kind: Deployment
> metadata:
>   annotations:
>     deployment.kubernetes.io/revision: "1"
>     operator.openshift.io/pull-spec: quay.io/openshift/origin-descheduler:4.7
>   creationTimestamp: "2021-05-05T14:06:28Z"
>   generation: 1
>   labels:
>     app: descheduler
>   name: cluster
>   namespace: openshift-kube-descheduler-operator
>   ownerReferences:
>   - apiVersion: v1
>     kind: KubeDescheduler
>     name: cluster
>     uid: b2fe822a-736b-4a9f-9f73-5552087433a3
>   resourceVersion: "38170"
>   uid: 4909677a-1dc4-4030-8aa6-3d5ef2b845c1
> ...

