Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1821772

Summary: [Descheduler] RemovePodsViolatingNodeAffinity does not evict pods even when there are viable nodes which can fit them
Product: OpenShift Container Platform
Component: kube-scheduler
Version: 4.4
Target Release: 4.4.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Reporter: Mike Dame <mdame>
Assignee: Mike Dame <mdame>
QA Contact: RamaKasturi <knarra>
CC: aos-bugs, knarra, maszulik, mdame, mfojtik
Clone Of: Bug 1820253
Bug Depends On: 1820253
Last Closed: 2020-05-04 11:48:34 UTC

Description Mike Dame 2020-04-07 15:09:36 UTC
+++ This bug was initially created as a clone of Bug #1820253 +++

Description of problem:
I see that RemovePodsViolatingNodeAffinity does not evict pods even when there are viable nodes which can fit them.

Version-Release number of selected component (if applicable):
4.4.0-0.nightly-2020-04-01-080616

How reproducible:
Always

Steps to Reproduce:
1. Configure the descheduler operator on a cluster with 3 worker nodes and make sure that all of them are schedulable
2. Apply the "RemovePodsViolatingNodeAffinity" strategy (see the example policy after this list)
3. Create pods with "oc run hello --image=openshift/hello-openshift:latest --replicas=2"
4. Edit the resulting DeploymentConfig (dc) and add the node affinity below under the pod template's spec:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: e2e-az-NorthSouth
            operator: In
            values:
            - e2e-az-North
            - e2e-az-South
5. Label a node with "oc label node nodeA e2e-az-NorthSouth=e2e-az-North"
6. Verify that the pods start running on NodeA, where the label was added.
7. Remove the label from NodeA and add it to NodeB (commands shown after this list)
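
For step 2, enabling the strategy ultimately amounts to the descheduler running with a policy along these lines (a sketch in the upstream v1alpha1 policy format; on OpenShift the operator generates this from the KubeDescheduler CR, whose exact fields depend on the operator version):

apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingNodeAffinity":
    enabled: true
    params:
      nodeAffinityType:
      - "requiredDuringSchedulingIgnoredDuringExecution"

Steps 5 and 7 amount to moving the label from one node to the other, e.g.:

oc label node nodeA e2e-az-NorthSouth=e2e-az-North   # step 5
oc label node nodeA e2e-az-NorthSouth-               # step 7: remove from NodeA
oc label node nodeB e2e-az-NorthSouth=e2e-az-North   # step 7: add to NodeB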

Actual results:
The pods are not evicted and do not get rescheduled on NodeB, even though the descheduler log reports that they no longer fit on NodeA:

I0402 14:24:49.791129       1 node_affinity.go:41] Executing for nodeAffinityType: requiredDuringSchedulingIgnoredDuringExecution
I0402 14:24:49.791158       1 node_affinity.go:46] Processing node: "ip-10-0-143-136.us-east-2.compute.internal"
I0402 14:24:49.825872       1 node_affinity.go:46] Processing node: "ip-10-0-149-239.us-east-2.compute.internal"
I0402 14:24:49.869452       1 node_affinity.go:46] Processing node: "ip-10-0-151-123.us-east-2.compute.internal"
I0402 14:24:49.965682       1 node_affinity.go:46] Processing node: "ip-10-0-168-150.us-east-2.compute.internal"
I0402 14:24:50.065238       1 node_affinity.go:46] Processing node: "ip-10-0-170-132.us-east-2.compute.internal"
I0402 14:24:50.167936       1 node_affinity.go:46] Processing node: "ip-10-0-141-59.us-east-2.compute.internal"
I0402 14:24:50.286970       1 node.go:158] Pod hello-2-lmcns does not fit on node ip-10-0-141-59.us-east-2.compute.internal
I0402 14:24:50.287027       1 node.go:158] Pod hello-2-mj6st does not fit on node ip-10-0-141-59.us-east-2.compute.internal
I0402 14:24:50.287069       1 node_affinity.go:73] Evicted 0 pods
I0402 14:25:50.287284       1 node_affinity.go:41] Executing for nodeAffinityType: requiredDuringSchedulingIgnoredDuringExecution

Expected results:
The pods should be evicted and rescheduled on NodeB.

Additional info:

--- Additional comment from Mike Dame on 2020-04-02 15:32:20 UTC ---

Upstream PR which I believe will fix this: https://github.com/kubernetes-sigs/descheduler/pull/256
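
For context, the strategy is expected to evict a pod only when the pod's required node affinity is no longer satisfied on its current node but is satisfied on some other node. A minimal Go sketch of that decision follows (simplified: only the "In" operator from this reproducer is handled, and the real descheduler also considers resources, taints, and pod evictability; the function names here are illustrative, not the upstream API):

package sketch

import v1 "k8s.io/api/core/v1"

// matchesRequiredNodeAffinity reports whether the node's labels satisfy
// the pod's requiredDuringSchedulingIgnoredDuringExecution node affinity.
// Simplified to the "In" operator used in this reproducer.
func matchesRequiredNodeAffinity(pod *v1.Pod, node *v1.Node) bool {
    aff := pod.Spec.Affinity
    if aff == nil || aff.NodeAffinity == nil ||
        aff.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution == nil {
        return true // nothing required, nothing violated
    }
    req := aff.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution
    // NodeSelectorTerms are ORed; matchExpressions within a term are ANDed.
    for _, term := range req.NodeSelectorTerms {
        termMatches := true
        for _, expr := range term.MatchExpressions {
            if expr.Operator != v1.NodeSelectorOpIn {
                continue // simplification: ignore other operators
            }
            matched := false
            for _, val := range expr.Values {
                if node.Labels[expr.Key] == val {
                    matched = true
                }
            }
            if !matched {
                termMatches = false
            }
        }
        if termMatches {
            return true
        }
    }
    return false
}

// shouldEvict is the intended strategy decision: the pod violates its
// required affinity where it runs, and at least one other node fits it.
func shouldEvict(pod *v1.Pod, current *v1.Node, nodes []*v1.Node) bool {
    if matchesRequiredNodeAffinity(pod, current) {
        return false
    }
    for _, n := range nodes {
        if n.Name != current.Name && matchesRequiredNodeAffinity(pod, n) {
            return true
        }
    }
    return false
}

In the log above, the descheduler does detect that the pods no longer fit on their current node, yet it reports "Evicted 0 pods" even though NodeB carried the required label, i.e. the second half of the check never fired.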

Comment 1 Mike Dame 2020-04-07 17:54:33 UTC
4.4 backport for this: https://github.com/openshift/descheduler/pull/27

Comment 6 errata-xmlrpc 2020-05-04 11:48:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581