Bug 2026109 - Altering the Scheduler Profile configuration doesn't affect the placement of the pods
Summary: Altering the Scheduler Profile configuration doesn't affect the placement of the pods
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-scheduler
Version: 4.9
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.9.z
Assignee: Jan Chaloupka
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On: 2002300
Blocks: 2026110 2026111
 
Reported: 2021-11-23 19:11 UTC by Mike Dame
Modified: 2022-03-21 12:30 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 2002300
Clones: 2026110
Environment:
Last Closed: 2022-03-21 12:30:12 UTC
Target Upstream Version:
Embargoed:


Links:
- GitHub openshift/cluster-kube-scheduler-operator pull 379 (open): [release-4.9] Bug 2026109: Disable balancedAllocation and add weight for HighNodeUtilization profile - last updated 2022-02-05 01:00:56 UTC
- Red Hat Product Errata RHBA-2022:0861 - last updated 2022-03-21 12:30:28 UTC
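
The linked pull request changes the HighNodeUtilization profile so that the NodeResourcesBalancedAllocation score plugin is disabled and the NodeResourcesMostAllocated plugin is given a higher weight; the config dumps in comment 6 show exactly this before/after difference. As a sketch of how such a dump can be reproduced (assuming the ConfigMap name and data key the kube-scheduler operator uses in 4.x, config / config.yaml):

./oc get configmap config -n openshift-kube-scheduler -o jsonpath='{.data.config\.yaml}'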

Comment 1 Michal Fojtik 2021-12-25 05:22:32 UTC
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority.

If you have further information on the current state of the bug, please update it; otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant.

Additionally, you can add LifecycleFrozen to the Whiteboard if you think this bug should never be marked as stale. Please consult with the bug assignee before you do that.

Comment 6 RamaKasturi 2022-03-15 16:39:48 UTC
Verified with the build below and I see that the fix works fine. Since the changes are not visible in a cluster with three worker nodes, I tried the test with clusters that have 5 and 7 worker nodes; the results are below.

[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2022-03-14-141506   True        False         169m    Cluster version is 4.9.0-0.nightly-2022-03-14-141506


7-node worker cluster:
======================================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
httpd-6dcffcd64-btpr5     1/1     Running   0          17s   10.129.4.27   ip-10-0-140-66.us-east-2.compute.internal   <none>           <none>
httpd1-96bd5cc7c-t2k5r    1/1     Running   0          12s   10.129.4.28   ip-10-0-140-66.us-east-2.compute.internal   <none>           <none>
httpd2-856dcc466b-6n75b   1/1     Running   0          6s    10.129.4.29   ip-10-0-140-66.us-east-2.compute.internal   <none>           <none>
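
(For context: the three httpd test deployments were presumably created beforehand with commands along the lines of the following; <httpd-image> is a placeholder, not a value taken from this bug.)

./oc create deployment httpd --image=<httpd-image>
./oc create deployment httpd1 --image=<httpd-image>
./oc create deployment httpd2 --image=<httpd-image>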
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc scale --replicas=100 deployment httpd
deployment.apps/httpd scaled
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc scale --replicas=100 deployment httpd1
deployment.apps/httpd1 scaled
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc scale --replicas=100 deployment httpd2
deployment.apps/httpd2 scaled

Config before adding the profile:
=========================================
    score:
      enabled:
      - name: NodeResourcesBalancedAllocation
        weight: 1
      - name: ImageLocality
        weight: 1
      - name: InterPodAffinity
        weight: 1
      - name: NodeResourcesLeastAllocated
        weight: 1
      - name: NodeAffinity
        weight: 1
      - name: NodePreferAvoidPods
        weight: 10000
      - name: PodTopologySpread
        weight: 2
      - name: TaintToleration
        weight: 1
  schedulerName: default-scheduler

Before adding the profile:
==================================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-131-24.us-east-2.compute.internal | wc -l
37
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-132-34.us-east-2.compute.internal | wc -l
37
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-140-66.us-east-2.compute.internal | wc -l
39
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-168-224.us-east-2.compute.internal | wc -l
48
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-169-14.us-east-2.compute.internal | wc -l
46
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-211-74.us-east-2.compute.internal | wc -l
45
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-217-119.us-east-2.compute.internal | wc -l
48
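
The same per-node tally can be collected in one pass; a minimal sketch, assuming the default "oc get pods -o wide" column layout in which NODE is the seventh column:

./oc get pods -o wide --no-headers | awk '{print $7}' | sort | uniq -c | sort -rn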

oc patch scheduler cluster --type='merge' -p '{"spec":{"profile":"HighNodeUtilization"}}'
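
To confirm the patch took effect, the active profile can be read back, e.g.:

./oc get scheduler cluster -o jsonpath='{.spec.profile}'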

Config file after patching:
============================
    score:
      enabled:
      - name: ImageLocality
        weight: 1
      - name: InterPodAffinity
        weight: 1
      - name: NodeAffinity
        weight: 1
      - name: NodePreferAvoidPods
        weight: 10000
      - name: PodTopologySpread
        weight: 2
      - name: TaintToleration
        weight: 1
      - name: NodeResourcesMostAllocated
        weight: 5
  schedulerName: default-scheduler
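
With NodeResourcesMostAllocated enabled at weight 5 (and NodeResourcesBalancedAllocation / NodeResourcesLeastAllocated removed), the scheduler now favors nodes whose resources are already heavily requested, i.e. it packs pods instead of spreading them. Roughly, per the upstream scoring logic (a sketch, not the exact implementation): node_score ~= 100 * sum over resources of (requested / allocatable) / number of resources, which is then multiplied by the plugin weight (5) when combined with the other score plugins.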

Stats after the first rollout following the profile change:
=====================================================================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-131-24.us-east-2.compute.internal | wc -l
2
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-132-34.us-east-2.compute.internal | wc -l
13
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-140-66.us-east-2.compute.internal | wc -l
2
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-168-224.us-east-2.compute.internal | wc -l
144
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-169-14.us-east-2.compute.internal | wc -l
8
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-211-74.us-east-2.compute.internal | wc -l
3
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-217-119.us-east-2.compute.internal | wc -l
128
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ 

5-node worker cluster:
=========================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                         NOMINATED NODE   READINESS GATES
httpd-6dcffcd64-6hbph     1/1     Running   0          24s   10.131.2.22   ip-10-0-175-84.us-east-2.compute.internal    <none>           <none>
httpd1-96bd5cc7c-gtw65    1/1     Running   0          19s   10.131.2.23   ip-10-0-175-84.us-east-2.compute.internal    <none>           <none>
httpd2-856dcc466b-ddr7j   1/1     Running   0          13s   10.128.2.20   ip-10-0-162-138.us-east-2.compute.internal   <none>           <none>

[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc scale --replicas=100 deployment httpd
deployment.apps/httpd scaled
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc scale --replicas=100 deployment httpd1
deployment.apps/httpd1 scaled
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc scale --replicas=100 deployment httpd2
deployment.apps/httpd2 scaled

Config file before profile alteration:
=======================================
    score:
      enabled:
      - name: NodeResourcesBalancedAllocation
        weight: 1
      - name: ImageLocality
        weight: 1
      - name: InterPodAffinity
        weight: 1
      - name: NodeResourcesLeastAllocated
        weight: 1
      - name: NodeAffinity
        weight: 1
      - name: NodePreferAvoidPods
        weight: 10000
      - name: PodTopologySpread
        weight: 2
      - name: TaintToleration
        weight: 1
  schedulerName: default-scheduler

[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep Running | wc -l
300

Before profile alteration:
===============================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-139-120.us-east-2.compute.internal | wc -l
56
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-158-163.us-east-2.compute.internal | wc -l
55
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-162-138.us-east-2.compute.internal | wc -l
56
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-175-84.us-east-2.compute.internal | wc -l
57
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-201-198.us-east-2.compute.internal | wc -l
76

[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc patch scheduler cluster --type='merge' -p '{"spec":{"profile":"HighNodeUtilization"}}'
scheduler.config.openshift.io/cluster patched

Config file after profile alteration:
=======================================
    score:
      enabled:
      - name: ImageLocality
        weight: 1
      - name: InterPodAffinity
        weight: 1
      - name: NodeAffinity
        weight: 1
      - name: NodePreferAvoidPods
        weight: 10000
      - name: PodTopologySpread
        weight: 2
      - name: TaintToleration
        weight: 1
      - name: NodeResourcesMostAllocated
        weight: 5
  schedulerName: default-scheduler

After profile alteration:
=====================================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-139-120.us-east-2.compute.internal | wc -l
143
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-158-163.us-east-2.compute.internal | wc -l
8
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-162-138.us-east-2.compute.internal | wc -l
21
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-175-84.us-east-2.compute.internal | wc -l
39
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-201-198.us-east-2.compute.internal | wc -l
89
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ 

Since there are significant changes in pod placement after altering the profile, moving the bug to the verified state.

One more observation: after altering the profile and doing a second rollout, the placement is much better (more tightly packed) than after the first rollout.
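
(The rollouts themselves are not shown in the transcript; they were presumably triggered with the standard restart command, e.g. ./oc rollout restart deployment httpd httpd1 httpd2, which recreates every pod so that the new profile decides all placements at once.)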

7-node worker cluster:
===========================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-131-24.us-east-2.compute.internal | wc -l
0
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-132-34.us-east-2.compute.internal | wc -l
9
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-140-66.us-east-2.compute.internal | wc -l
0
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-168-224.us-east-2.compute.internal | wc -l
156
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-169-14.us-east-2.compute.internal | wc -l
0
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-211-74.us-east-2.compute.internal | wc -l
0
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-217-119.us-east-2.compute.internal | wc -l
135

5-node worker cluster:
=============================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-139-120.us-east-2.compute.internal | wc -l
150
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-158-163.us-east-2.compute.internal | wc -l
0
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-162-138.us-east-2.compute.internal | wc -l
0
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-175-84.us-east-2.compute.internal | wc -l
0
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-201-198.us-east-2.compute.internal | wc -l
150
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$

Comment 8 errata-xmlrpc 2022-03-21 12:30:12 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.9.25 bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:0861

