Bug 2026109

Summary: Altering the scheduler profile configuration doesn't affect the placement of the pods
Product: OpenShift Container Platform      Reporter: Mike Dame <mdame>
Component: kube-scheduler                  Assignee: Jan Chaloupka <jchaloup>
Status: CLOSED ERRATA                      QA Contact: RamaKasturi <knarra>
Severity: medium                           Docs Contact:
Priority: medium
Version: 4.9                               CC: aos-bugs, knarra, mfojtik, yhe
Target Milestone: ---
Target Release: 4.9.z
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:                          Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 2002300
Clones: 2026110                            Environment:
Last Closed: 2022-03-21 12:30:12 UTC       Type: ---
Bug Depends On: 2002300    
Bug Blocks: 2026110, 2026111    

Comment 1 Michal Fojtik 2021-12-25 05:22:32 UTC
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason, or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority.

If you have further information on the current state of the bug, please update it; otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant. Additionally, you can add LifecycleFrozen to the Whiteboard if you think this bug should never be marked as stale. Please consult with the bug assignee before you do that.

Comment 6 RamaKasturi 2022-03-15 16:39:48 UTC
Verified with the build below, and I see that the fix works fine. Since the changes are not visible in a cluster with three worker nodes, I ran the test on clusters with 5 and 7 worker nodes; the results are below.

[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.9.0-0.nightly-2022-03-14-141506   True        False         169m    Cluster version is 4.9.0-0.nightly-2022-03-14-141506


7 node worker cluster:
======================================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                        NOMINATED NODE   READINESS GATES
httpd-6dcffcd64-btpr5     1/1     Running   0          17s   10.129.4.27   ip-10-0-140-66.us-east-2.compute.internal   <none>           <none>
httpd1-96bd5cc7c-t2k5r    1/1     Running   0          12s   10.129.4.28   ip-10-0-140-66.us-east-2.compute.internal   <none>           <none>
httpd2-856dcc466b-6n75b   1/1     Running   0          6s    10.129.4.29   ip-10-0-140-66.us-east-2.compute.internal   <none>           <none>
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc scale --replicas=100 deployment httpd
deployment.apps/httpd scaled
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc scale --replicas=100 deployment httpd1
deployment.apps/httpd1 scaled
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc scale --replicas=100 deployment httpd2
deployment.apps/httpd2 scaled

Config before adding the profile:
=========================================
    score:
      enabled:
      - name: NodeResourcesBalancedAllocation
        weight: 1
      - name: ImageLocality
        weight: 1
      - name: InterPodAffinity
        weight: 1
      - name: NodeResourcesLeastAllocated
        weight: 1
      - name: NodeAffinity
        weight: 1
      - name: NodePreferAvoidPods
        weight: 10000
      - name: PodTopologySpread
        weight: 2
      - name: TaintToleration
        weight: 1
  schedulerName: default-scheduler
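
For reference, config snippets like the one above can be dumped from the live cluster. A minimal sketch, assuming (as is usual on these clusters) that the operator publishes the rendered scheduler config in a ConfigMap named "config" in the openshift-kube-scheduler namespace under a config.yaml key:

# Sketch: dump the rendered kube-scheduler config (ConfigMap name and key are assumptions)
oc get configmap config -n openshift-kube-scheduler -o jsonpath='{.data.config\.yaml}'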

Before adding the profile:
==================================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-131-24.us-east-2.compute.internal | wc -l
37
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-132-34.us-east-2.compute.internal | wc -l
37
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-140-66.us-east-2.compute.internal | wc -l
39
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-168-224.us-east-2.compute.internal | wc -l
48
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-169-14.us-east-2.compute.internal | wc -l
46
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-211-74.us-east-2.compute.internal | wc -l
45
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-217-119.us-east-2.compute.internal | wc -l
48

oc patch scheduler cluster --type='merge' -p '{"spec":{"profile":"HighNodeUtilization"}}'
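
The same change can also be expressed declaratively against the Scheduler CR. A minimal sketch equivalent to the patch above (the CR name "cluster" and the profile value come from the patch; the rest is standard boilerplate for this resource):

# Sketch: set the scheduler profile via oc apply instead of oc patch
cat <<'EOF' | oc apply -f -
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  name: cluster
spec:
  profile: HighNodeUtilization
EOF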

Config file after patching:
============================
    score:
      enabled:
      - name: ImageLocality
        weight: 1
      - name: InterPodAffinity
        weight: 1
      - name: NodeAffinity
        weight: 1
      - name: NodePreferAvoidPods
        weight: 10000
      - name: PodTopologySpread
        weight: 2
      - name: TaintToleration
        weight: 1
      - name: NodeResourcesMostAllocated
        weight: 5
  schedulerName: default-scheduler

Stats I see after the first rollout following the profile change:
=====================================================================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-131-24.us-east-2.compute.internal | wc -l
2
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-132-34.us-east-2.compute.internal | wc -l
13
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-140-66.us-east-2.compute.internal | wc -l
2
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-168-224.us-east-2.compute.internal | wc -l
144
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-169-14.us-east-2.compute.internal | wc -l
8
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-211-74.us-east-2.compute.internal | wc -l
3
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-217-119.us-east-2.compute.internal | wc -l
128
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ 
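
The per-node counts above were collected one node at a time; a shorter sketch that prints the same distribution in one pass, assuming the NODE column is the 7th field of "oc get pods -o wide":

# Sketch: count pods per node in a single command
oc get pods -o wide --no-headers | awk '{print $7}' | sort | uniq -c | sort -rn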

5 node worker cluster:
=========================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE                                         NOMINATED NODE   READINESS GATES
httpd-6dcffcd64-6hbph     1/1     Running   0          24s   10.131.2.22   ip-10-0-175-84.us-east-2.compute.internal    <none>           <none>
httpd1-96bd5cc7c-gtw65    1/1     Running   0          19s   10.131.2.23   ip-10-0-175-84.us-east-2.compute.internal    <none>           <none>
httpd2-856dcc466b-ddr7j   1/1     Running   0          13s   10.128.2.20   ip-10-0-162-138.us-east-2.compute.internal   <none>           <none>

[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc scale --replicas=100 deployment httpd
deployment.apps/httpd scaled
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc scale --replicas=100 deployment httpd1
deployment.apps/httpd1 scaled
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc scale --replicas=100 deployment httpd2
deployment.apps/httpd2 scaled

Config file before profile alteration:
=======================================
    score:
      enabled:
      - name: NodeResourcesBalancedAllocation
        weight: 1
      - name: ImageLocality
        weight: 1
      - name: InterPodAffinity
        weight: 1
      - name: NodeResourcesLeastAllocated
        weight: 1
      - name: NodeAffinity
        weight: 1
      - name: NodePreferAvoidPods
        weight: 10000
      - name: PodTopologySpread
        weight: 2
      - name: TaintToleration
        weight: 1
  schedulerName: default-scheduler

[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep Running | wc -l
300
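
The Running count confirms that all 300 replicas (3 x 100) were scheduled before the per-node numbers were taken. An alternative sketch is to wait for each deployment to report Available first (the timeout value here is arbitrary):

# Sketch: block until the deployment's Available condition is true; repeat for httpd1 and httpd2
oc wait --for=condition=Available deployment/httpd --timeout=300s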

Before profile alteration:
===============================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-139-120.us-east-2.compute.internal | wc -l
56
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-158-163.us-east-2.compute.internal | wc -l
55
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-162-138.us-east-2.compute.internal | wc -l
56
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-175-84.us-east-2.compute.internal | wc -l
57
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-201-198.us-east-2.compute.internal | wc -l
76

[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc patch scheduler cluster --type='merge' -p '{"spec":{"profile":"HighNodeUtilization"}}'
scheduler.config.openshift.io/cluster patched

Config file after profile alteration:
=======================================
    score:
      enabled:
      - name: ImageLocality
        weight: 1
      - name: InterPodAffinity
        weight: 1
      - name: NodeAffinity
        weight: 1
      - name: NodePreferAvoidPods
        weight: 10000
      - name: PodTopologySpread
        weight: 2
      - name: TaintToleration
        weight: 1
      - name: NodeResourcesMostAllocated
        weight: 5
  schedulerName: default-scheduler

After profile alteration:
=====================================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-139-120.us-east-2.compute.internal | wc -l
143
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-158-163.us-east-2.compute.internal | wc -l
8
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-162-138.us-east-2.compute.internal | wc -l
21
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-175-84.us-east-2.compute.internal | wc -l
39
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-201-198.us-east-2.compute.internal | wc -l
89
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ 

Since there are significant changes in pod placement after altering the profile, moving the bug to verified state.

One more observation I made: after altering the profile and doing a second rollout, the placement is much better than the first time, with the pods consolidated onto far fewer nodes.
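
A sketch of one way to trigger such a rollout, assuming the same three deployments as above (the exact method used for the re-rollout isn't recorded here):

# Sketch: restart the deployments so every pod is rescheduled under the new profile
oc rollout restart deployment httpd
oc rollout restart deployment httpd1
oc rollout restart deployment httpd2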

7 node worker cluster:
===========================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-131-24.us-east-2.compute.internal | wc -l
0
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-132-34.us-east-2.compute.internal | wc -l
9
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-140-66.us-east-2.compute.internal | wc -l
0
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-168-224.us-east-2.compute.internal | wc -l
156
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-169-14.us-east-2.compute.internal | wc -l
0
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-211-74.us-east-2.compute.internal | wc -l
0
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide  | grep ip-10-0-217-119.us-east-2.compute.internal | wc -l
135

5 node worker cluster:
=============================
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-139-120.us-east-2.compute.internal | wc -l
150
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-158-163.us-east-2.compute.internal | wc -l
0
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-162-138.us-east-2.compute.internal | wc -l
0
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-175-84.us-east-2.compute.internal | wc -l
0
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$ ./oc get pods -o wide | grep ip-10-0-201-198.us-east-2.compute.internal | wc -l
150
[knarra@knarra openshift-client-linux-4.9.0-0.nightly-2022-03-15-055944]$

Comment 8 errata-xmlrpc 2022-03-21 12:30:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.9.25 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:0861