Bug 1880914 - [DeScheduler] Suppress messages related to "Pod lacks an eviction annotation and fails the following checks" from descheduler logs, as they make the log file very confusing and clumsy.
Summary: [DeScheduler] Suppress messages related to "Pod lacks an eviction annotation a...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-scheduler
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.6.0
Assignee: Mike Dame
QA Contact: RamaKasturi
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-21 06:34 UTC by RamaKasturi
Modified: 2020-10-27 16:43 UTC
CC List: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 16:42:58 UTC
Target Upstream Version:
Embargoed:


Links:
- GitHub: openshift/cluster-kube-descheduler-operator pull 139 (open): "Bug 1880914: Change default log level to v=2", last updated 2020-09-22 08:10:35 UTC
- Red Hat Product Errata: RHBA-2020:4196, last updated 2020-10-27 16:43:22 UTC

Description RamaKasturi 2020-09-21 06:34:47 UTC
Description of problem:
The messages below should be suppressed from the descheduler logs: they are not useful to the user, and they make the log file clumsy and confusing.
"Pod lacks an eviction annotation and fails the following checks"

Version-Release number of selected component (if applicable):
4.6.0-0.nightly-2020-09-17-195238

How reproducible:
Always

Steps to Reproduce:
1. Install latest 4.6 cluster
2. Add the strategies below to the descheduler (a full custom-resource sketch follows these steps)
- name: "RemoveDuplicates"
  params:
  - name: "excludeOwnerKinds"
    value: "DeploymentConfig"
  - name: "thresholdPriorityClassName"
    value: "priorityclass1"
3. Check the cluster pod logs
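
For reference, a minimal sketch of where these strategies sit in the operator's KubeDescheduler custom resource. The apiVersion, metadata, and interval field are assumptions based on the 4.6-era operator rather than values quoted in this bug; the interval mirrors the --descheduling-interval=60s argument shown in the verification below.

apiVersion: operator.openshift.io/v1beta1  # assumed API version for the 4.6-era operator
kind: KubeDescheduler
metadata:
  name: cluster  # matches the descheduler pod name prefix seen in the verification
  namespace: openshift-kube-descheduler-operator  # assumed operator namespace
spec:
  deschedulingIntervalSeconds: 60  # mirrors --descheduling-interval=60s below
  strategies:
  - name: "RemoveDuplicates"
    params:
    - name: "excludeOwnerKinds"
      value: "DeploymentConfig"
    - name: "thresholdPriorityClassName"
      value: "priorityclass1"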

Actual Results:
Many messages like the one below appear in the descheduler logs; they are not useful to the customer or user:
Pod lacks an eviction annotation and fails the following checks

Expected Results:
The messages that are not useful to the customer are suppressed.

Comment 2 RamaKasturi 2020-09-23 12:48:39 UTC
Verified the bug with the payload below: the default log level has been changed to --v=2, and the messages mentioned in the description no longer appear.
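
For context, the descheduler logs through klog (visible in the "I0923 ... duplicates.go:73]" line format below), where each message carries a verbosity level and is only emitted when the process-wide threshold set via --v is at or above that level. A minimal sketch of that gating, assuming the noisy eviction-check message is logged at a verbosity higher than 2 (the exact level in the descheduler source is not quoted in this bug):

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// klog registers its flags (including -v) on the default FlagSet.
	klog.InitFlags(nil)
	flag.Set("v", "2") // mirror the operator's new default, --v=2
	flag.Parse()
	defer klog.Flush()

	// Verbosity 0: emitted at any threshold, like the "Processing node" lines below.
	klog.Info("Processing node")

	// Only emitted when the threshold is >= 4, so suppressed at --v=2.
	// The level 4 here is an assumption for illustration.
	klog.V(4).Info("Pod lacks an eviction annotation and fails the following checks")
}

Lowering the default to --v=2 therefore filters out the high-verbosity messages without losing the per-strategy log lines.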

[ramakasturinarra@dhcp35-60 openshift-client-linux-4.6.0-0.nightly-2020-09-23-022756]$ oc get csv
NAME                                                   DISPLAY                     VERSION                 REPLACES   PHASE
clusterkubedescheduleroperator.4.6.0-202009230045.p0   Kube Descheduler Operator   4.6.0-202009230045.p0              Succeeded
[ramakasturinarra@dhcp35-60 openshift-client-linux-4.6.0-0.nightly-2020-09-23-022756]$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-09-23-022756   True        False         73m     Cluster version is 4.6.0-0.nightly-2020-09-23-022756

Containers:
  openshift-descheduler:
    Container ID:  cri-o://e92256b4d30884f18829cd9eea271d297b338e302f886b0b814826e2f12fde89
    Image:         registry.redhat.io/openshift4/ose-descheduler@sha256:8f1543cbe08332ee9bab44c70b6e71868caf39b3bfe72cac517b7c8fe1e3a6aa
    Image ID:      registry.redhat.io/openshift4/ose-descheduler@sha256:8f1543cbe08332ee9bab44c70b6e71868caf39b3bfe72cac517b7c8fe1e3a6aa
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/descheduler
    Args:
      --policy-config-file=/policy-dir/policy.yaml
      --v=2
      --descheduling-interval=60s
    State:          Running
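
The --v=2 argument above reflects the change from the linked pull request. One way to spot-check it directly (deployment name inferred from the pod name below; namespace assumed):

oc -n openshift-kube-descheduler-operator get deployment cluster \
  -o jsonpath='{.spec.template.spec.containers[0].args}'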

Logs:
==============================
[ramakasturinarra@dhcp35-60 openshift-client-linux-4.6.0-0.nightly-2020-09-23-022756]$ oc logs -f cluster-5df7cdb9c-tt9k4
I0923 12:39:31.559889       1 node.go:45] node lister returned empty list, now fetch directly
I0923 12:39:31.646674       1 duplicates.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:39:31.947671       1 duplicates.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:39:32.144076       1 duplicates.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:39:32.173448       1 duplicates.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:39:32.258174       1 duplicates.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:39:32.350683       1 duplicates.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:39:33.349969       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-166-167.ap-northeast-1.compute.internal" usage=map[cpu:43.6 memory:41.74923845555136 pods:7.2]
I0923 12:39:33.350047       1 lownodeutilization.go:203] "Node is appropriately utilized" node="ip-10-0-176-39.ap-northeast-1.compute.internal" usage=map[cpu:48.97142857142857 memory:33.38481284817499 pods:8.4]
I0923 12:39:33.350111       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-201-214.ap-northeast-1.compute.internal" usage=map[cpu:57.2 memory:66.75468003324949 pods:10]
I0923 12:39:33.350191       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-222-103.ap-northeast-1.compute.internal" usage=map[cpu:61.42857142857143 memory:45.906683623366675 pods:20.8]
I0923 12:39:33.350257       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-130-57.ap-northeast-1.compute.internal" usage=map[cpu:54.142857142857146 memory:40.41829106780817 pods:13.6]
I0923 12:39:33.350311       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-148-82.ap-northeast-1.compute.internal" usage=map[cpu:48.733333333333334 memory:62.175271047230375 pods:8]
I0923 12:39:33.350324       1 lownodeutilization.go:101] Criteria for a node under utilization: CPU: 10, Mem: 20, Pods: 30
I0923 12:39:33.350332       1 lownodeutilization.go:105] No node is underutilized, nothing to do here, you might tune your thresholds further
I0923 12:39:33.350346       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:39:33.353858       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:39:33.360711       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:39:33.365929       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:39:33.369184       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:39:33.372459       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:40:33.379231       1 node.go:45] node lister returned empty list, now fetch directly
I0923 12:40:33.389877       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:40:33.395179       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:40:33.398376       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:40:33.401571       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:40:33.408354       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:40:33.413440       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:40:33.417571       1 duplicates.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:40:33.454463       1 duplicates.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:40:33.645361       1 duplicates.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:40:33.673401       1 duplicates.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:40:33.845879       1 duplicates.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:40:33.946312       1 duplicates.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:40:35.347056       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-130-57.ap-northeast-1.compute.internal" usage=map[cpu:54.142857142857146 memory:40.41829106780817 pods:13.6]
I0923 12:40:35.347144       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-148-82.ap-northeast-1.compute.internal" usage=map[cpu:48.733333333333334 memory:62.175271047230375 pods:8]
I0923 12:40:35.347200       1 lownodeutilization.go:203] "Node is appropriately utilized" node="ip-10-0-166-167.ap-northeast-1.compute.internal" usage=map[cpu:36.93333333333333 memory:34.21327483541573 pods:6.8]
I0923 12:40:35.347268       1 lownodeutilization.go:203] "Node is appropriately utilized" node="ip-10-0-176-39.ap-northeast-1.compute.internal" usage=map[cpu:48.97142857142857 memory:33.38481284817499 pods:8.4]
I0923 12:40:35.347346       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-201-214.ap-northeast-1.compute.internal" usage=map[cpu:57.2 memory:66.75468003324949 pods:10]
I0923 12:40:35.347462       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-222-103.ap-northeast-1.compute.internal" usage=map[cpu:61.42857142857143 memory:45.906683623366675 pods:20.8]
I0923 12:40:35.347473       1 lownodeutilization.go:101] Criteria for a node under utilization: CPU: 10, Mem: 20, Pods: 30
I0923 12:40:35.347480       1 lownodeutilization.go:105] No node is underutilized, nothing to do here, you might tune your thresholds further
I0923 12:41:35.347627       1 node.go:45] node lister returned empty list, now fetch directly
I0923 12:41:35.357613       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:41:35.360903       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:41:35.364096       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:41:35.376872       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:41:35.382053       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:41:35.385223       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:41:35.388367       1 duplicates.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:41:35.448373       1 duplicates.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:41:35.549768       1 duplicates.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:41:35.645270       1 duplicates.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:41:35.746484       1 duplicates.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:41:35.945483       1 duplicates.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:41:37.247394       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-130-57.ap-northeast-1.compute.internal" usage=map[cpu:54.142857142857146 memory:40.41829106780817 pods:13.6]
I0923 12:41:37.247478       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-148-82.ap-northeast-1.compute.internal" usage=map[cpu:48.733333333333334 memory:62.175271047230375 pods:8]
I0923 12:41:37.247526       1 lownodeutilization.go:203] "Node is appropriately utilized" node="ip-10-0-166-167.ap-northeast-1.compute.internal" usage=map[cpu:36.93333333333333 memory:34.21327483541573 pods:6.8]
I0923 12:41:37.247586       1 lownodeutilization.go:203] "Node is appropriately utilized" node="ip-10-0-176-39.ap-northeast-1.compute.internal" usage=map[cpu:48.97142857142857 memory:33.38481284817499 pods:8.4]
I0923 12:41:37.247660       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-201-214.ap-northeast-1.compute.internal" usage=map[cpu:57.2 memory:66.75468003324949 pods:10]
I0923 12:41:37.247743       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-222-103.ap-northeast-1.compute.internal" usage=map[cpu:61.42857142857143 memory:45.906683623366675 pods:20.8]
I0923 12:41:37.247754       1 lownodeutilization.go:101] Criteria for a node under utilization: CPU: 10, Mem: 20, Pods: 30
I0923 12:41:37.247762       1 lownodeutilization.go:105] No node is underutilized, nothing to do here, you might tune your thresholds further
I0923 12:42:37.247879       1 node.go:45] node lister returned empty list, now fetch directly
I0923 12:42:37.256586       1 duplicates.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:42:37.350064       1 duplicates.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:42:37.450405       1 duplicates.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:42:37.646435       1 duplicates.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:42:37.745516       1 duplicates.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:42:37.849549       1 duplicates.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:42:38.848322       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-130-57.ap-northeast-1.compute.internal" usage=map[cpu:54.142857142857146 memory:40.41829106780817 pods:13.6]
I0923 12:42:38.848423       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-148-82.ap-northeast-1.compute.internal" usage=map[cpu:48.733333333333334 memory:62.175271047230375 pods:8]
I0923 12:42:38.848497       1 lownodeutilization.go:203] "Node is appropriately utilized" node="ip-10-0-166-167.ap-northeast-1.compute.internal" usage=map[cpu:36.93333333333333 memory:34.21327483541573 pods:6.8]
I0923 12:42:38.848589       1 lownodeutilization.go:203] "Node is appropriately utilized" node="ip-10-0-176-39.ap-northeast-1.compute.internal" usage=map[cpu:48.97142857142857 memory:33.38481284817499 pods:8.4]
I0923 12:42:38.848707       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-201-214.ap-northeast-1.compute.internal" usage=map[cpu:57.2 memory:66.75468003324949 pods:10]
I0923 12:42:38.848891       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-222-103.ap-northeast-1.compute.internal" usage=map[cpu:61.42857142857143 memory:45.906683623366675 pods:20.8]
I0923 12:42:38.848900       1 lownodeutilization.go:101] Criteria for a node under utilization: CPU: 10, Mem: 20, Pods: 30
I0923 12:42:38.848907       1 lownodeutilization.go:105] No node is underutilized, nothing to do here, you might tune your thresholds further
I0923 12:42:38.848929       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:42:38.943991       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:42:38.948062       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:42:38.954911       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:42:38.960210       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:42:38.963549       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:43:39.051688       1 node.go:45] node lister returned empty list, now fetch directly
I0923 12:43:39.650676       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-130-57.ap-northeast-1.compute.internal" usage=map[cpu:54.142857142857146 memory:40.41829106780817 pods:13.6]
I0923 12:43:39.650761       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-148-82.ap-northeast-1.compute.internal" usage=map[cpu:48.733333333333334 memory:62.175271047230375 pods:8]
I0923 12:43:39.650812       1 lownodeutilization.go:203] "Node is appropriately utilized" node="ip-10-0-166-167.ap-northeast-1.compute.internal" usage=map[cpu:36.93333333333333 memory:34.21327483541573 pods:6.8]
I0923 12:43:39.650873       1 lownodeutilization.go:203] "Node is appropriately utilized" node="ip-10-0-176-39.ap-northeast-1.compute.internal" usage=map[cpu:48.97142857142857 memory:33.38481284817499 pods:8.4]
I0923 12:43:39.650932       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-201-214.ap-northeast-1.compute.internal" usage=map[cpu:57.2 memory:66.75468003324949 pods:10]
I0923 12:43:39.651026       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-222-103.ap-northeast-1.compute.internal" usage=map[cpu:61.42857142857143 memory:45.906683623366675 pods:20.8]
I0923 12:43:39.651037       1 lownodeutilization.go:101] Criteria for a node under utilization: CPU: 10, Mem: 20, Pods: 30
I0923 12:43:39.651045       1 lownodeutilization.go:105] No node is underutilized, nothing to do here, you might tune your thresholds further
I0923 12:43:39.651061       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:43:39.668412       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:43:39.677752       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:43:39.697763       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:43:39.708778       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:43:39.716694       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:43:39.731775       1 duplicates.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:43:39.950362       1 duplicates.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:43:40.149067       1 duplicates.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:43:40.282504       1 duplicates.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:43:40.549922       1 duplicates.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:43:40.749387       1 duplicates.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:44:41.044796       1 node.go:45] node lister returned empty list, now fetch directly
I0923 12:44:41.749780       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-222-103.ap-northeast-1.compute.internal" usage=map[cpu:61.42857142857143 memory:45.906683623366675 pods:20.8]
I0923 12:44:41.749891       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-130-57.ap-northeast-1.compute.internal" usage=map[cpu:54.142857142857146 memory:40.41829106780817 pods:13.6]
I0923 12:44:41.749957       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-148-82.ap-northeast-1.compute.internal" usage=map[cpu:48.733333333333334 memory:62.175271047230375 pods:8]
I0923 12:44:41.750003       1 lownodeutilization.go:203] "Node is appropriately utilized" node="ip-10-0-166-167.ap-northeast-1.compute.internal" usage=map[cpu:36.93333333333333 memory:34.21327483541573 pods:6.8]
I0923 12:44:41.750066       1 lownodeutilization.go:203] "Node is appropriately utilized" node="ip-10-0-176-39.ap-northeast-1.compute.internal" usage=map[cpu:48.97142857142857 memory:33.38481284817499 pods:8.4]
I0923 12:44:41.750136       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-201-214.ap-northeast-1.compute.internal" usage=map[cpu:57.2 memory:66.75468003324949 pods:10]
I0923 12:44:41.750142       1 lownodeutilization.go:101] Criteria for a node under utilization: CPU: 10, Mem: 20, Pods: 30
I0923 12:44:41.750151       1 lownodeutilization.go:105] No node is underutilized, nothing to do here, you might tune your thresholds further
I0923 12:44:41.750162       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:44:41.844373       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:44:41.852321       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:44:41.855686       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:44:41.859270       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:44:41.866750       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:44:41.871874       1 duplicates.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:44:41.949222       1 duplicates.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:44:42.074590       1 duplicates.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:44:42.275878       1 duplicates.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:44:42.488497       1 duplicates.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:44:42.747211       1 duplicates.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:45:43.048733       1 node.go:45] node lister returned empty list, now fetch directly
I0923 12:45:43.061856       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:45:43.084639       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:45:43.088865       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:45:43.092659       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:45:43.101122       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:45:43.108116       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:45:43.112637       1 duplicates.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:45:43.243763       1 duplicates.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:45:43.347043       1 duplicates.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:45:43.445940       1 duplicates.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:45:43.548910       1 duplicates.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:45:43.650665       1 duplicates.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:45:45.044194       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-130-57.ap-northeast-1.compute.internal" usage=map[cpu:54.142857142857146 memory:40.41829106780817 pods:13.6]
I0923 12:45:45.044290       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-148-82.ap-northeast-1.compute.internal" usage=map[cpu:48.733333333333334 memory:62.175271047230375 pods:8]
I0923 12:45:45.044342       1 lownodeutilization.go:203] "Node is appropriately utilized" node="ip-10-0-166-167.ap-northeast-1.compute.internal" usage=map[cpu:36.93333333333333 memory:34.21327483541573 pods:6.8]
I0923 12:45:45.044412       1 lownodeutilization.go:203] "Node is appropriately utilized" node="ip-10-0-176-39.ap-northeast-1.compute.internal" usage=map[cpu:48.97142857142857 memory:33.38481284817499 pods:8.4]
I0923 12:45:45.044482       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-201-214.ap-northeast-1.compute.internal" usage=map[cpu:57.2 memory:66.75468003324949 pods:10]
I0923 12:45:45.044612       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-222-103.ap-northeast-1.compute.internal" usage=map[cpu:61.42857142857143 memory:45.906683623366675 pods:20.8]
I0923 12:45:45.044626       1 lownodeutilization.go:101] Criteria for a node under utilization: CPU: 10, Mem: 20, Pods: 30
I0923 12:45:45.044634       1 lownodeutilization.go:105] No node is underutilized, nothing to do here, you might tune your thresholds further
I0923 12:46:45.044766       1 node.go:45] node lister returned empty list, now fetch directly
I0923 12:46:45.053326       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:46:45.060045       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:46:45.066040       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:46:45.069230       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:46:45.072335       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:46:45.078926       1 toomanyrestarts.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:46:45.083914       1 duplicates.go:73] "Processing node" node="ip-10-0-130-57.ap-northeast-1.compute.internal"
I0923 12:46:45.147848       1 duplicates.go:73] "Processing node" node="ip-10-0-148-82.ap-northeast-1.compute.internal"
I0923 12:46:45.245742       1 duplicates.go:73] "Processing node" node="ip-10-0-166-167.ap-northeast-1.compute.internal"
I0923 12:46:45.277240       1 duplicates.go:73] "Processing node" node="ip-10-0-176-39.ap-northeast-1.compute.internal"
I0923 12:46:45.350885       1 duplicates.go:73] "Processing node" node="ip-10-0-201-214.ap-northeast-1.compute.internal"
I0923 12:46:45.545992       1 duplicates.go:73] "Processing node" node="ip-10-0-222-103.ap-northeast-1.compute.internal"
I0923 12:46:47.047110       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-130-57.ap-northeast-1.compute.internal" usage=map[cpu:54.142857142857146 memory:40.41829106780817 pods:13.6]
I0923 12:46:47.047195       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-148-82.ap-northeast-1.compute.internal" usage=map[cpu:48.733333333333334 memory:62.175271047230375 pods:8]
I0923 12:46:47.047251       1 lownodeutilization.go:203] "Node is appropriately utilized" node="ip-10-0-166-167.ap-northeast-1.compute.internal" usage=map[cpu:36.93333333333333 memory:34.21327483541573 pods:6.8]
I0923 12:46:47.047307       1 lownodeutilization.go:203] "Node is appropriately utilized" node="ip-10-0-176-39.ap-northeast-1.compute.internal" usage=map[cpu:48.97142857142857 memory:33.38481284817499 pods:8.4]
I0923 12:46:47.047369       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-201-214.ap-northeast-1.compute.internal" usage=map[cpu:57.2 memory:66.75468003324949 pods:10]
I0923 12:46:47.047452       1 lownodeutilization.go:200] "Node is overutilized" node="ip-10-0-222-103.ap-northeast-1.compute.internal" usage=map[cpu:61.42857142857143 memory:45.906683623366675 pods:20.8]
I0923 12:46:47.047464       1 lownodeutilization.go:101] Criteria for a node under utilization: CPU: 10, Mem: 20, Pods: 30
I0923 12:46:47.047473       1 lownodeutilization.go:105] No node is underutilized, nothing to do here, you might tune your thresholds further

Based on the above, moving the bug to VERIFIED.

Comment 5 errata-xmlrpc 2020-10-27 16:42:58 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

