Bug 1567743 - [pod_public_851] Openshift-descheduler should have proper loglevel as default installation arguments
Summary: [pod_public_851] Openshift-descheduler should have proper loglevel as default installation arguments
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.10.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 3.10.0
Assignee: Avesh Agarwal
QA Contact: weiwei jiang
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2018-04-16 07:01 UTC by weiwei jiang
Modified: 2018-07-30 19:13 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-07-30 19:13:03 UTC
Target Upstream Version:
Embargoed:




Links
System: Red Hat Product Errata    ID: RHBA-2018:1816    Last Updated: 2018-07-30 19:13:21 UTC

Description weiwei jiang 2018-04-16 07:01:02 UTC
Description of problem:
After openshift-descheduler is installed, checking the descheduler job pod log shows nothing, because no loglevel option is passed to the descheduler (verbosity defaults to 0).
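
For reference, a minimal sketch of the container args the descheduler CronJob needs before its job pod logs anything: an explicit glog-style --v flag. The level below is an assumption; 5 matches the spec that comment 3 later confirms.

        spec:
          containers:
          - args:
            - --policy-config-file=/policy-dir/policy.yaml
            - --v=5      # explicit verbosity; without this the descheduler logs nothing at the default level 0
            - --dry-run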

Version-Release number of the following components:
rpm -q openshift-ansible
latest
rpm -q ansible
ansible --version

How reproducible:

Steps to Reproduce:
1. Setup OCP 3.10
2. Install openshift-descheduler, overriding the necessary parameters:
ansible-playbook -vvv -i qe-inventory-host-file playbooks/openshift-descheduler/config.yml

3.

Actual results:
The descheduler job pod has nothing logged:
[root@ip-172-18-3-170 certificates]# oc logs descheduler-cronjob-1523862000-qxgdr -n openshift-descheduler 
[root@ip-172-18-3-170 certificates]# 

Expected results:
The descheduler job pod should log something with the default installation arguments.

Additional info:
Please attach logs from ansible-playbook with the -vvv flag
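
One simple way to capture that output while reproducing (the tee redirection here is only a suggestion, not part of the original report):

# ansible-playbook -vvv -i qe-inventory-host-file playbooks/openshift-descheduler/config.yml 2>&1 | tee descheduler-install.log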

Comment 3 weiwei jiang 2018-05-17 06:44:43 UTC
Checked on 
# oc version 
oc v3.10.0-0.46.0
kubernetes v1.10.0+b81c8f8
features: Basic-Auth GSSAPI Kerberos SPNEGO

Server https://ip-172-18-14-127.ec2.internal:8443
openshift v3.10.0-0.46.0
kubernetes v1.10.0+b81c8f8

The issue can no longer be reproduced.

CronJob spec info:
        spec:
          containers:
          - args:
            - --policy-config-file=/policy-dir/policy.yaml
            - --v=5
            - --dry-run
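
One way to confirm those args on a live cluster is a jsonpath query (a sketch; it assumes the object is the CronJob named descheduler-cronjob in the openshift-descheduler namespace, as the pod names above suggest):

# oc get cronjob descheduler-cronjob -n openshift-descheduler -o jsonpath='{.spec.jobTemplate.spec.template.spec.containers[0].args}'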


# oc logs -f descheduler-cronjob-1526460000-6zmxk -n openshift-descheduler 
I0516 08:40:09.248704       1 reflector.go:202] Starting reflector *v1.Node (1h0m0s) from github.com/kubernetes-incubator/descheduler/pkg/descheduler/node/node.go:84
I0516 08:40:09.248833       1 reflector.go:240] Listing and watching *v1.Node from github.com/kubernetes-incubator/descheduler/pkg/descheduler/node/node.go:84
I0516 08:40:09.348921       1 duplicates.go:50] Processing node: "ip-172-18-0-241.ec2.internal"
I0516 08:40:09.400252       1 duplicates.go:54] "ReplicationController/hello-1"
I0516 08:40:09.400275       1 duplicates.go:65] Evicted pod: "hello-1-7h7xn" (<nil>)
I0516 08:40:09.400281       1 duplicates.go:65] Evicted pod: "hello-1-fxz7m" (<nil>)
I0516 08:40:09.400286       1 duplicates.go:65] Evicted pod: "hello-1-gp6nr" (<nil>)
I0516 08:40:09.400290       1 duplicates.go:65] Evicted pod: "hello-1-k7wzk" (<nil>)
I0516 08:40:09.400295       1 duplicates.go:65] Evicted pod: "hello-1-ls5zr" (<nil>)
I0516 08:40:09.400299       1 duplicates.go:65] Evicted pod: "hello-1-r78rr" (<nil>)
I0516 08:40:09.400304       1 duplicates.go:50] Processing node: "ip-172-18-14-127.ec2.internal"
I0516 08:40:09.407324       1 duplicates.go:54] "ReplicationController/hello-1"
I0516 08:40:09.407346       1 duplicates.go:65] Evicted pod: "hello-1-ds57x" (<nil>)
I0516 08:40:09.407352       1 duplicates.go:65] Evicted pod: "hello-1-nt9dm" (<nil>)
I0516 08:40:09.407356       1 duplicates.go:65] Evicted pod: "hello-1-rndqq" (<nil>)
I0516 08:40:09.407361       1 duplicates.go:65] Evicted pod: "hello-1-zchpj" (<nil>)
I0516 08:40:09.407365       1 duplicates.go:65] Evicted pod: "hello-1-zm8dp" (<nil>)
I0516 08:40:09.407370       1 duplicates.go:50] Processing node: "ip-172-18-5-71.ec2.internal"
I0516 08:40:09.413700       1 duplicates.go:54] "ReplicationController/hello-1"
I0516 08:40:09.413721       1 duplicates.go:65] Evicted pod: "hello-1-b5pzk" (<nil>)
I0516 08:40:09.413730       1 duplicates.go:65] Evicted pod: "hello-1-jwtwt" (<nil>)
I0516 08:40:09.413736       1 duplicates.go:65] Evicted pod: "hello-1-r9s75" (<nil>)
I0516 08:40:09.413743       1 duplicates.go:65] Evicted pod: "hello-1-tlkp6" (<nil>)
I0516 08:40:09.413749       1 duplicates.go:65] Evicted pod: "hello-1-vhzzf" (<nil>)
I0516 08:40:09.413774       1 duplicates.go:65] Evicted pod: "hello-1-wgswg" (<nil>)
I0516 08:40:09.432356       1 lownodeutilization.go:141] Node "ip-172-18-14-127.ec2.internal" is under utilized with usage: api.ResourceThresholds{"cpu":7.5, "memory":3.8011597496777925, "pods":6.4}
I0516 08:40:09.432400       1 lownodeutilization.go:149] allPods:16, nonRemovablePods:9, bePods:6, bPods:1, gPods:0
I0516 08:40:09.432442       1 lownodeutilization.go:141] Node "ip-172-18-5-71.ec2.internal" is under utilized with usage: api.ResourceThresholds{"memory":9.654945764181592, "pods":5.2, "cpu":10}
I0516 08:40:09.432463       1 lownodeutilization.go:149] allPods:13, nonRemovablePods:5, bePods:7, bPods:1, gPods:0
I0516 08:40:09.432504       1 lownodeutilization.go:141] Node "ip-172-18-0-241.ec2.internal" is under utilized with usage: api.ResourceThresholds{"memory":8.033117604319068, "pods":6, "cpu":7.5}
I0516 08:40:09.432520       1 lownodeutilization.go:149] allPods:15, nonRemovablePods:3, bePods:10, bPods:2, gPods:0
I0516 08:40:09.432525       1 lownodeutilization.go:65] Criteria for a node under utilization: CPU: 40, Mem: 40, Pods: 40
I0516 08:40:09.432532       1 lownodeutilization.go:72] Total number of underutilized nodes: 3
I0516 08:40:09.432537       1 lownodeutilization.go:80] all nodes are underutilized, nothing to do here
I0516 08:40:09.432543       1 pod_antiaffinity.go:45] Processing node: "ip-172-18-0-241.ec2.internal"
I0516 08:40:09.438660       1 pod_antiaffinity.go:45] Processing node: "ip-172-18-14-127.ec2.internal"
I0516 08:40:09.445160       1 pod_antiaffinity.go:45] Processing node: "ip-172-18-5-71.ec2.internal"

Comment 5 errata-xmlrpc 2018-07-30 19:13:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816

