Bug 1683100 - system infrastructure component pods do not set the priorityClassName field correctly
Summary: system infrastructure component pods do not set the priorityClassName field correctly
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 4.1.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.1.0
Assignee: ravig
QA Contact: Jianwei Hou
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-26 09:21 UTC by MinLi
Modified: 2019-06-04 10:44 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:44:39 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 None None None 2019-06-04 10:44:44 UTC

Description MinLi 2019-02-26 09:21:32 UTC
Description of problem:
System infrastructure component pods do not set priorityClassName to system-node-critical or system-cluster-critical; examples include openshift-controller-manager, openshift-machine-api, and sdn-controller.
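For context, the fix in each affected operator amounts to one line in the pod template. A minimal sketch of what the rendered manifest should contain (the Deployment name and image below are illustrative placeholders, not the operator's actual manifest):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller-manager          # illustrative name
  namespace: openshift-controller-manager
spec:
  template:
    spec:
      # The field the affected components were missing:
      priorityClassName: system-node-critical
      containers:
      - name: controller-manager
        image: example.io/controller-manager:latest   # placeholder image
```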

Version-Release number of selected component (if applicable):
clusterversion: 4.0.0-0.nightly-2019-02-25-234632

$ oc version                                
oc v4.0.0-0.182.0
kubernetes v1.12.4+4dd65df23d
features: Basic-Auth GSSAPI Kerberos SPNEGO

How reproducible:
always

Steps to Reproduce:
1. Get the pods in the "openshift-controller-manager" namespace:
#oc get pod -n openshift-controller-manager
NAME                       READY     STATUS    RESTARTS   AGE
controller-manager-9wvh4   1/1       Running   0          77m
controller-manager-dvrcb   1/1       Running   0          76m
controller-manager-tpl74   1/1       Running   0          75m

2. Check the PriorityClassName field of a pod:
#oc describe pod controller-manager-9wvh4 -n openshift-controller-manager | grep -i priority
Priority:           0
PriorityClassName:  <none>

3. Check the PriorityClassName field of other infrastructure component pods, such as those in openshift-machine-api and openshift-sdn (the sdn-controller pod).

Actual results:
2. The PriorityClassName field is "<none>" and the Priority field is "0".

Expected results:
2. The PriorityClassName field is "system-node-critical" and the Priority field is "2000001000".
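For reference, the two built-in critical priority classes map to fixed numeric values defined upstream in Kubernetes, which is where the expected "2000001000" comes from. A minimal sketch of the mapping (the helper function is illustrative, not part of any OpenShift component):

```python
# Built-in Kubernetes priority classes and their fixed numeric values,
# as defined upstream in the Kubernetes scheduling API.
BUILTIN_PRIORITY_CLASSES = {
    "system-node-critical": 2000001000,
    "system-cluster-critical": 2000000000,
}

def expected_priority(priority_class_name):
    """Return the Priority value a pod should report for a given class.

    Pods with no priorityClassName default to priority 0, which is
    exactly what the buggy infrastructure pods were reporting.
    """
    if priority_class_name is None:
        return 0
    return BUILTIN_PRIORITY_CLASSES[priority_class_name]

print(expected_priority(None))                    # buggy pods report 0
print(expected_priority("system-node-critical"))  # expected: 2000001000
```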

Additional info:

Comment 1 Seth Jennings 2019-02-26 16:48:00 UTC
Static pods (i.e. the kube-* control plane components) have a known issue with this at the moment.

However, the openshift-{apiserver|controller-manager} should have these set.

Ravi, could you open PRs against the listed components?

MinLi, is this the full list of components you expect to have critical priorities?  Just want to set expectations so we can get all the changes in this one BZ.

Comment 2 ravig 2019-02-26 18:59:06 UTC
It seems the apiserver is already setting the priorityClass.


#oc describe pod apiserver-2g9r8 -n openshift-apiserver | grep -i priority
Priority:           2000001000
PriorityClassName:  system-node-critical


#oc describe pod kube-apiserver-ip-10-0-131-80.us-east-2.compute.internal -n openshift-kube-apiserver | grep -i priority
Priority:           2000001000
PriorityClassName:  system-node-critical


#oc describe pod openshift-kube-scheduler-ip-10-0-131-80.us-east-2.compute.internal -n openshift-kube-scheduler | grep -i priority
Priority:           2000001000
PriorityClassName:  system-node-critical


#oc describe pods kube-controller-manager-ip-10-0-131-80.us-east-2.compute.internal -n openshift-kube-controller-manager|grep -i priority
Priority:           2000001000
PriorityClassName:  system-node-critical

We are not setting values for openshift-controller-manager.

#oc describe pod controller-manager-7t97k -n openshift-controller-manager | grep -i priority
Priority:           0
PriorityClassName:  <none>

Posted a PR for openshift-controller-manager:

https://github.com/openshift/cluster-openshift-controller-manager-operator/pull/77

Comment 3 MinLi 2019-02-28 07:00:26 UTC
@Seth Jennings, @ravig,

According to my test, the "openshift-machine-api" namespace is not setting the priorityClass, and neither is the sdn-controller-XXX pod in the "openshift-sdn" namespace.

Comment 4 ravig 2019-02-28 09:52:00 UTC
@MinLi,

I have created PRs to ensure those components set the appropriate priority classes. The following PRs have merged:

https://github.com/openshift/cluster-autoscaler-operator/pull/57
https://github.com/openshift/machine-api-operator/pull/230

But for the sdn-controller-XXX pod in the openshift-sdn namespace, Dan thinks the cluster-critical priorityClass is not needed.

https://github.com/openshift/cluster-network-operator/pull/109 - This is the PR I closed, since Dan thinks the critical priorityClasses are not needed there.

Comment 5 MinLi 2019-03-04 03:05:13 UTC
@ravig, OK, I think you have provided a reasonable explanation for why the critical priorityClass was not added to the sdn-controller-XXX pod.
I learned a lot from you, thanks!

Comment 6 Seth Jennings 2019-03-06 19:20:59 UTC
All PRs have merged

Comment 8 MinLi 2019-03-11 03:44:54 UTC
@Seth Jennings, @ravig, the following pods still do not have a PriorityClass set:

# oc describe pod  controller-manager-7hvh6 -n openshift-controller-manager | grep -i priority
Priority:           0
PriorityClassName:  <none>

# oc describe pod clusterapi-manager-controllers-957d78db5-zg225 -n openshift-machine-api | grep -i priority
Priority:           0
PriorityClassName:  <none>

version info:
4.0.0-0.nightly-2019-03-04-234414
oc v4.0.0-0.182.0
kubernetes v1.12.4+4dd65df23d

Comment 11 MinLi 2019-03-14 03:43:20 UTC
Verified!

4.0.0-0.nightly-2019-03-13-233958

Comment 13 errata-xmlrpc 2019-06-04 10:44:39 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

