Bug 1927397 - p&f: add auto update for priority & fairness bootstrap configuration objects [NEEDINFO]
Summary: p&f: add auto update for priority & fairness bootstrap configuration objects
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-apiserver
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 4.8.0
Assignee: Abu Kashem
QA Contact: Ke Wang
Whiteboard: LifecycleReset
Depends On:
Blocks: 1926724 1930005 1956606
Reported: 2021-02-10 16:29 UTC by Abu Kashem
Modified: 2021-07-27 22:44 UTC (History)
6 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of: 1926724
Last Closed: 2021-07-27 22:43:44 UTC
Target Upstream Version:
mfojtik: needinfo?

Attachments

System ID Private Priority Status Summary Last Updated
Github openshift kubernetes pull 736 0 None open Bug 1927397: UPSTREAM: 98028: add auto update for priority & fairness bootstrap configuration objects 2021-05-20 12:26:20 UTC
Red Hat Product Errata RHSA-2021:2438 0 None None None 2021-07-27 22:44:07 UTC

Comment 1 Michal Fojtik 2021-03-12 17:07:20 UTC
This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority.

If you have further information on the current state of the bug, please update it; otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant.

Additionally, you can add LifecycleFrozen into Keywords if you think this bug should never be marked as stale. Please consult with the bug assignee before you do that.

Comment 2 Abu Kashem 2021-05-07 15:30:20 UTC

The upstream PR https://github.com/kubernetes/kubernetes/pull/97428 is close to being approved/merged. Can you please go through the PR description and the changes to come up with the test plan?

We want to get this into 4.8 as early as possible: we have added an exempt FlowSchema for health probes, but these new p&f bootstrap configuration objects won't be created on upgrade. So I'm bumping the priority/severity to urgent.
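For context, the upstream change makes the kube-apiserver maintain its bootstrap FlowSchema/PriorityLevelConfiguration objects: objects carrying the `apf.kubernetes.io/autoupdate-spec: "true"` annotation get their spec reconciled against the compiled-in defaults on apiserver startup. A minimal, self-contained sketch of checking that annotation; the sample JSON below merely stands in for `oc get flowschema probes -o json` on a live cluster, and the annotation key is taken from the upstream PR:

```shell
# Sample object standing in for a bootstrap FlowSchema fetched from a
# live cluster with `oc get flowschema probes -o json`.
cat <<'EOF' > /tmp/fs.json
{
  "metadata": {
    "name": "probes",
    "annotations": {
      "apf.kubernetes.io/autoupdate-spec": "true"
    }
  }
}
EOF

# A bootstrap object maintained by the auto-updater should carry the
# annotation with value "true".
jq -r '.metadata.annotations["apf.kubernetes.io/autoupdate-spec"]' /tmp/fs.json   # prints "true"
```

On a real cluster, the same `jq` filter applied to `oc get flowschema <name> -o json` tells you whether the auto-updater owns that object's spec.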

Comment 3 Michal Fojtik 2021-05-07 16:14:39 UTC
The LifecycleStale keyword was removed because the bug got commented on recently.
The bug assignee was notified.

Comment 4 Ke Wang 2021-05-12 06:26:53 UTC
akashem, I'm a little confused: the PR https://github.com/openshift/kubernetes/pull/736 has not been merged, and for the upstream PR https://github.com/kubernetes/kubernetes/pull/97428, I checked the https://github.com/openshift/kubernetes repo and found no related PR that carries upstream PR 97428. So which PR do I need to test, https://github.com/openshift/kubernetes/pull/736 or https://github.com/kubernetes/kubernetes/pull/97428?

Comment 5 Abu Kashem 2021-05-12 13:00:57 UTC

Oops, I gave you the wrong PR link in my previous comment. Please use the following PR instead:

> add auto update for priority & fairness bootstrap configuration objects
> https://github.com/kubernetes/kubernetes/pull/98028

> I'm a little confused about the PR https://github.com/openshift/kubernetes/pull/736 which has not been merged
That's fine, let's work on a test plan.

We can collaborate on slack if you have any further questions.

Comment 6 Abu Kashem 2021-05-20 12:53:13 UTC
Downgrading to high/high since there is no consensus on the approach yet. We can track it here - https://github.com/kubernetes/kubernetes/pull/102067

Comment 7 Ke Wang 2021-06-03 07:45:29 UTC
This bug's PR is dev-approved but not yet merged, so per issue DPTP-660 I'm doing pre-merge verification (the QE pre-merge verification goal of issue OCPQE-815), using the bot to launch a cluster with the open PR https://github.com/openshift/kubernetes/pull/736. Here are the verification steps:

Upgrade an existing 4.7 cluster to the 4.8 payload with the PR:
$ oc adm upgrade --to-image=registry.build01.ci.openshift.org/ci-ln-30zhd22/release:latest --force=true --allow-explicit-upgrade=true

$ oc get clusterversion
NAME      VERSION                                                  AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.ci.test-2021-06-03-020057-ci-ln-30zhd22-latest   True        False         7m13s   Cluster version is 4.8.0-0.ci.test-2021-06-03-020057-ci-ln-30zhd22-latest

$ oc get clusterversion -o json | jq ".items[0].status.history"
[
  {
    "completionTime": "2021-06-03T07:26:19Z",
    "image": "registry.build01.ci.openshift.org/ci-ln-30zhd22/release:latest",
    "startedTime": "2021-06-03T05:11:03Z",
    "state": "Completed",
    "verified": false,
    "version": "4.8.0-0.ci.test-2021-06-03-020057-ci-ln-30zhd22-latest"
  },
  {
    "completionTime": "2021-06-03T00:55:20Z",
    "image": "registry.ci.openshift.org/ocp/release@sha256:547e974a77fcf2e7c3197283873a4a7a50e32fe735e91a425ccd588f935b4c29",
    "startedTime": "2021-06-03T00:24:34Z",
    "state": "Completed",
    "verified": false,
    "version": "4.7.0-0.nightly-2021-06-02-150742"
  }
]

$ oc get FlowSchema
NAME                                PRIORITYLEVEL                       MATCHINGPRECEDENCE   DISTINGUISHERMETHOD   AGE     MISSINGPL
exempt                              exempt                              1                    <none>                7h12m   False
openshift-apiserver-sar             exempt                              2                    ByUser                7h3m    False
openshift-oauth-apiserver-sar       exempt                              2                    ByUser                6h41m   False
probes                              exempt                              2                    <none>                135m    False
system-leader-election              leader-election                     100                  ByUser                7h12m   False
workload-leader-election            leader-election                     200                  ByUser                7h12m   False
openshift-sdn                       system                              500                  ByUser                7h10m   False
system-nodes                        system                              500                  ByUser                7h12m   False
kube-controller-manager             workload-high                       800                  ByNamespace           7h12m   False
kube-scheduler                      workload-high                       800                  ByNamespace           7h12m   False
kube-system-service-accounts        workload-high                       900                  ByNamespace           7h12m   False
openshift-apiserver                 workload-high                       1000                 ByUser                7h3m    False
openshift-controller-manager        workload-high                       1000                 ByUser                7h11m   False
openshift-oauth-apiserver           workload-high                       1000                 ByUser                6h41m   False
openshift-oauth-server              workload-high                       1000                 ByUser                6h41m   False
openshift-apiserver-operator        openshift-control-plane-operators   2000                 ByUser                7h3m    False
openshift-authentication-operator   openshift-control-plane-operators   2000                 ByUser                6h41m   False
openshift-etcd-operator             openshift-control-plane-operators   2000                 ByUser                7h12m   False
openshift-kube-apiserver-operator   openshift-control-plane-operators   2000                 ByUser                7h7m    False
openshift-monitoring-metrics        workload-high                       2000                 ByUser                7h7m    False
service-accounts                    workload-low                        9000                 ByUser                7h12m   False
global-default                      global-default                      9900                 ByUser                7h12m   False
catch-all                           catch-all                           10000                ByUser                7h12m   False

$ oc get prioritylevelconfiguration workload-low -o jsonpath='{.spec.limited.assuredConcurrencyShares}'

$ oc get prioritylevelconfiguration global-default -o jsonpath='{.spec.limited.assuredConcurrencyShares}'

After the upgrade to 4.8, the PriorityLevelConfiguration values of workload-low and global-default were changed to the expected values, and the new probes FlowSchema in the exempt priority level was added.
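The jsonpath queries above can equivalently be run with jq against a saved object; a minimal sketch on a sample PriorityLevelConfiguration (the object and the shares value below are illustrative stand-ins for `oc get prioritylevelconfiguration workload-low -o json`; the exact expected value depends on the Kubernetes release's bootstrap defaults):

```shell
# Sample object standing in for
# `oc get prioritylevelconfiguration workload-low -o json`.
cat <<'EOF' > /tmp/plc.json
{
  "metadata": { "name": "workload-low" },
  "spec": {
    "type": "Limited",
    "limited": { "assuredConcurrencyShares": 100 }
  }
}
EOF

# jq equivalent of the jsonpath query
# '{.spec.limited.assuredConcurrencyShares}' used above.
jq '.spec.limited.assuredConcurrencyShares' /tmp/plc.json   # prints 100
```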

So the bug is pre-merge verified. After the PR merges, the bot should move the bug to VERIFIED automatically; if that doesn't work, I'll move it manually.

Comment 9 Ke Wang 2021-06-15 06:38:56 UTC
The PR has landed in the 4.8.0-0.nightly-2021-06-14-145150 nightly release, and the bug was verified via pre-merge testing in Comment #7, but the bot apparently did not move it to VERIFIED. Hence I'm setting the appropriate state manually.

Comment 12 errata-xmlrpc 2021-07-27 22:43:44 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

