This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. If you have further information on the current state of the bug, please update it, otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant. Additionally, you can add LifecycleFrozen into Keywords if you think this bug should never be marked as stale. Please consult with bug assignee before you do that.
kewang, the upstream PR https://github.com/kubernetes/kubernetes/pull/97428 is close to being approved/merged. Can you please go through the PR description and the changes to come up with a test plan? We want to get it into 4.8 as early as possible. We have added an exempt FlowSchema for health probes, but these new P&F bootstrap configuration objects won't be created in an upgrade, so I'm bumping the priority/severity to urgent.
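As a starting point for the test plan, here is a hedged sketch of capturing the pre-upgrade P&F bootstrap state on the 4.7 cluster so it can be compared after the upgrade. The output file names are arbitrary, and the "probes" FlowSchema name is an assumption based on this bug's later verification, not something taken from the PR:

# Capture the pre-upgrade priority & fairness bootstrap objects for a later diff.
$ oc get flowschema -o yaml > flowschemas-4.7.yaml
$ oc get prioritylevelconfiguration -o yaml > prioritylevels-4.7.yaml
# The exempt FlowSchema for health probes ("probes" is assumed here) should
# only exist after the fix; on 4.7 this is expected to return NotFound.
$ oc get flowschema probes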
The LifecycleStale keyword was removed because the bug got commented on recently. The bug assignee was notified.
akashem, I'm a little confused. The PR https://github.com/openshift/kubernetes/pull/736 has not been merged, and for the upstream PR https://github.com/kubernetes/kubernetes/pull/97428, I checked the https://github.com/openshift/kubernetes repo and there is no related PR that brings in upstream PR 97428. So which PR should I test: https://github.com/openshift/kubernetes/pull/736 or https://github.com/kubernetes/kubernetes/pull/97428?
kewang, oops, I gave you the wrong PR link in the previous comment. Please use the following PR instead:

> add auto update for priority & fairness bootstrap configuration objects
> https://github.com/kubernetes/kubernetes/pull/98028

> I'm a little confused about the PR https://github.com/openshift/kubernetes/pull/736 which has not been merged

That's fine, let's work on a test plan. We can collaborate on Slack if you have any further questions.
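For context, the PR title says the apiserver will auto-update the P&F bootstrap configuration objects. A rough, hedged way to exercise that once a cluster with the PR is available might be to delete one of the bootstrap FlowSchemas and check whether it comes back; the object name and the wait time below are assumptions, since the reconciliation details depend on the merged code:

# Rough sketch only: check whether the apiserver recreates a deleted bootstrap FlowSchema.
# "probes" and the 2-minute wait are assumptions, not taken from the PR.
$ oc delete flowschema probes
$ sleep 120
$ oc get flowschema probes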
Downgrading to high/high since there is no consensus on the approach yet. We can track it here - https://github.com/kubernetes/kubernetes/pull/102067
This bug's PR is dev-approved but not yet merged, so I'm following issue DPTP-660 to do the pre-merge verification for the QE pre-merge verification goal of issue OCPQE-815, using the bot to launch a cluster with the open PR https://github.com/openshift/kubernetes/pull/736. Here are the verification steps:

Upgrade an existing 4.7 cluster to the 4.8 payload that includes the PR:

$ oc adm upgrade --to-image=registry.build01.ci.openshift.org/ci-ln-30zhd22/release:latest --force=true --allow-explicit-upgrade=true

$ oc get clusterversion
NAME      VERSION                                                   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.ci.test-2021-06-03-020057-ci-ln-30zhd22-latest    True        False         7m13s   Cluster version is 4.8.0-0.ci.test-2021-06-03-020057-ci-ln-30zhd22-latest

$ oc get clusterversion -o json | jq ".items[0].status.history"
[
  {
    "completionTime": "2021-06-03T07:26:19Z",
    "image": "registry.build01.ci.openshift.org/ci-ln-30zhd22/release:latest",
    "startedTime": "2021-06-03T05:11:03Z",
    "state": "Completed",
    "verified": false,
    "version": "4.8.0-0.ci.test-2021-06-03-020057-ci-ln-30zhd22-latest"
  },
  {
    "completionTime": "2021-06-03T00:55:20Z",
    "image": "registry.ci.openshift.org/ocp/release@sha256:547e974a77fcf2e7c3197283873a4a7a50e32fe735e91a425ccd588f935b4c29",
    "startedTime": "2021-06-03T00:24:34Z",
    "state": "Completed",
    "verified": false,
    "version": "4.7.0-0.nightly-2021-06-02-150742"
  }
]

$ oc get FlowSchema
NAME                                PRIORITYLEVEL                       MATCHINGPRECEDENCE   DISTINGUISHERMETHOD   AGE     MISSINGPL
exempt                              exempt                              1                    <none>                7h12m   False
openshift-apiserver-sar             exempt                              2                    ByUser                7h3m    False
openshift-oauth-apiserver-sar       exempt                              2                    ByUser                6h41m   False
probes                              exempt                              2                    <none>                135m    False
system-leader-election              leader-election                     100                  ByUser                7h12m   False
workload-leader-election            leader-election                     200                  ByUser                7h12m   False
openshift-sdn                       system                              500                  ByUser                7h10m   False
system-nodes                        system                              500                  ByUser                7h12m   False
kube-controller-manager             workload-high                       800                  ByNamespace           7h12m   False
kube-scheduler                      workload-high                       800                  ByNamespace           7h12m   False
kube-system-service-accounts        workload-high                       900                  ByNamespace           7h12m   False
openshift-apiserver                 workload-high                       1000                 ByUser                7h3m    False
openshift-controller-manager        workload-high                       1000                 ByUser                7h11m   False
openshift-oauth-apiserver           workload-high                       1000                 ByUser                6h41m   False
openshift-oauth-server              workload-high                       1000                 ByUser                6h41m   False
openshift-apiserver-operator        openshift-control-plane-operators   2000                 ByUser                7h3m    False
openshift-authentication-operator   openshift-control-plane-operators   2000                 ByUser                6h41m   False
openshift-etcd-operator             openshift-control-plane-operators   2000                 ByUser                7h12m   False
openshift-kube-apiserver-operator   openshift-control-plane-operators   2000                 ByUser                7h7m    False
openshift-monitoring-metrics        workload-high                       2000                 ByUser                7h7m    False
service-accounts                    workload-low                        9000                 ByUser                7h12m   False
global-default                      global-default                      9900                 ByUser                7h12m   False
catch-all                           catch-all                           10000                ByUser                7h12m   False

$ oc get prioritylevelconfiguration workload-low -o jsonpath='{.spec.limited.assuredConcurrencyShares}'
100

$ oc get prioritylevelconfiguration global-default -o jsonpath='{.spec.limited.assuredConcurrencyShares}'
20

After the upgrade to 4.8, the assuredConcurrencyShares values of the workload-low and global-default PriorityLevelConfigurations were changed to the expected values, and the new probes FlowSchema bound to the exempt priority level was added. So the bug is pre-merge verified. After the PR gets merged, the bug will be moved to VERIFIED by the bot automatically or, if that doesn't work, by me manually.
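For anyone re-running this pre-merge verification against a newer payload, the expected post-upgrade state above can be checked in one go. This is only a convenience sketch; the expected values (exempt / 100 / 20) are simply the results recorded in this comment:

# Convenience sketch: exits non-zero if any expected post-upgrade value is missing or different.
$ test "$(oc get flowschema probes -o jsonpath='{.spec.priorityLevelConfiguration.name}')" = "exempt" \
  && test "$(oc get prioritylevelconfiguration workload-low -o jsonpath='{.spec.limited.assuredConcurrencyShares}')" = "100" \
  && test "$(oc get prioritylevelconfiguration global-default -o jsonpath='{.spec.limited.assuredConcurrencyShares}')" = "20" \
  && echo "P&F bootstrap objects look as expected"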
The PR has landed in the 4.8.0-0.nightly-2021-06-14-145150 nightly release and the bug was verified pre-merge in Comment #7, but the bot apparently did not move it to VERIFIED. Hence moving it to the appropriate state manually.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2021:2438