This bug hasn't had any activity in the last 30 days. Maybe the problem got resolved, was a duplicate of something else, or became less pressing for some reason - or maybe it's still relevant but just hasn't been looked at yet. As such, we're marking this bug as "LifecycleStale" and decreasing the severity/priority. If you have further information on the current state of the bug, please update it, otherwise this bug can be closed in about 7 days. The information can be, for example, that the problem still occurs, that you still want the feature, that more information is needed, or that the bug is (for whatever reason) no longer relevant. Additionally, you can add LifecycleFrozen into Keywords if you think this bug should never be marked as stale. Please consult with bug assignee before you do that.
The LifecycleStale keyword was removed because the bug moved to QE.
The bug assignee was notified.
@kewang for verification purposes, the goal is to make sure that the kube-apiserver operator will auto-migrate the storage version of APF resources from v1alpha1 to v1beta1. See https://github.com/openshift/library-go/pull/1091 for the tests I did.
It caused an upgrade failure in https://issues.redhat.com/browse/OCPQE-4308 (see the log pasted in the issue: https://mastern-jenkins-csb-openshift-qe.apps.ocp4.prod.psi.redhat.com/job/upgrade_CI/14962/console), where the network operator degraded during the upgrade with a message about being unable to find v1alpha1 flowcontrol-related resources.
Per https://bugzilla.redhat.com/show_bug.cgi?id=1907211#c2, flowcontrol.apiserver.k8s.io uses v1alpha1 in OCP 4.7. After upgrading from 4.7 to 4.8, confirm that flowcontrol.apiserver.k8s.io uses v1beta1 in 4.8:
$ oc get clusterversion -o json | jq '.items[].status.history'
$ oc get clusterversion
NAME      VERSION                               AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-06-25-182927     True        False         129m    Cluster version is 4.8.0-0.nightly-2021-06-25-182927
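As an aside, `.items` in the clusterversion JSON is an array, so the history path needs an array index before `.status`. A minimal sketch of the jq filter against fabricated sample output (the versions below are stand-in values, not real cluster data):

```shell
# Fabricated stand-in for `oc get clusterversion -o json` output.
cat > /tmp/cv.json <<'EOF'
{"items":[{"status":{"history":[{"version":"4.8.0-0.nightly-2021-06-25-182927","state":"Completed"},{"version":"4.7.16","state":"Completed"}]}}]}
EOF
# .items is an array, so iterate it with .items[] before reaching .status.history
jq -r '.items[].status.history[].version' /tmp/cv.json
```

This prints the newest entry first, so after an upgrade the current build should appear on the first line.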
$ oc debug node/<master node>
sh-4.4# chroot /host
sh-4.4# cd /var/log/pods
sh-4.4# grep -nR 'flowcontrol.apiserver.k8s.io\/v1alpha1' /var/log/pods/openshift-* | grep -v 'debug'
/var/log/pods/openshift-kube-apiserver_kube-apiserver-ip-10-0-55-130.us-east-2.compute.internal_6efa4434-a83c-4627-bb0d-e44eee1eae8d/kube-apiserver/0.log:613:2021-06-28T13:13:41.977032694+00:00 stderr F W0628 13:13:41.976988 19 genericapiserver.go:461] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
/var/log/pods/openshift-kube-apiserver_kube-apiserver-ip-10-0-55-130.us-east-2.compute.internal_6efa4434-a83c-4627-bb0d-e44eee1eae8d/kube-apiserver/1.log:819:2021-06-28T14:28:28.705021313+00:00 stderr F W0628 14:28:28.704965 21 genericapiserver.go:461] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
sh-4.4# grep -nR 'flowcontrol.apiserver.k8s.io\/v1beta1' /var/log/pods/openshift-* | grep -v 'debug' | wc -l
sh-4.4# grep -nR 'flowcontrol.apiserver.k8s.io\/v1beta1' /var/log/pods/openshift-* | grep -v 'debug' | head -1
openshift-kube-apiserver_kube-apiserver-ip-10-0-55-130.us-east-2.compute.internal_6efa4434-a83c-4627-bb0d-e44eee1eae8d/kube-apiserver/1.log:15535:2021-06-28T16:37:06.713311074+00:00 stderr F I0628 16:37:06.713224 21 apiaccess_count_controller.go:130] updating top flowcontrol.apiserver.k8s.io/v1beta1, Resource=prioritylevelconfigurations APIRequest counts
From the above, we can see that flowcontrol.apiserver.k8s.io uses v1beta1 in 4.8 after the upgrade.
Logged into the etcd container and checked whether the objects are now stored in the beta version:
$ oc rsh -n openshift-etcd etcd-ip-10-0-55-130.us-east-2.compute.internal
sh-4.4# etcdctl get /kubernetes.io/flowschemas/catch-all --prefix --print-value-only
DanglingFals���"Found*`This FlowSchema references the PriorityLevelConfiguration object named "catch-all" and it exists�"
The values are stored in protobuf encoding, not JSON, but we can still see that the object is saved with the flowcontrol.apiserver.k8s.io/v1beta1 version. Per https://github.com/openshift/library-go/pull/1091, this is as expected, so moving the bug to VERIFIED.
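For anyone repeating this check: the protobuf envelope is binary, but the group/version string is embedded in it as plain text, so a binary-safe grep on the etcdctl output is enough to confirm the storage version. A sketch against a fabricated stand-in blob (the `printf` bytes below only imitate the envelope; on a real cluster you would pipe `etcdctl get ... --print-value-only` into the grep instead):

```shell
# Fabricated stand-in for an etcd value: binary framing bytes around the
# plain-text group/version and kind strings.
printf 'k8s\000\022$flowcontrol.apiserver.k8s.io/v1beta1\022\012FlowSchema' > /tmp/sample.bin
# -a treats the binary file as text; -o prints only the matched version string
grep -a -o 'flowcontrol.apiserver.k8s.io/v1beta1' /tmp/sample.bin
```

If the object were still stored as v1alpha1, the grep would print nothing and exit non-zero.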
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Moderate: OpenShift Container Platform 4.8.2 bug fix and security update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.