Description of problem:
Making an initial attempt at standing up a Starter integration cluster of OpenShift 4.2. Upon applying our custom "KubeAPIServerConfig", the API server has become degraded and the kube-apiserver pod is cycling with the following error:

Error: [enable-admission-plugins plugin "autoscaling.openshift.io/ClusterResourceOverride" is unknown, enable-admission-plugins plugin "autoscaling.openshift.io/RunOnceDuration" is unknown]

Version-Release number of selected component (if applicable):
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.2.0-0.nightly-2019-08-29-062233   True        False         4d2h    Error while reconciling 4.2.0-0.nightly-2019-08-29-062233: the cluster operator kube-apiserver is degraded

How reproducible:
Unknown. This is the first attempt at applying our custom config to an OpenShift 4.2 cluster.

Steps to Reproduce:
1. Install an OpenShift 4.2 (nightly) build
2. Apply a custom "KubeAPIServerConfig" that contains either, or both, of these admission plugins:
   "autoscaling.openshift.io/ClusterResourceOverride"
   "autoscaling.openshift.io/RunOnceDuration"
3. Observe that the kube-apiserver becomes degraded

Actual results:
The first kube-apiserver pod attempts to apply the configuration and then begins cycling due to the "unknown" admission plugins. This then causes the kube-apiserver operator to become degraded.

Expected results:
The custom configuration should apply and roll out successfully.

Additional info:
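For reference, a minimal sketch of the override that triggers the failure, assuming the plugins are enabled through the KubeAPIServer operator's `unsupportedConfigOverrides` field (the field layout follows the embedded `kubecontrolplane.config.openshift.io/v1` KubeAPIServerConfig; plugin names are the ones from the error above):

```yaml
# Hypothetical minimal fragment, applied via `oc edit kubeapiserver cluster`
# under spec.unsupportedConfigOverrides:
unsupportedConfigOverrides:
  admission:
    enabledPlugins:
    - autoscaling.openshift.io/ClusterResourceOverride
    - autoscaling.openshift.io/RunOnceDuration
  apiVersion: kubecontrolplane.config.openshift.io/v1
  kind: KubeAPIServerConfig
```

Shortly after this is applied, the new static-pod revision of kube-apiserver begins crash-looping with the "unknown" plugin error shown above.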
These two admission plugins are owned by the scheduler (workloads).
As it turned out, this was indeed a problem in kube-apiserver: the admission registration for the OpenShift admission plugins was wrong.
Verified in 4.2.0-0.nightly-2019-09-08-180038.

Applying `oc edit kubeapiserver`:

unsupportedConfigOverrides:
  admission:
    enabledPlugins:
    - autoscaling.openshift.io/ClusterResourceOverride
    - autoscaling.openshift.io/RunOnceDuration
    pluginConfig:
      autoscaling.openshift.io/ClusterResourceOverride:
        configuration:
          apiVersion: autoscaling.openshift.io/v1
          cpuRequestToLimitPercent: 2
          kind: ClusterResourceOverrideConfig
          limitCPUToMemoryPercent: 200
          memoryRequestToLimitPercent: 50
      autoscaling.openshift.io/RunOnceDuration:
        configuration:
          activeDeadlineSecondsLimit: 3600
          apiVersion: autoscaling.openshift.io/v1
          kind: RunOnceDurationConfig
  apiVersion: kubecontrolplane.config.openshift.io/v1
  kind: KubeAPIServerConfig

The pods roll out again successfully and the cluster operator is healthy:

$ oc get po -l apiserver -n openshift-kube-apiserver
NAME                                                        READY   STATUS    RESTARTS   AGE
kube-apiserver-ip-10-0-135-39.sa-east-1.compute.internal    3/3     Running   0          2m23s
kube-apiserver-ip-10-0-139-165.sa-east-1.compute.internal   3/3     Running   0          6m1s
kube-apiserver-ip-10-0-145-203.sa-east-1.compute.internal   3/3     Running   0          4m14s

$ oc get co kube-apiserver
NAME             VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
kube-apiserver   4.2.0-0.nightly-2019-09-08-180038   True        False         False      23h
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:2922