Observed in a CI run that the metrics apiservice was down, causing e2es to time out and fail (because namespace deletion could not proceed). The pods appeared fine, with only the following in the logs:

oc logs -n openshift-monitoring deploy/prometheus-adapter
Found 2 pods, using pod/prometheus-adapter-69bd595d44-5plvk
I0314 21:55:24.335450 1 adapter.go:91] successfully using in-cluster auth
I0314 21:55:25.393188 1 serve.go:96] Serving securely on [::]:6443
E0314 21:56:14.948618 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=131, ErrCode=NO_ERROR, debug=""
E0314 21:56:14.949009 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=131, ErrCode=NO_ERROR, debug=""
E0314 21:57:40.183961 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
E0314 21:57:40.187336 1 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""

The apiservice reported:

v1beta1.metrics.k8s.io   openshift-monitoring/prometheus-adapter   False (FailedDiscoveryCheck)   104m

The run was https://openshift-gce-devel.appspot.com/build/origin-ci-test/logs/release-openshift-ocp-installer-e2e-aws-4.0/78

I've seen similar failures in other e2e runs. It's not clear where the problem resides - the apiserver, the network, or the endpoint itself.
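For context, FailedDiscoveryCheck means the kube-aggregator could not complete API discovery against the service backing the APIService. Below is a rough sketch of reproducing roughly that check from a client using client-go's discovery client; the kubeconfig path and variable names are illustrative, not taken from the CI job:

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; adjust for the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatalf("building client config: %v", err)
	}

	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		log.Fatalf("building discovery client: %v", err)
	}

	// Ask for discovery of the group/version served by the prometheus-adapter
	// APIService; a failure here is what surfaces as FailedDiscoveryCheck.
	resources, err := dc.ServerResourcesForGroupVersion("metrics.k8s.io/v1beta1")
	if err != nil {
		log.Fatalf("discovery against metrics.k8s.io/v1beta1 failed: %v", err)
	}
	for _, r := range resources.APIResources {
		fmt.Println(r.Name)
	}
}

If that call fails while the adapter pods look healthy, the problem is likely somewhere between the aggregator and the adapter's service endpoint rather than in the adapter's startup path.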
It seems that we see similar errors in other components in the OpenShift stack, including Prometheus itself, but some of those components are able to recover from the failure. We need to investigate whether the Prometheus adapter is simply missing retry logic.
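To make the missing "retry logic" concrete: client-go's reflector/informer machinery already re-lists and re-watches on its own whenever a watch connection drops. A minimal sketch of that pattern, assuming a component that only needs to watch pods (illustrative only, not the adapter's actual code):

package main

import (
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// In-cluster auth, as the adapter itself reports using.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("building in-cluster config: %v", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("building clientset: %v", err)
	}

	// A reflector-backed informer: if the watch connection is closed (for
	// example by a GOAWAY from the apiserver), the reflector re-lists and
	// re-establishes the watch instead of giving up.
	lw := cache.NewListWatchFromClient(client.CoreV1().RESTClient(), "pods", corev1.NamespaceAll, fields.Everything())
	_, controller := cache.NewInformer(lw, &corev1.Pod{}, 30*time.Second, cache.ResourceEventHandlerFuncs{})

	stop := make(chan struct{})
	defer close(stop)
	controller.Run(stop) // blocks; list/watch failures are retried internally
}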
Regarding the error in the logs, it seems this was fixed in newer versions of Kubernetes apimachinery. I'd say we should upgrade to v1.14 (as we only use the pod and node APIs, this should be safe to do).
For what it's worth, the log lines come from the list/watch and suggest the apiserver is closing connections unexpectedly, so it doesn't seem to me that these log lines have anything to do with the failure. That said, we're going to go through all of the components involved and update everything to the latest apimachinery code.
Moving to assigned as we're working on updating the Kubernetes dependencies throughout the stack.
Note that GOAWAY isn't an actual error. It's the informational HTTP/2 frame a server sends when it is closing a connection, and ErrCode=NO_ERROR indicates a graceful close, so it isn't indicative of any error state.
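To spell out what a consumer of a raw watch is expected to do when this happens: the GOAWAY simply terminates the stream, the watch's result channel closes, and the client starts a new watch. A rough sketch of that loop, assuming a plain pod watch and the client-go signatures of the 1.14 era (illustrative only, not the adapter's code):

package main

import (
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatalf("building in-cluster config: %v", err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatalf("building clientset: %v", err)
	}

	for {
		// Start (or restart) the watch. A GOAWAY from the apiserver simply
		// terminates the stream; it is not a fatal error for the client.
		w, err := client.CoreV1().Pods("").Watch(metav1.ListOptions{})
		if err != nil {
			log.Printf("starting watch failed, retrying: %v", err)
			time.Sleep(time.Second)
			continue
		}
		for ev := range w.ResultChan() {
			log.Printf("event: %s", ev.Type)
		}
		// The result channel closed (e.g. after a GOAWAY); loop and re-watch.
	}
}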
Tested with payload 4.1.0-0.nightly-2019-04-23-223857; no such issue now.

# oc get apiservice v1beta1.metrics.k8s.io
NAME                     SERVICE                                    AVAILABLE   AGE
v1beta1.metrics.k8s.io   openshift-monitoring/prometheus-adapter   True        28h
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758