Bug 1937916 - p&f: probes should not get 429s
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-apiserver
Version: 4.7
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.8.z
Assignee: Abu Kashem
QA Contact: Ke Wang
URL:
Whiteboard:
Duplicates: 1939732 (view as bug list)
Depends On: 1948703
Blocks: 1939537
 
Reported: 2021-03-11 17:53 UTC by Steve Kuznetsov
Modified: 2021-11-23 11:33 UTC (History)
6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1939537 1948703 (view as bug list)
Environment:
Last Closed: 2021-11-23 11:33:27 UTC
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-kube-apiserver-operator pull 1060 0 None open bug 1937916: add a flowschema to ensure that probes never get 429s 2021-03-16 15:22:39 UTC
Red Hat Product Errata RHBA-2021:4716 0 None None None 2021-11-23 11:33:50 UTC

Description Steve Kuznetsov 2021-03-11 17:53:57 UTC
Messages like

Get "https://10.0.175.171:17697/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

appear to be produced by the kubelet even when the pod is able to communicate with itself. This causes outages of system services, as they are not deemed healthy:

	Liveness probe failed: HTTP probe failed with statuscode: 429

More details in https://coreos.slack.com/archives/C01RLRP2F9N
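For context, an HTTP probe succeeds only when the response status is in the 2xx–3xx range, so a 429 from API Priority and Fairness is counted the same as a genuinely unhealthy response. A minimal sketch of that classification (a hypothetical helper, not the kubelet's actual code):

```python
def probe_result(status_code: int) -> str:
    """Classify an HTTP probe response the way kubelet does:
    2xx and 3xx are success; everything else, including 429, is failure."""
    if 200 <= status_code < 400:
        return "success"
    return "failure"

# A rate-limited probe is indistinguishable from an unhealthy pod:
print(probe_result(200))  # success
print(probe_result(429))  # failure
```

This is why rate-limiting probe traffic at all can take down otherwise healthy system services.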

Comment 1 David Eads 2021-03-11 18:29:01 UTC
I've opened https://github.com/openshift/cluster-kube-apiserver-operator/pull/1060 as a possibility, but I'd like a review from Abu

Comment 2 Michal Fojtik 2021-03-29 11:57:02 UTC
*** Bug 1939732 has been marked as a duplicate of this bug. ***

Comment 4 Ke Wang 2021-04-08 05:26:44 UTC
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.0-0.nightly-2021-04-07-115443   True        False         75m     Cluster version is 4.8.0-0.nightly-2021-04-07-115443

Per the PR https://github.com/openshift/cluster-kube-apiserver-operator/pull/1060 change, a new flowschema named probes should have been created, but it was actually not found.

$ oc get flowschema
NAME                                PRIORITYLEVEL                       MATCHINGPRECEDENCE   DISTINGUISHERMETHOD   AGE    MISSINGPL
exempt                              exempt                              1                    <none>                105m   False
openshift-apiserver-sar             exempt                              2                    ByUser                92m    False
openshift-oauth-apiserver-sar       exempt                              2                    ByUser                75m    False
system-leader-election              leader-election                     100                  ByUser                105m   False
workload-leader-election            leader-election                     200                  ByUser                105m   False
openshift-sdn                       system                              500                  ByUser                98m    False
system-nodes                        system                              500                  ByUser                105m   False
kube-controller-manager             workload-high                       800                  ByNamespace           105m   False
kube-scheduler                      workload-high                       800                  ByNamespace           105m   False
kube-system-service-accounts        workload-high                       900                  ByNamespace           105m   False
openshift-apiserver                 workload-high                       1000                 ByUser                92m    False
openshift-controller-manager        workload-high                       1000                 ByUser                104m   False
openshift-oauth-apiserver           workload-high                       1000                 ByUser                75m    False
openshift-oauth-server              workload-high                       1000                 ByUser                75m    False
openshift-apiserver-operator        openshift-control-plane-operators   2000                 ByUser                92m    False
openshift-authentication-operator   openshift-control-plane-operators   2000                 ByUser                75m    False
openshift-etcd-operator             openshift-control-plane-operators   2000                 ByUser                96m    False
openshift-kube-apiserver-operator   openshift-control-plane-operators   2000                 ByUser                95m    False
openshift-monitoring-metrics        workload-high                       2000                 ByUser                95m    False
service-accounts                    workload-low                        9000                 ByUser                105m   False
global-default                      global-default                      9900                 ByUser                105m   False
catch-all                           catch-all                           10000                ByUser                105m   False

Checked the CVO pod to make sure the bug's PR manifest is present in the CVO file system:

$ oc get pods -A | grep openshift-cluster-version
openshift-cluster-version          cluster-version-operator-6555549458-r5bdn     1/1     Running     0          107m

$ oc exec -n openshift-cluster-version cluster-version-operator-6555549458-r5bdn -it -- cat /release-manifests/0000_20_kube-apiserver-operator_08_flowschema.yaml | grep -A100 '# probes'

# probes need to always work.  If probes get 429s, then the kubelet will treat them as probe failures.
# Since probes are cheap to run, we won't rate limit these at all.
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: probes
spec:
  distinguisherMethod:
    type: ByUser
  matchingPrecedence: 2
  priorityLevelConfiguration:
    name: exempt
  rules:
    - nonResourceRules:
        - nonResourceURLs:
            - '/healthz'
            - '/readyz'
            - '/livez'
          verbs:
            - 'get'
      subjects:
        - group:
            name: system:authenticated
          kind: Group
        - group:
            name: system:unauthenticated
          kind: Group

The bug's PR manifest update is in, but it doesn't take effect; compared with the other flowschemas, it is missing the following annotations:

annotations:
    include.release.openshift.io/ibm-cloud-managed: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
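For comparison, with those annotations in place the manifest's metadata section would presumably look like the following (a sketch based on the other flowschema manifests, not the merged fix):

```yaml
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: probes
  annotations:
    include.release.openshift.io/ibm-cloud-managed: "true"
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
```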

I copied the new probes flowschema to a YAML file and applied it; the probes flowschema was then created.

$ cat probes-fs.yaml 
apiVersion: flowcontrol.apiserver.k8s.io/v1beta1
kind: FlowSchema
metadata:
  name: probes
spec:
  distinguisherMethod:
    type: ByUser
  matchingPrecedence: 2
  priorityLevelConfiguration:
    name: exempt
  rules:
    - nonResourceRules:
        - nonResourceURLs:
            - '/healthz'
            - '/readyz'
            - '/livez'
          verbs:
            - 'get'
      subjects:
        - group:
            name: system:authenticated
          kind: Group
        - group:
            name: system:unauthenticated
          kind: Group

$ oc apply -f probes-fs.yaml 
flowschema.flowcontrol.apiserver.k8s.io/probes created

$ oc get flowschema | grep probes
probes                  exempt              2               ByUser          6m6s   False

Since the PR fix doesn't work as expected, assigning it back.

Comment 5 Ke Wang 2021-11-10 14:56:59 UTC
$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.8.20    True        False         3h56m   Cluster version is 4.8.20

$ oc get flowschema | grep probe
probes                   exempt                              2                    <none>                3h58m   False

$ oc edit  kubeapiserver/cluster # change the loglevel to TraceAll
kubeapiserver.operator.openshift.io/cluster edited

After the kube-apiservers finished restarting, make some readyz requests to the apiserver:

$ for i in {1..30}; do curl -k https://api.kewang-1048g1.qe.gcp...com:6443/readyz; done

$ kas_pods=$(oc get pods -n openshift-kube-apiserver | grep 'kube-apiserver' | awk '{print $1}'); for pod in $kas_pods; do oc -n openshift-kube-apiserver logs $pod -c kube-apiserver | grep 'exempt' | grep 'readyz' | head -1;done

I1110 14:53:13.679381      20 apf_controller.go:792] startRequest(RequestDigest{RequestInfo: &request.RequestInfo{IsResourceRequest:false, Path:"/readyz", Verb:"get", APIPrefix:"", APIGroup:"", APIVersion:"", Namespace:"", Resource:"", Subresource:"", Name:"", Parts:[]string(nil)}, User: &user.DefaultInfo{Name:"system:anonymous", UID:"", Groups:[]string{"system:unauthenticated"}, Extra:map[string][]string(nil)}}) => fsName="probes", distMethod=(*v1beta1.FlowDistinguisherMethod)(nil), plName="exempt", immediate

The new probes flowschema works as expected.
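The selection behavior visible in the trace — among all FlowSchemas matching a request, the one with the lowest matchingPrecedence wins, so probes (precedence 2) beats catch-all (10000) for /readyz — can be sketched as follows (a simplified model, not the real apf_controller logic):

```python
def pick_flowschema(schemas, matches_request):
    """Return the matching FlowSchema with the lowest matchingPrecedence,
    mimicking API Priority & Fairness schema selection (simplified)."""
    candidates = [s for s in schemas if matches_request(s)]
    if not candidates:
        return None
    return min(candidates, key=lambda s: s["matchingPrecedence"])

# Simplified: both schemas match a GET /readyz from an unauthenticated user.
schemas = [
    {"name": "catch-all", "matchingPrecedence": 10000},
    {"name": "probes", "matchingPrecedence": 2},
]
chosen = pick_flowschema(schemas, lambda s: True)
print(chosen["name"])  # probes
```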

Comment 7 Ke Wang 2021-11-10 15:10:01 UTC
Per Comment 5, moving the bug to VERIFIED.

Comment 10 errata-xmlrpc 2021-11-23 11:33:27 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.8.21 bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:4716

