Bug 1885358 - add p&f configuration to protect openshift traffic
Summary: add p&f configuration to protect openshift traffic
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: kube-apiserver
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 4.7.0
Assignee: Abu Kashem
QA Contact: Ke Wang
URL:
Whiteboard:
Depends On:
Blocks: 1885356
 
Reported: 2020-10-05 17:58 UTC by Abu Kashem
Modified: 2021-02-24 15:23 UTC
CC List: 4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of: 1885356
Environment:
Last Closed: 2021-02-24 15:23:10 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift cluster-kube-apiserver-operator pull 966 0 None closed bug 1885358: protect openshift traffic by using dedicated flowschema 2021-02-10 16:39:25 UTC
Red Hat Product Errata RHSA-2020:5633 0 None None None 2021-02-24 15:23:51 UTC

Description Abu Kashem 2020-10-05 17:58:27 UTC
+++ This bug was initially created as a clone of Bug #1885356 +++

+++ This bug was initially created as a clone of Bug #1885353 +++

Add P&F (API Priority and Fairness) configuration to protect OpenShift traffic: define dedicated FlowSchema and PriorityLevelConfiguration objects that protect OpenShift-specific traffic (a sketch of such a priority level follows the list below).

- SubjectAccessReviews (SAR) and TokenReviews from the openshift-apiserver (`oas`) or the OAuth server are very important.
- openshift-controller-manager traffic, other `oas` requests, and '/metrics' requests from openshift-monitoring are as important as kube-controller-manager (`kcm`) traffic.
- Control plane operators are important (the kube-apiserver operator `kas-o`, the authentication operator, the etcd operator).
- The default `workload-low` priority level sits below the traffic defined above.
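
For illustration, a dedicated priority level such as `openshift-control-plane-operators` would look roughly like the sketch below (flowcontrol v1alpha1, matching the flowschema example further down). The concurrency shares and queuing numbers here are placeholders, not the values the operators actually ship.

apiVersion: flowcontrol.apiserver.k8s.io/v1alpha1
kind: PriorityLevelConfiguration
metadata:
  name: openshift-control-plane-operators
spec:
  type: Limited
  limited:
    # placeholder share; the real value is chosen by the operator
    assuredConcurrencyShares: 10
    limitResponse:
      type: Queue
      queuing:
        # placeholder queuing parameters
        queues: 128
        handSize: 6
        queueLengthLimit: 50

Each FlowSchema then points at one of these priority levels via spec.priorityLevelConfiguration.name.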

Comment 1 Abu Kashem 2020-10-06 14:01:33 UTC
These are all the relevant PRs for this BZ:
- https://github.com/openshift/cluster-kube-apiserver-operator/pull/966
- https://github.com/openshift/cluster-etcd-operator/pull/462
- https://github.com/openshift/cluster-authentication-operator/pull/356
- https://github.com/openshift/cluster-openshift-apiserver-operator/pull/398
- https://github.com/openshift/cluster-openshift-controller-manager-operator/pull/181

One way to test this is to enable `Trace` level logging for the kube-apiserver: at `--v=7`, priority and fairness logs which flowschema it used for each incoming request.
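
A minimal way to raise the verbosity, using only the kubeapiserver/cluster resource already referenced in this bug (the operator then rolls out new kube-apiserver revisions, so the higher verbosity takes a few minutes to show up):

# raise kube-apiserver verbosity so P&F logs which flowschema handled each request
$ oc patch kubeapiserver/cluster --type=merge -p '{"spec":{"logLevel":"Trace"}}'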

So we need to go through each flowschema, take its service account, and search for it in the apiserver logs to see whether the requests get the right priority level selected.
For example, take the following flowschema. It says requests from the kube-apiserver-operator should be assigned to the "openshift-control-plane-operators" priority level configuration.

apiVersion: flowcontrol.apiserver.k8s.io/v1alpha1
kind: FlowSchema
metadata:
  name: openshift-kube-apiserver-operator
spec:
  distinguisherMethod:
    type: ByUser
  matchingPrecedence: 2000
  priorityLevelConfiguration:
    name: openshift-control-plane-operators
  rules:
  - resourceRules:
    - apiGroups:
      - '*'
      clusterScope: true
      namespaces:
      - '*'
      resources:
      - '*'
      verbs:
      - '*'
    subjects:
    - kind: ServiceAccount
      serviceAccount:
        name: kube-apiserver-operator
        namespace: openshift-kube-apiserver-operator

We can do the following search:
> oc -n openshift-kube-apiserver logs kube-apiserver-ip-10-0-142-52.ec2.internal -c kube-apiserver | grep 'dispatching request' | grep 'system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator'


The above search should yield the following line(s)
> I1006 13:56:49.528305       1 queueset.go:572] QS(openshift-control-plane-operators) at r=2020-10-06 13:56:49.528295756 v=42732.803929426s: dispatching request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/openshift-config-managed/secrets", Verb:"list", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"openshift-config-managed", Resource:"secrets", Subresource:"", Name:"", Parts:[]string{"secrets"}} &user.DefaultInfo{Name:"system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator", UID:"929d69a1-fe70-4bb7-a4cf-df2cb386cb5c", Groups:[]string{"system:serviceaccounts", "system:serviceaccounts:openshift-kube-apiserver-operator", "system:authenticated"}, Extra:map[string][]string(nil)} from queue 58 with virtual start time 42732.803929426s, queue will have 0 waiting & 1 executing

The "QS(openshift-control-plane-operators)" matches the desired "priorityLevelConfiguration" defined in the flowschema.

Requests can land on any kube-apiserver instance, so we should search the logs of all instances for a match.
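
One way to do that, sketched below, is to tally the QS(...) value for every matching request across all kube-apiserver pods; if the flowschema works as intended, the tally should only show openshift-control-plane-operators (assumes GNU grep for the -o flag):

$ for pod in $(oc -n openshift-kube-apiserver get pods -o name | grep kube-apiserver); do
    # collect the priority level P&F picked for each request from the operator SA
    oc -n openshift-kube-apiserver logs $pod -c kube-apiserver \
      | grep 'dispatching request' \
      | grep 'system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator' \
      | grep -o 'QS([a-z-]*)'
  done | sort | uniq -c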

Comment 3 Ke Wang 2020-10-15 08:21:36 UTC
$ oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.7.0-0.nightly-2020-10-15-011122   True        False         104m    Cluster version is 4.7.0-0.nightly-2020-10-15-011122

$ oc get FlowSchema
NAME                                PRIORITYLEVEL                             MATCHINGPRECEDENCE   DISTINGUISHERMETHOD   AGE   MISSINGPL
exempt                              exempt                                    1                    <none>                82m   False
system-leader-election              leader-election                           100                  ByUser                82m   False
workload-leader-election            leader-election                           200                  ByUser                82m   False
system-nodes                        system                                    500                  ByUser                82m   False
openshift-apiserver-sar             openshift-aggregated-api-delegated-auth   600                  ByUser                66m   False
openshift-oauth-apiserver-sar       openshift-aggregated-api-delegated-auth   600                  ByUser                54m   False
kube-controller-manager             workload-high                             800                  ByNamespace           82m   False
kube-scheduler                      workload-high                             800                  ByNamespace           82m   False
kube-system-service-accounts        workload-high                             900                  ByNamespace           82m   False
openshift-apiserver                 workload-high                             1000                 ByUser                66m   False
openshift-controller-manager        workload-high                             1000                 ByUser                82m   False
openshift-oauth-apiserver           workload-high                             1000                 ByUser                54m   False
openshift-oauth-server              workload-high                             1000                 ByUser                54m   False
openshift-apiserver-operator        openshift-control-plane-operators         2000                 ByUser                66m   False
openshift-authentication-operator   openshift-control-plane-operators         2000                 ByUser                54m   False
openshift-etcd-operator             openshift-control-plane-operators         2000                 ByUser                66m   False
openshift-kube-apiserver-operator   openshift-control-plane-operators         2000                 ByUser                66m   False
openshift-monitoring-metrics        workload-high                             2000                 ByUser                66m   False
service-accounts                    workload-low                              9000                 ByUser                82m   False
global-default                      global-default                            9900                 ByUser                82m   False
catch-all                           catch-all                                 10000                ByUser                82m   False
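
The same log check can be repeated for any of the flowschemas above; the service account to grep for can be read off the flowschema itself, for example (a jsonpath sketch; prefix each output line with system:serviceaccount: to build the username to search the logs for):

$ oc get flowschema openshift-etcd-operator \
    -o jsonpath='{range .spec.rules[*].subjects[*]}{.serviceAccount.namespace}{":"}{.serviceAccount.name}{"\n"}{end}'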

After changing the kubeapiserver/cluster logLevel to Trace, we can catch the messages we want.

$ oc get kubeapiserver/cluster -oyaml | grep ' logLevel:'
  logLevel: Trace
  
$ kas_pods=$(oc get pods -n openshift-kube-apiserver | grep 'kube-apiserver' | awk '{print $1}')
$ for pod in $kas_pods; do oc -n openshift-kube-apiserver logs $pod -c kube-apiserver | grep 'dispatching request' | grep 'system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator';done
...
kube-apiserver-ip-10-0-205-139.us-east-2.compute.internal.log:I1015 07:59:05.599819      18 queueset.go:601] QS(openshift-control-plane-operators) at r=2020-10-15 07:59:05.599808091 v=91.926565280s: dispatching request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/openshift-kube-apiserver/pods", Verb:"list", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"openshift-kube-apiserver", Resource:"pods", Subresource:"", Name:"", Parts:[]string{"pods"}} &user.DefaultInfo{Name:"system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator", UID:"4635bc95-9bf7-4a9e-a3f5-f27db7eaa24f", Groups:[]string{"system:serviceaccounts", "system:serviceaccounts:openshift-kube-apiserver-operator", "system:authenticated"}, Extra:map[string][]string(nil)} from queue 58 with virtual start time 91.926565280s, queue will have 0 waiting & 1 executing
kube-apiserver-ip-10-0-205-139.us-east-2.compute.internal.log:I1015 07:59:05.799884      18 queueset.go:601] QS(openshift-control-plane-operators) at r=2020-10-15 07:59:05.799866250 v=91.936706025s: dispatching request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/openshift-kube-apiserver/pods", Verb:"list", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"openshift-kube-apiserver", Resource:"pods", Subresource:"", Name:"", Parts:[]string{"pods"}} &user.DefaultInfo{Name:"system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator", UID:"4635bc95-9bf7-4a9e-a3f5-f27db7eaa24f", Groups:[]string{"system:serviceaccounts", "system:serviceaccounts:openshift-kube-apiserver-operator", "system:authenticated"}, Extra:map[string][]string(nil)} from queue 58 with virtual start time 91.936706025s, queue will have 0 waiting & 1 executing
kube-apiserver-ip-10-0-205-139.us-east-2.compute.internal.log:I1015 07:59:05.999912      18 queueset.go:601] QS(openshift-control-plane-operators) at r=2020-10-15 07:59:05.999903224 v=91.948841891s: dispatching request &request.RequestInfo{IsResourceRequest:true, Path:"/api/v1/namespaces/openshift-kube-apiserver/pods", Verb:"list", APIPrefix:"api", APIGroup:"", APIVersion:"v1", Namespace:"openshift-kube-apiserver", Resource:"pods", Subresource:"", Name:"", Parts:[]string{"pods"}} &user.DefaultInfo{Name:"system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator", UID:"4635bc95-9bf7-4a9e-a3f5-f27db7eaa24f", Groups:[]string{"system:serviceaccounts", "system:serviceaccounts:openshift-kube-apiserver-operator", "system:authenticated"}, Extra:map[string][]string(nil)} from queue 58 with virtual start time 91.948841891s, queue will have 0 waiting & 1 executing
...

Searched the logs from all instances and found matches, so moving the bug to VERIFIED.

Comment 6 errata-xmlrpc 2021-02-24 15:23:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: OpenShift Container Platform 4.7.0 security, bug fix, and enhancement update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2020:5633

