The apirequestcount audit events add noise to the audit log without providing much value.
As part of pre-merge testing for https://github.com/openshift/cluster-kube-apiserver-operator/pull/1304, I created a cluster and tested the changes. It looks like the changes are not fully working as expected; please see the details below.

Check the cluster version:

$ oc get clusterversion
NAME      VERSION                                                    AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.ci.test-2022-02-08-060432-ci-ln-y0jzcpt-latest   True        False         137m    Cluster version is 4.10.0-0.ci.test-2022-02-08-060432-ci-ln-y0jzcpt-latest

Run the script below to filter the apirequestcount entries from audit.log:

$ cat manage_field_check_api.sh
NOW=$(date -u "+%s"); echo "$NOW"; echo "$NOW" > now
oc get apirequestcount
oc get apirequestcount pods.v1
oc get pods -A
sleep 20
oc get apirequestcount pods.v1
PATTERN="apirequestcount"
KAS_PODS=$(oc get pods -n openshift-kube-apiserver | grep 'apiserver' | grep -v 'guard' | awk '{print $1}')
for i in $KAS_PODS; do
  oc exec -n openshift-kube-apiserver $i -- grep -iEr $PATTERN /var/log/kube-apiserver/audit.log || true
done | jq -c 'select (.requestReceivedTimestamp | .[0:19] + "Z" | fromdateiso8601 > '"`cat now`)" > kas_auditlog_api.log
OAS_PODS=$(oc get pods -n openshift-apiserver | grep 'apiserver' | awk '{print $1}')
for i in $OAS_PODS; do
  oc exec -n openshift-apiserver $i -- grep -iEr $PATTERN /var/log/openshift-apiserver
done | jq -c 'select (.requestReceivedTimestamp | .[0:19] + "Z" | fromdateiso8601 > '"`cat now`)" > oas_auditlog_api.log

Check the filtered audit logs:

$ cat kas_auditlog_api.log | wc -l
9

$ cat kas_auditlog_api.log
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"fc2fcd92-b431-4f52-b5a5-a31b0a254065","stage":"ResponseComplete","requestURI":"/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io","verb":"get","user":{"username":"system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator","uid":"26306cda-734f-4b57-9f6a-33a088da706e","groups":["system:serviceaccounts","system:serviceaccounts:openshift-kube-apiserver-operator","system:authenticated"],"extra":{"authentication.kubernetes.io/pod-name":["kube-apiserver-operator-d4f549658-q4bn5"],"authentication.kubernetes.io/pod-uid":["dcce00f5-712f-43c5-8251-03cef98d0c8f"]}},"sourceIPs":["10.0.0.3"],"userAgent":"Go-http-client/2.0","objectRef":{"resource":"customresourcedefinitions","name":"apirequestcounts.apiserver.openshift.io","apiGroup":"apiextensions.k8s.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2022-02-08T08:54:32.436996Z","stageTimestamp":"2022-02-08T08:54:32.441290Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:openshift:operator:kube-apiserver-operator\" of ClusterRole \"cluster-admin\" to ServiceAccount \"kube-apiserver-operator/openshift-kube-apiserver-operator\""}}
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"ce4238fd-ee3b-4303-87ae-fc2519b0eb72","stage":"ResponseComplete","requestURI":"/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io","verb":"get","user":{"username":"system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator","uid":"26306cda-734f-4b57-9f6a-33a088da706e","groups":["system:serviceaccounts","system:serviceaccounts:openshift-kube-apiserver-operator","system:authenticated"],"extra":{"authentication.kubernetes.io/pod-name":["kube-apiserver-operator-d4f549658-q4bn5"],"authentication.kubernetes.io/pod-uid":["dcce00f5-712f-43c5-8251-03cef98d0c8f"]}},"sourceIPs":["10.0.0.3"],"userAgent":"Go-http-client/2.0","objectRef":{"resource":"customresourcedefinitions","name":"apirequestcounts.apiserver.openshift.io","apiGroup":"apiextensions.k8s.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2022-02-08T08:54:34.437658Z","stageTimestamp":"2022-02-08T08:54:34.441866Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:openshift:operator:kube-apiserver-operator\" of ClusterRole \"cluster-admin\" to ServiceAccount \"kube-apiserver-operator/openshift-kube-apiserver-operator\""}} {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"a5e7c707-3062-4047-bdfd-edbb7801858c","stage":"ResponseComplete","requestURI":"/apis/apiserver.openshift.io/v1/apirequestcounts/pods.v1","verb":"get","user":{"username":"system:admin","groups":["system:masters","system:authenticated"]},"sourceIPs":["157.44.140.28"],"userAgent":"oc/4.9.0 (linux/amd64) kubernetes/96e95ce","objectRef":{"resource":"apirequestcounts","name":"pods.v1","apiGroup":"apiserver.openshift.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2022-02-08T08:54:50.978400Z","stageTimestamp":"2022-02-08T08:54:50.987686Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}} {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"93882eb0-0257-4d97-8a98-460fbb4f4d4b","stage":"ResponseComplete","requestURI":"/apis/apiextensions.k8s.io/v1/customresourcedefinitions/apirequestcounts.apiserver.openshift.io","verb":"get","user":{"username":"system:serviceaccount:openshift-kube-apiserver-operator:kube-apiserver-operator","uid":"26306cda-734f-4b57-9f6a-33a088da706e","groups":["system:serviceaccounts","system:serviceaccounts:openshift-kube-apiserver-operator","system:authenticated"],"extra":{"authentication.kubernetes.io/pod-name":["kube-apiserver-operator-d4f549658-q4bn5"],"authentication.kubernetes.io/pod-uid":["dcce00f5-712f-43c5-8251-03cef98d0c8f"]}},"sourceIPs":["10.0.0.3"],"userAgent":"Go-http-client/2.0","objectRef":{"resource":"customresourcedefinitions","name":"apirequestcounts.apiserver.openshift.io","apiGroup":"apiextensions.k8s.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2022-02-08T08:54:58.274306Z","stageTimestamp":"2022-02-08T08:54:58.278157Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:openshift:operator:kube-apiserver-operator\" of ClusterRole \"cluster-admin\" to ServiceAccount \"kube-apiserver-operator/openshift-kube-apiserver-operator\""}} 
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"a039e56b-80f9-42cc-8960-c85099c2922b","stage":"ResponseComplete","requestURI":"/apis/apiserver.openshift.io/v1/apirequestcounts?limit=500","verb":"list","user":{"username":"system:admin","groups":["system:masters","system:authenticated"]},"sourceIPs":["157.44.140.28"],"userAgent":"oc/4.9.0 (linux/amd64) kubernetes/96e95ce","objectRef":{"resource":"apirequestcounts","apiGroup":"apiserver.openshift.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2022-02-08T08:54:25.766004Z","stageTimestamp":"2022-02-08T08:54:26.064351Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}} {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"fed58c10-4698-47a5-a1f4-8cd996966d7f","stage":"ResponseComplete","requestURI":"/apis/apiserver.openshift.io/v1/apirequestcounts/pods.v1","verb":"get","user":{"username":"system:admin","groups":["system:masters","system:authenticated"]},"sourceIPs":["157.44.140.28"],"userAgent":"oc/4.9.0 (linux/amd64) kubernetes/96e95ce","objectRef":{"resource":"apirequestcounts","name":"pods.v1","apiGroup":"apiserver.openshift.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2022-02-08T08:54:27.726753Z","stageTimestamp":"2022-02-08T08:54:27.734659Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}} From above logs, it is clear that we have log entries with "resource":"apirequestcounts" for "apiGroup":"apiserver.openshift.io" and this is ideally we want to avoid it from the audit.log. sanchezl Could you please check if this is expected here?
Fix included in bug 2053582.
Hello jmekkatt, your results look correct:
- Three of the events are requests to the CRD API, looking up the apirequestcounts CRD.
- The rest are from the system:admin user (i.e. your oc commands), and we only filter events from system:apiserver.
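For context, the fix drops these events via a rule in the kube-apiserver audit policy. A minimal sketch of what such a drop rule looks like in a Kubernetes audit policy (the actual rule shipped by the operator is inspected in the verification comment below):

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# level "None" means matching events are not logged at all.
- level: None
  users:
  - system:apiserver
  resources:
  - group: apiserver.openshift.io
    resources:
    - apirequestcounts
    - apirequestcounts/*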
Inline test steps used to verify the fix.

Scenario on unfixed build:

Tests carried out on an older OCP version where the fix is NOT present (to cross-check the current behavior).

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE    STATUS
version   4.10.0-0.nightly-2022-02-26-230022   True        False         3h51m    Cluster version is 4.10.0-0.nightly-2022-02-26-230022

$ cat check_and_filter_apirequestcount.sh
PATTERN="apirequestcount"
PATTERN_SECOND="system:apiserver"
PATTERN_EXCLUDE="customresourcedefinitions"
KAS_PODS=$(oc get pods -n openshift-kube-apiserver | grep 'apiserver' | grep -v 'guard' | awk '{print $1}')
for i in $KAS_PODS; do
  oc exec -n openshift-kube-apiserver $i -- grep -iEr $PATTERN /var/log/kube-apiserver | grep $PATTERN_SECOND | grep -v $PATTERN_EXCLUDE || true
done > kas_auditlog_api.log

$ sh check_and_filter_apirequestcount.sh
$ cat kas_auditlog_api.log | wc -l
55441
$ head kas_auditlog_api.log -n 2
/var/log/kube-apiserver/audit.log:{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"de0ca14f-d3fe-48f7-bc6f-d84bd37baf96","stage":"ResponseComplete","requestURI":"/apis/apiserver.openshift.io/v1/apirequestcounts","verb":"list","user":{"username":"system:apiserver","uid":"e927d0a5-4ee2-4ee4-b49d-28d370071abf","groups":["system:masters"]},"sourceIPs":["::1"],"userAgent":"kube-apiserver/v1.23.3+e419edf (linux/amd64) kubernetes/6f5a529","objectRef":{"resource":"apirequestcounts","apiGroup":"apiserver.openshift.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"status":"Failure","reason":"ServiceUnavailable","code":503},"requestReceivedTimestamp":"2022-03-03T03:19:58.661436Z","stageTimestamp":"2022-03-03T03:19:58.661560Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}}
/var/log/kube-apiserver/audit.log:{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"6de5acbf-8419-4cc8-8c86-4765626b793c","stage":"ResponseComplete","requestURI":"/apis/apiserver.openshift.io/v1/apirequestcounts","verb":"list","user":{"username":"system:apiserver","uid":"e927d0a5-4ee2-4ee4-b49d-28d370071abf","groups":["system:masters"]},"sourceIPs":["::1"],"userAgent":"kube-apiserver/v1.23.3+e419edf (linux/amd64) kubernetes/6f5a529","objectRef":{"resource":"apirequestcounts","apiGroup":"apiserver.openshift.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2022-03-03T03:20:03.704216Z","stageTimestamp":"2022-03-03T03:20:03.990843Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}}

The above confirms the bug: apirequestcounts entries made by the system:apiserver user are present in the audit logs.

Scenario on fixed build:

Tests on a fixed build with the different kube-apiserver audit profiles, as follows.

Step 1: Get the cluster version.

$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE    STATUS
version   4.11.0-0.nightly-2022-02-27-122819   True        False         5h38m    Cluster version is 4.11.0-0.nightly-2022-02-27-122819

Step 2: Check the apirequestcount rule in the policy.yaml file.
$ kas_pod=$(oc get pods -n openshift-kube-apiserver | grep 'apiserver' | grep -v 'guard' | awk 'NR==1{print $1}')
$ oc exec -n openshift-kube-apiserver $kas_pod -- cat /etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-audit-policies/policy.yaml | grep -A5 "group: apiserver"
  - group: apiserver.openshift.io
    resources:
    - apirequestcounts
    - apirequestcounts/*
  users:
  - system:apiserver

Step 3: Check the audit profile value (by default it is "Default") and run the inline script to filter audit-log entries for apirequestcounts created by the user system:apiserver.

$ oc get apiserver/cluster -ojson | jq .spec.audit
{
  "profile": "Default"
}

$ cat check_and_filter_apirequestcount.sh
PATTERN="apirequestcount"
PATTERN_SECOND="system:apiserver"
PATTERN_EXCLUDE="customresourcedefinitions"
KAS_PODS=$(oc get pods -n openshift-kube-apiserver | grep 'apiserver' | grep -v 'guard' | awk '{print $1}')
for i in $KAS_PODS; do
  oc exec -n openshift-kube-apiserver $i -- grep -iEr $PATTERN /var/log/kube-apiserver | grep $PATTERN_SECOND | grep -v $PATTERN_EXCLUDE || true
done > kas_auditlog_api.log

$ sh check_and_filter_apirequestcount.sh
$ cat kas_auditlog_api.log | wc -l
0

Step 4: Patch the profile value "WriteRequestBodies" into the cluster, wait for the kube-apiserver pods to reach the new revision, then repeat Step 3.

$ oc patch apiserver cluster -p '{"spec": {"audit": {"profile": "WriteRequestBodies"}}}' --type merge
apiserver.config.openshift.io/cluster patched
$ oc get apiserver/cluster -ojson | jq .spec.audit
{
  "profile": "WriteRequestBodies"
}
$ sh check_and_filter_apirequestcount.sh
$ cat kas_auditlog_api.log | wc -l
0

Step 5: Patch the profile value "AllRequestBodies" into the cluster, wait for the kube-apiserver pods to reach the new revision, then repeat Step 3.

$ oc patch apiserver cluster -p '{"spec": {"audit": {"profile": "AllRequestBodies"}}}' --type merge
apiserver.config.openshift.io/cluster patched
$ oc get apiserver/cluster -ojson | jq .spec.audit
{
  "profile": "AllRequestBodies"
}
$ sh check_and_filter_apirequestcount.sh
$ cat kas_auditlog_api.log | wc -l
0

Step 6: Patch the profile value "None" into the cluster, wait for the kube-apiserver pods to reach the new revision, then repeat Step 3.

$ oc patch apiserver cluster -p '{"spec": {"audit": {"profile": "None"}}}' --type merge
apiserver.config.openshift.io/cluster patched
$ oc get apiserver/cluster -ojson | jq .spec.audit
{
  "profile": "None"
}
$ sh check_and_filter_apirequestcount.sh
$ cat kas_auditlog_api.log | wc -l
0

From the steps above: with every supported kube-apiserver audit profile, the apirequestcounts request entries made by system:apiserver are not part of the audit logs. Hence, moving the bug to VERIFIED.
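(For completeness, Steps 3-6 can be scripted. A rough sketch of a hypothetical helper, where the rollout wait is simplified to a sleep; a proper check would confirm the kube-apiserver pods have rolled out to the new revision before re-running the filter:

#!/bin/bash
# Cycle through the supported audit profiles and re-run the filter after each patch.
for profile in Default WriteRequestBodies AllRequestBodies None; do
  oc patch apiserver cluster --type merge -p "{\"spec\": {\"audit\": {\"profile\": \"$profile\"}}}"
  # Simplified: give the kube-apiserver static pods time to restart on the new revision.
  sleep 600
  sh check_and_filter_apirequestcount.sh
  echo "$profile: $(wc -l < kas_auditlog_api.log) apirequestcount events from system:apiserver"
done)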
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:5069