managedFields are dropped from audit request and response bodies via the newly introduced omitManagedFields fields in the audit policy. We need to update the audit configuration object accordingly:
- update the audit configuration assets in library-go
- revendor library-go for all KAS, OAS, and OAuth apiserver operators

KEP: https://github.com/kubernetes/enhancements/pull/2982/files
PR: https://github.com/kubernetes/kubernetes/pull/94986
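For reference, a minimal sketch of the two new fields (the file name and values are illustrative, not the operator-rendered assets): omitManagedFields at the Policy level sets the default, and a PolicyRule can override it per rule.

$ cat sample-audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
# policy-wide default: drop managedFields from request/response bodies in audit events
omitManagedFields: true
rules:
# per-rule override: keep managedFields for events matching this rule
- level: RequestResponse
  omitManagedFields: false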
1. Checked the clusterversion.
$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2022-01-26-234447   True        False         96m     Cluster version is 4.10.0-0.nightly-2022-01-26-234447

2. The "omitManagedFields" value in KAS and OAS is true by default.
$ kas_pod=$(oc get pods -n openshift-kube-apiserver | grep 'apiserver' | grep -v 'guard' | awk 'NR==1{print $1}')
$ oc exec -n openshift-kube-apiserver $kas_pod -- cat /etc/kubernetes/static-pod-resources/configmaps/kube-apiserver-audit-policies/policy.yaml | grep -iE "omitManagedFields"
omitManagedFields: true
$ oas_pod=$(oc get pods -n openshift-apiserver | grep 'apiserver' | awk 'NR==1{print $1}')
$ oc exec -n openshift-apiserver $oas_pod -- cat /var/run/configmaps/audit/policy.yaml | grep -iE "omitManagedFields"
Defaulted container "openshift-apiserver" out of: openshift-apiserver, openshift-apiserver-check-endpoints, fix-audit-permissions (init)
omitManagedFields: true

3. Changed the profile to AllRequestBodies.
$ oc patch apiserver/cluster --type=merge -p '{"spec": {"audit": {"profile": "AllRequestBodies"}}}'
apiserver.config.openshift.io/cluster patched
$ oc get apiserver/cluster -ojson | jq .spec.audit
{
  "profile": "AllRequestBodies"
}

4. Made a few API requests.

5. Checked for the presence of "managedFields" in the audit logs of both KAS and OAS via an inline script.
$ cat manage_field_check.sh
PATTERN="\"ManagedFields\"\:\{|\"ManagedFields\"\:\["
KAS_PODS=$(oc get pods -n openshift-kube-apiserver | grep 'apiserver' | grep -v 'guard' | awk '{print $1}')
for i in $KAS_PODS; do
  oc exec -n openshift-kube-apiserver $i -- grep -iEr $PATTERN /var/log/kube-apiserver
done > kas_auditlog_filter.log
OAS_PODS=$(oc get pods -n openshift-apiserver | grep 'apiserver' | awk '{print $1}')
for i in $OAS_PODS; do
  oc exec -n openshift-apiserver $i -- grep -iEr $PATTERN /var/log/openshift-apiserver
done > oas_auditlog_filter.log
$ sh manage_field_check.sh
$ cat kas_auditlog_filter.log
$ cat oas_auditlog_filter.log

6. Changed the profile to WriteRequestBodies.
$ oc patch apiserver/cluster --type=merge -p '{"spec": {"audit": {"profile": "WriteRequestBodies"}}}'
apiserver.config.openshift.io/cluster patched
$ oc get apiserver/cluster -ojson | jq .spec.audit
{
  "profile": "WriteRequestBodies"
}

7. Executed step 5 again.

With both profiles, we don't see managedFields with values written to the audit logs. Note that managedFields with a null value is still included in the KAS audit logs, as in the following example entry:
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","auditID":"f7b82454-7918-4ab1-b7a7-32193ba6be9a","stage":"ResponseComplete","requestURI":"/apis/rbac.authorization.k8s.io/v1/clusterroles/openshift-cluster-monitoring-view","verb":"patch"...,"metadata":{"creationTimestamp":null,"managedFields":null,"resourceVersion":null,"uid":null}},"responseObject":{"kind":"ClusterRole","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"openshift-cluster-monitoring-view","uid":"cb040ae7-3573-4f65-9a96-ea0159c2b2d2"..}}

As the next step of validation, we need to change the policy.yaml key-value pair to omitManagedFields: false and repeat the tests.
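A stricter check is possible as a follow-up. This is a minimal sketch (assuming jq is available on the workstation and that the KAS audit files in the pod match /var/log/kube-apiserver/audit*.log): it ignores the "managedFields":null entries noted above and counts only events whose request or response object carries real managedFields content, reusing the $kas_pod variable from step 2.

$ oc exec -n openshift-kube-apiserver $kas_pod -- sh -c 'cat /var/log/kube-apiserver/audit*.log' \
    | jq -c 'select(.requestObject.metadata.managedFields != null or .responseObject.metadata.managedFields != null)' \
    | wc -l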
Test steps are as follows.

1. Check the OCP version.
$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2022-01-27-221656   True        False         102m    Cluster version is 4.10.0-0.nightly-2022-01-27-221656

2. Change the managementState of openshift-apiserver from "Managed" to "Unmanaged" and save the change.
$ oc edit openshiftapiserver cluster
$ oc get cm -n openshift-apiserver
NAME      DATA   AGE
audit     1      122m
audit-1   1      122m
----

3. Edit the ConfigMap, change the omitManagedFields value from true to false, and save the CM.
$ oc edit cm audit-1 -n openshift-apiserver
apiVersion: v1
data:
  policy.yaml: |
    apiVersion: audit.k8s.io/v1
    kind: Policy
    metadata:
      creationTimestamp: null
      name: policy
    omitManagedFields: false

4. Delete all the pods to sync the audit CM changes.
$ oc delete pods --all -n openshift-apiserver
pod "apiserver-59c7786c6d-fcmmk" deleted
pod "apiserver-59c7786c6d-fl4lm" deleted
pod "apiserver-59c7786c6d-vqxbb" deleted

5. Run the below script.
$ cat manage_field_check.sh
PATTERN="\"ManagedFields\"\:\{|\"ManagedFields\"\:\["
OAS_PODS=$(oc get pods -n openshift-apiserver | grep 'apiserver' | awk '{print $1}')
for i in $OAS_PODS; do
  oc exec -n openshift-apiserver $i -- grep -iEr $PATTERN /var/log/openshift-apiserver
done > oas_auditlog_filter.log
$ cat oas_auditlog_filter.log | wc -l
49

This confirms that the audit log holds "managedFields" when omitManagedFields is set to false.

To further test the policyRule side, the inline audit policy definition was updated.
$ oc edit cm audit-2 -n openshift-apiserver
apiVersion: v1
data:
  policy.yaml: |
    apiVersion: audit.k8s.io/v1
    kind: Policy
    metadata:
      creationTimestamp: null
      name: policy
    omitManagedFields: true
    omitStages:
    - RequestReceived
    rules:
    - level: RequestResponse
      omitManagedFields: false
----

Repeated steps 4 & 5 and could see the corresponding audit log entries with managedFields.
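One caveat when deleting the pods to pick up the ConfigMap change: wait until the replacement pods are Ready before grepping, otherwise the results may still reflect the old policy. A sketch, assuming the openshift-apiserver Deployment is named "apiserver" (as the pod names above suggest):

$ oc delete pods --all -n openshift-apiserver
$ oc rollout status deployment/apiserver -n openshift-apiserver --timeout=5m
$ oc get pods -n openshift-apiserver -o wide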
Note: when you reuse one cluster to repeat the test multiple times, "wc -l" is not enough, because audit log entries matched by a previous grep can still appear. You need to pay attention to the timestamps of the grep results; only entries with timestamps AFTER you update the audit policy should be used as proof of the policy change. Use my https://coreos.slack.com/archives/CS05TR7BK/p1643297817116900?thread_ts=1643020961.079000&cid=CS05TR7BK commands (also used in the audit case automation scripts), which record the timestamp of the change and only check entries after it.
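Since the Slack link may not be reachable for everyone, here is a minimal sketch of that idea (it mirrors the script used in the next comment; the log path is the OAS one, and the grep is assumed to run on a master node, e.g. inside oc debug node/<master> -- chroot /host):

NOW=$(date -u +%s)                       # record the time of the policy change
# ... update the audit policy, then wait for the apiserver pods to pick it up ...
grep -hE 'managedFields' /var/log/openshift-apiserver/audit*.log \
  | jq -c --argjson now "$NOW" \
      'select((.requestReceivedTimestamp[0:19] + "Z" | fromdateiso8601) > $now)'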
The following steps were performed as part of the validation of the fix.

1. Check the OCP version.
$ oc get clusterversion
NAME      VERSION                              AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.10.0-0.nightly-2022-01-30-073053   True        False         102m    Cluster version is 4.10.0-0.nightly-2022-01-30-073053

2. Change the managementState of openshift-apiserver from "Managed" to "Unmanaged" and save the change.
$ oc edit openshiftapiserver cluster

3. Identify the audit ConfigMaps.
$ oc get cm -n openshift-apiserver | grep audit
audit     1   4h51m
audit-1   1   4h51m
audit-2   1   3h47m

-------------------------------------------------
4. omitManagedFields vs policy rule: omitManagedFields is true for the rule level RequestResponse.
$ oc edit cm audit-2 -n openshift-apiserver
apiVersion: v1
data:
  policy.yaml: |
    apiVersion: audit.k8s.io/v1
    kind: Policy
    metadata:
      creationTimestamp: null
      name: policy
    omitManagedFields: false
    rules:
    - level: RequestResponse
      omitManagedFields: true
kind: ConfigMap

5. Delete all the pods to sync the audit CM changes.
$ oc delete pods --all -n openshift-apiserver

6. Execute the inline script to capture audit log entries with "managedFields".
$ cat oas_managed_fields.sh
NOW=$(date -u "+%s"); echo "$NOW"; echo "$NOW" > now
NS="testing-ticket"
oc new-project $NS
sleep 5
PATTERN="managedFields"
MASTERS=`oc get no | grep master | grep -o '^[^ ]*'`
for i in $MASTERS; do
  oc debug no/$i -- chroot /host bash -c "grep -hE '$PATTERN' /var/log/openshift-apiserver/audit*.log || true"
done | jq -c 'select (.requestReceivedTimestamp | .[0:19] + "Z" | fromdateiso8601 > '"`cat now`)" > audit_oas_after_the_now.log
oc delete project $NS

7. Check whether managedFields are included in the audit logs - managedFields are not logged for the level "RequestResponse".
$ LEVEL='"level":"RequestResponse"'
$ grep $LEVEL audit_oas_after_the_now.log | wc -l
0

-------------------------------------------------
8. omitManagedFields vs policy rule: omitManagedFields is false for the policy rule level RequestResponse.
$ oc edit cm audit-2 -n openshift-apiserver
apiVersion: v1
data:
  policy.yaml: |
    apiVersion: audit.k8s.io/v1
    kind: Policy
    metadata:
      creationTimestamp: null
      name: policy
    omitManagedFields: true
    rules:
    - level: RequestResponse
      omitManagedFields: false
kind: ConfigMap

9. Repeat steps 5 & 6.

10. Check whether managedFields are included in the audit logs - managedFields are included for rule level RequestResponse.
$ LEVEL='"level":"RequestResponse"'
$ grep $LEVEL audit_oas_after_the_now.log | wc -l
20

-------------------------------------------------
11. omitManagedFields vs policy definition: omitManagedFields is false and there is no policy rule definition.
$ oc edit cm audit-2 -n openshift-apiserver
apiVersion: v1
data:
  policy.yaml: |
    apiVersion: audit.k8s.io/v1
    kind: Policy
    metadata:
      creationTimestamp: null
      name: policy
    omitManagedFields: false
    omitStages:
    - RequestReceived

12. Repeat steps 5 & 6.

13. Check whether managedFields are included - the audit log file contains entries with managedFields.
$ cat audit_oas_after_the_now.log | wc -l
20

An example entry looks like the following:
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","auditID":"653ca4ac-34e8-4c30-9d93-24b33521a96a","stage":"ResponseComplete","requestURI":"/apis/project.openshift.io/v1/projects/testing-ticket","verb":"get","user":{"username":"system:serviceaccount:openshift-apiserver:openshift-apiserver-sa","groups":["system:serviceaccounts","system:serviceaccounts:openshift-apiserver","system:authenticated"],...,"managedFields":[{"manager":"Go-http-client","operation":"Update","apiVersion":"v1","time":"2022-01-31T14:42:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/description":{},"f:openshift.io/display-name":{},"f:openshift.io/requester":{}},"f:labels":{".":{},"f:kubernetes.io/metadata.name":{}}}}},{"manager":"openshift-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-01-31T14:42:49Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:finalizers":{}}},"subresource":"finalize"}]},"spec":{"finalizers":["kubernetes"]},"status":{"phase":"Active"}},"requestReceivedTimestamp":"2022-01-31T14:42:49.883880Z","stageTimestamp":"2022-01-31T14:42:49.901576Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:openshift:openshift-apiserver\" of ClusterRole \"cluster-admin\" to ServiceAccount \"openshift-apiserver-sa/openshift-apiserver\""}}

-------------------------------------------------
14. omitManagedFields vs policy definition: omitManagedFields is true and there is no policy rule definition.
$ oc edit cm audit-2 -n openshift-apiserver
apiVersion: v1
data:
  policy.yaml: |
    apiVersion: audit.k8s.io/v1
    kind: Policy
    metadata:
      creationTimestamp: null
      name: policy
    omitManagedFields: true
    omitStages:
    - RequestReceived

15. Repeat steps 5 & 6.

16. Check whether managedFields are included in the audit logs - the audit log does not contain any entry with managedFields.
$ cat audit_oas_after_the_now.log | wc -l
0

All the above checks work as expected, so the ticket is moved to verified. Thanks.
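One follow-up note (not part of the verification above, just a hedged sketch): since openshift-apiserver was switched to Unmanaged for these tests, it should be set back to Managed afterwards so the operator can restore the default audit ConfigMaps, for example:

$ oc patch openshiftapiserver cluster --type=merge -p '{"spec": {"managementState": "Managed"}}'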
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:0056