Audit logs lack the information to trace user logins because OAuth access/authorize tokens are not logged. They cannot be logged by default because pre-4.6 their names were sensitive information. Since 4.6, new tokens are stored under SHA-256-hashed names, so we can audit-log them without leaking sensitive information. Customers with strong audit requirements need this to adopt the 4.x platform.
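For illustration only, a minimal shell sketch of what "hashed token names" means here, assuming (this is an assumption for illustration, not taken from the PRs) that the stored object name is "sha256~" followed by the unpadded base64url SHA-256 digest of the token's secret part:

# Hypothetical raw token as handed to a client; only the hashed name printed
# below would ever appear in audit logs.
$ raw_token='sha256~<random-secret-part>'
$ secret="${raw_token#sha256~}"
# Assumed scheme: "sha256~" + base64url(SHA-256(secret)), padding stripped.
$ printf 'sha256~%s\n' "$(printf '%s' "$secret" | openssl dgst -sha256 -binary | base64 -w0 | tr '+/' '-_' | tr -d '=')"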
BU has agreed to fix this in 4.6; however, it cannot breach the Oct 22 GA date.
Reducing priority to non-blocker (matching "it cannot breach the Oct 22 GA date"). PRs are up and will be merged when CI is green.
$ oc get clusterversion
NAME      VERSION                               AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.6.0-0.nightly-2020-09-26-202331     True        False         3h56m   Cluster version is 4.6.0-0.nightly-2020-09-26-202331

$ oc get apiserver/cluster -ojson | jq .metadata.annotations
{
  "oauth-apiserver.openshift.io/secure-token-storage": "true",
  "release.openshift.io/create-only": "true"
}

Checked the OAuth tokens logged in audit.log:

sh-4.4# cat audit.log | head -1 | jq .
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "RequestResponse",
  "auditID": "2c341d37-7810-402a-bf1f-0adb4c2d33e8",
  "stage": "ResponseComplete",
  "requestURI": "/api/v1/namespaces/openshift-kube-controller-manager/configmaps/trusted-ca-bundle",
  "verb": "get",
  "user": {
    "username": "system:serviceaccount:openshift-kube-controller-manager-operator:kube-controller-manager-operator",
    "uid": "ae5f8238-c6b5-430c-b2ae-8214d712f3d1",
    "groups": [
      "system:serviceaccounts",
      "system:serviceaccounts:openshift-kube-controller-manager-operator",
      "system:authenticated"
    ]
  },
  ...
  "responseObject": {
    "kind": "ConfigMap",
    "apiVersion": "v1",
    "metadata": {
      "name": "trusted-ca-bundle",
      "namespace": "openshift-kube-controller-manager",
      "selfLink": "/api/v1/namespaces/openshift-kube-controller-manager/configmaps/trusted-ca-bundle",
      "uid": "e986715d-f2eb-4281-a118-fe100afb1817",
      "resourceVersion": "64450",
      "creationTimestamp": "2020-09-27T04:56:17Z",
      "labels": {
        "config.openshift.io/inject-trusted-cabundle": "true"
      }
    },
    "data": {
      "ca-bundle.crt": "# ACCVRAIZ1\n-----BEGIN CERTIFICATE----- ... ... -----END CERTIFICATE-----\n"
    }
  },
  ...
  "kind": "TokenReview",
  "apiVersion": "authentication.k8s.io/v1",
  "metadata": {
    "creationTimestamp": null
  },
  "spec": {
    "token": "eyJhbGciOiJSUzI1NiIsImtpZCI6Imt6eHE4ZWVQbkxrMlkycEsyVDhZRHl2SWh0R0w3T1R0M0NRWGZvc2JEQkkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VdC9uYW1lc3BhY2UiOiJvcGVuc2hpZnQtbW9uaXRvcmluZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJwcm9tZXRoZXVzLWs4cy10b2tlbi10cTh6cyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmjZS1hY2NvdW50Lm5hbWUiOiJwcm9tZXRoZXVzLWs4cyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjgxN2ZmNGViLTN..."
  },
  ...

Worked as expected, so moving the bug to VERIFIED.
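As a side note, the same audit files under /var/log on the masters can typically also be fetched without a node shell via oc adm node-logs. A sketch (the node name below is a placeholder, and this assumes the kubelet exposes the oauth-apiserver log directory):

# List the oauth-apiserver audit log files on all masters.
$ oc adm node-logs --role=master --path=oauth-apiserver/

# Read one file from a specific master and inspect the first event.
$ oc adm node-logs <master-node-name> --path=oauth-apiserver/audit.log | head -1 | jq .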
Ke Wang, thanks for verifying. But the check after "Checked the OAuth tokens logged in audit.log" is incorrect. Per the PR https://github.com/openshift/library-go/pull/894/files:

- level: RequestResponse
  verbs: ["create", "update", "patch"]
  resources:
  - group: "user.openshift.io"
    resources: ["identities"]
  - group: "oauth.openshift.io"
    resources: ["oauthaccesstokens", "oauthauthorizetokens"]

You should check oauthaccesstokens instead.

First, oc login to ensure a token is created:
$ oc login -u testuser-40 -p xxx

Then check the oauthaccesstokens as cluster-admin:
$ oc get oauthaccesstoken
sha256~XZLcOmRYF9cTjmpDv4XVb9hOVIEgOp30CUBgfy1i69w   testuser-40   openshift-challenging-client   2020-09-27T11:27:47Z   ...

Then check on all masters:
$ grep -lr XZLcOmRYF9cTjmpDv4XVb9hOVIEgOp30CUBgfy1i69w /var/log/oauth-apiserver /var/log/openshift-apiserver /var/log/kube-apiserver

Got:
/var/log/oauth-apiserver/audit.log

$ grep XZLcOmRYF9cTjmpDv4XVb9hOVIEgOp30CUBgfy1i69w /var/log/oauth-apiserver/audit.log
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"3d4e9862-8b5c-4c3b-a7d9-ccde130a4bd0","stage":"ResponseComplete","requestURI":"/apis/oauth.openshift.io/v1/oauthaccesstokens/sha256~XZLcOmRYF9cTjmpDv4XVb9hOVIEgOp30CUBgfy1i69w","verb":"get","user":{"username":"system:apiserver","groups":["system:masters","system:authenticated"]},"sourceIPs":["::1","10.128.0.1"],"userAgent":"kube-apiserver/v1.19.0+e465e66 (linux/amd64) kubernetes/e465e66","objectRef":{"resource":"oauthaccesstokens","name":"sha256~XZLcOmRYF9cTjmpDv4XVb9hOVIEgOp30CUBgfy1i69w","apiGroup":"oauth.openshift.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2020-09-27T11:32:41.209680Z","stageTimestamp":"2020-09-27T11:32:41.221378Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}}

You should see that "requestURI":"/apis/oauth.openshift.io/v1/oauthaccesstokens/sha256~XZLcOmRYF9cTjmpDv4XVb9hOVIEgOp30CUBgfy1i69w" is audited. In versions < 4.6 this should not be allowed, because tokens in versions < 4.6 are not in sha256 format and are thus sensitive.
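To scan a whole audit log for these resources instead of grepping for one known token name, a jq filter along these lines should work (a sketch; the field names are taken from the audit event shown above):

# Print timestamp, verb, object name (sha256~...) and requesting user for every
# audited request against the OAuth token resources.
$ cat /var/log/oauth-apiserver/audit.log \
  | jq -r 'select(.objectRef.apiGroup == "oauth.openshift.io"
                  and (.objectRef.resource == "oauthaccesstokens"
                       or .objectRef.resource == "oauthauthorizetokens"))
           | [.requestReceivedTimestamp, .verb, .objectRef.name, .user.username]
           | @tsv'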
In addition, per the PRs, you should also verify that oc get authentication.operator cluster -o yaml uses:

  audit-policy-file:
  - /var/run/configmaps/audit/secure-oauth-storage-default.yaml

If verifying further, you should also verify upgrade and downgrade, as Stefan says in https://github.com/openshift/cluster-openshift-apiserver-operator/pull/392#issue-487260262:
After downgrade to 4.5, there is a controller removing the annotation.
After upgrade from 4.5, the annotation is not set. The admin can set it manually after having made sure that no non-sha256 tokens exist.

I think Stefan's whole statement in https://github.com/openshift/cluster-openshift-apiserver-operator/pull/392#issue-487260262 is worth including in the release notes; I will confirm with Stefan in #forum-apiserver and CC you (Ke Wang).
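A non-interactive way to check both items could look like this (a sketch; the grep pattern and the annotation key are taken from the comments above):

# Confirm the operator config references the secure-oauth-storage default audit policy.
$ oc get authentication.operator cluster -o yaml | grep -A1 'audit-policy-file'

# Confirm the secure-token-storage annotation on the APIServer config (expected: "true").
$ oc get apiserver cluster -o json | jq -r '.metadata.annotations["oauth-apiserver.openshift.io/secure-token-storage"]'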
(In reply to Xingxing Xia from comment #6)
> In addition, per the PRs, you should also verify that oc get
> authentication.operator cluster -o yaml uses:
>   audit-policy-file:
>   - /var/run/configmaps/audit/secure-oauth-storage-default.yaml
>
> If verifying further, you should also verify upgrade and downgrade, as
> Stefan says in
> https://github.com/openshift/cluster-openshift-apiserver-operator/pull/392#issue-487260262:
> After downgrade to 4.5, there is a controller removing the annotation.

Bug https://bugzilla.redhat.com/show_bug.cgi?id=1879492#c0 will track this.

> After upgrade from 4.5, the annotation is not set. The admin can set it
> manually after having made sure that no non-sha256 tokens exist.
>
> I think Stefan's whole statement in
> https://github.com/openshift/cluster-openshift-apiserver-operator/pull/392#issue-487260262
> is worth including in the release notes; I will confirm with Stefan in
> #forum-apiserver and CC you (Ke Wang).
Tried the upgrade from 4.5 to 4.6; the result is as expected.

$ oc get clusterversion -o json | jq ".items[0].status.history"
[
  {
    "completionTime": "2020-09-30T03:17:47Z",
    "image": "registry.svc.ci.openshift.org/ocp/release:4.6.0-0.nightly-2020-09-29-170625",
    "startedTime": "2020-09-30T02:13:08Z",
    "state": "Completed",
    "verified": false,
    "version": "4.6.0-0.nightly-2020-09-29-170625"
  },
  {
    "completionTime": "2020-09-30T00:42:08Z",
    "image": "quay.io/openshift-release-dev/ocp-release@sha256:8d104847fc2371a983f7cb01c7c0a3ab35b7381d6bf7ce355d9b32a08c0031f0",
    "startedTime": "2020-09-30T00:12:59Z",
    "state": "Completed",
    "verified": false,
    "version": "4.5.13"
  }
]

After the upgrade to 4.6, edit the apiserver config and add the following annotation:

$ oc edit apiserver cluster
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  annotations:
    oauth-apiserver.openshift.io/secure-token-storage: "true"

Repeated the steps from https://bugzilla.redhat.com/show_bug.cgi?id=1878648#c5:

# pwd
/var/log/oauth-apiserver
# grep 'sha256' audit.log
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"Metadata","auditID":"9c0ffe13-f9a5-4233-a910-a8cece213114","stage":"ResponseComplete","requestURI":"/apis/oauth.openshift.io/v1/oauthauthorizetokens/sha256~PVDsiYuMxM7jtK3fTZ3G0u_z3O7_A9Bf4CnSMs9oN_8","verb":"delete","user":{"username":"system:serviceaccount:openshift-authentication:oauth-openshift","groups":["system:serviceaccounts","system:serviceaccounts:openshift-authentication","system:authenticated"]},"sourceIPs":["10.129.0.6","10.129.0.1"],"userAgent":"oauth-server/v0.0.0 (linux/amd64) kubernetes/$Format","objectRef":{"resource":"oauthauthorizetokens","name":"sha256~PVDsiYuMxM7jtK3fTZ3G0u_z3O7_A9Bf4CnSMs9oN_8","apiGroup":"oauth.openshift.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"status":"Success","code":200},"requestReceivedTimestamp":"2020-09-30T06:52:37.928317Z","stageTimestamp":"2020-09-30T06:52:37.946143Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:openshift:openshift-authentication\" of ClusterRole \"cluster-admin\" to ServiceAccount \"oauth-openshift/openshift-authentication\""}}
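For a scripted rerun of the same step, the annotation could also be applied without an interactive editor (a sketch; as noted in comment #6, only set it after confirming no non-sha256 tokens remain):

# Apply the annotation non-interactively (equivalent to the oc edit step above).
$ oc annotate apiserver cluster oauth-apiserver.openshift.io/secure-token-storage=true

# Confirm it is set.
$ oc get apiserver cluster -o json | jq '.metadata.annotations'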
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196