Description of problem:
Cluster-admin users can't view a project's logs after the project is deleted. I searched in ES, and the project logs are still there.

Curl with a cluster-admin user's token:
$ oc exec elasticsearch-cdm-tn50nwm0-1-77bd88df96-4jp5f -- curl -sk -X GET -H "Authorization: Bearer DQiSBhgpV_Flucm4FIXiQsPo6gCEajfPozbTcDktRO0" -H "Content-Type: application/json" "https://172.30.42.23:9200/app*/_count?pretty" -d '{"query": { "match": { "kubernetes.namespace_name": "qitang1" } } }'
Defaulting container name to elasticsearch.
Use 'oc describe pod/elasticsearch-cdm-tn50nwm0-1-77bd88df96-4jp5f -n openshift-logging' to see all of the containers in this pod.
{
  "count" : 0,
  "_shards" : {
    "total" : 3,
    "successful" : 3,
    "skipped" : 0,
    "failed" : 0
  }
}

Curl without a token:
$ oc exec elasticsearch-cdm-tn50nwm0-1-77bd88df96-4jp5f -- es_util --query=app*/_count?pretty -d '{"query": {"match": {"kubernetes.namespace_name": "qitang1"}}}'
Defaulting container name to elasticsearch.
Use 'oc describe pod/elasticsearch-cdm-tn50nwm0-1-77bd88df96-4jp5f -n openshift-logging' to see all of the containers in this pod.
{
  "count" : 44,
  "_shards" : {
    "total" : 3,
    "successful" : 3,
    "skipped" : 0,
    "failed" : 0
  }
}

Version-Release number of selected component (if applicable):
Logging images are from 4.5.0-0.ci-2020-05-18-204038

How reproducible:
Always

Steps to Reproduce:
1. Deploy logging.
2. Create some projects, then create some pods to generate logs.
3. Log in to the Kibana console as a cluster-admin user and check the project logs from step 2; the user can view all of the projects' logs.
4. Delete the projects created in step 2.
5. Log in to Kibana as the same user as in step 3 and check the project logs again; the user can't view the logs of the deleted projects.

Actual results:

Expected results:

Additional info:
(In reply to Qiaoling Tang from comment #0)
> Description of problem:
> The cluster-admin users couldn't view the project logs after the project is
> deleted. I searched in the ES, the project logs are still there.

By "cluster admin", you are referring to a user who can satisfy the query described here [1], i.e. an actual user who can see the log resource in the default namespace.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1832668#c1

> Curl with cluster admin user token:

If you are attempting to verify multi-tenant aspects and restrictions, then this is not a valid test, since it bypasses the proxy. All tests should be executed against the logging service, not on the container.

> $ oc exec elasticsearch-cdm-tn50nwm0-1-77bd88df96-4jp5f -- curl -sk -X GET
> -H "Authorization: Bearer DQiSBhgpV_Flucm4FIXiQsPo6gCEajfPozbTcDktRO0" -H
> "Content-Type: application/json"
> "https://172.30.42.23:9200/app*/_count?pretty" -d '{"query": { "match": {
> "kubernetes.namespace_name": "qitang1" } } }'
> Defaulting container name to elasticsearch.
> Use 'oc describe pod/elasticsearch-cdm-tn50nwm0-1-77bd88df96-4jp5f -n
> openshift-logging' to see all of the containers in this pod.
> {
>   "count" : 0,
>   "_shards" : {
>     "total" : 3,
>     "successful" : 3,
>     "skipped" : 0,
>     "failed" : 0
>   }
> }

> Curl without token:

This is not a valid test. The 'es_util' tool on the image uses the admin certs to perform the query, which by definition has access to everything in ES.

> $ oc exec elasticsearch-cdm-tn50nwm0-1-77bd88df96-4jp5f -- es_util
> --query=app*/_count?pretty -d '{"query": {"match":
> {"kubernetes.namespace_name": "qitang1"}}}'
> Defaulting container name to elasticsearch.
> Use 'oc describe pod/elasticsearch-cdm-tn50nwm0-1-77bd88df96-4jp5f -n
> openshift-logging' to see all of the containers in this pod.
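The distinction above (query through the proxy with a user token, rather than exec'ing curl inside the ES container) can be sketched as follows. This is a hypothetical illustration, not taken from the report: the route host and the helper function name are assumptions for the sketch; only the index pattern and query body come from comment #0.

```shell
# Hypothetical helper: build a _count URL against an exposed logging route,
# so that openshift-elasticsearch-proxy evaluates the caller's bearer token
# instead of the admin certs baked into the ES image.
es_count_url() {
  # $1 = route host (assumed/example value below), $2 = index pattern
  printf 'https://%s/%s/_count?pretty' "$1" "$2"
}

url=$(es_count_url "elasticsearch.apps.example.com" "app*")
echo "$url"
# → https://elasticsearch.apps.example.com/app*/_count?pretty

# On a real cluster the request would then be sent with the user's token:
#   curl -sk -H "Authorization: Bearer $(oc whoami -t)" \
#        -H "Content-Type: application/json" "$url" \
#        -d '{"query": {"match": {"kubernetes.namespace_name": "qitang1"}}}'
```

A request shaped this way exercises the same RBAC path Kibana does, which is what makes it a valid multi-tenancy test.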
> {
>   "count" : 44,
>   "_shards" : {
>     "total" : 3,
>     "successful" : 3,
>     "skipped" : 0,
>     "failed" : 0
>   }
> }

Closing NOTABUG, as users who are able to use 'oc exec' against ES by nature already have full access to cluster logs. Please test against the service directly and reopen the bz if applicable.
(In reply to Jeff Cantrill from comment #1)
> (In reply to Qiaoling Tang from comment #0)
> > Description of problem:
> > The cluster-admin users couldn't view the project logs after the project is
> > deleted. I searched in the ES, the project logs are still there.
>
> By cluster admin, you are referring to a user who can satisfy the query as
> described here [1]. An actual user who can see the log resource in the
> default namespace
>
> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1832668#c1
>
> > Curl with cluster admin user token:
>
> If you are attempting to verify multi-tenant aspects and restrictions then
> this is not a valid test since it bypasses the proxy. All tests should be
> executed against the logging service and not on the container.

172.30.42.23 is the Elasticsearch service IP. I ran the curl command inside the ES pod because this is an internal IP, so I couldn't reach it from outside the OCP cluster.

> > $ oc exec elasticsearch-cdm-tn50nwm0-1-77bd88df96-4jp5f -- curl -sk -X GET
> > -H "Authorization: Bearer DQiSBhgpV_Flucm4FIXiQsPo6gCEajfPozbTcDktRO0" -H
> > "Content-Type: application/json"
> > "https://172.30.42.23:9200/app*/_count?pretty" -d '{"query": { "match": {
> > "kubernetes.namespace_name": "qitang1" } } }'
> > Defaulting container name to elasticsearch.
> > Use 'oc describe pod/elasticsearch-cdm-tn50nwm0-1-77bd88df96-4jp5f -n
> > openshift-logging' to see all of the containers in this pod.
> > {
> >   "count" : 0,
> >   "_shards" : {
> >     "total" : 3,
> >     "successful" : 3,
> >     "skipped" : 0,
> >     "failed" : 0
> >   }
> > }
>
> > Curl without token:
>
> This is not a valid test. The tool on the image 'es_util' utilizes the
> admin certs to perform the query which by definition has access to
> everything in ES.
I put the result here because I want to show that the project logs still exist in ES after I deleted the project, yet when I curl with the cluster-admin user's token it returns `count: 0`, and I couldn't find these logs in the Kibana console with the cluster-admin user after deleting the project.

> > $ oc exec elasticsearch-cdm-tn50nwm0-1-77bd88df96-4jp5f -- es_util
> > --query=app*/_count?pretty -d '{"query": {"match":
> > {"kubernetes.namespace_name": "qitang1"}}}'
> > Defaulting container name to elasticsearch.
> > Use 'oc describe pod/elasticsearch-cdm-tn50nwm0-1-77bd88df96-4jp5f -n
> > openshift-logging' to see all of the containers in this pod.
> > {
> >   "count" : 44,
> >   "_shards" : {
> >     "total" : 3,
> >     "successful" : 3,
> >     "skipped" : 0,
> >     "failed" : 0
> >   }
> > }
>
> Closing NOTABUG as users who are able to utilize 'oc exec' against ES by
> nature already have full access to cluster logs. Please test against the
> service directly and reopen the bz if applicable

I ran these commands with the kubeconfig file, which is why I could exec into the ES pod. I'm sorry for misleading you.
(In reply to Qiaoling Tang from comment #2)
> (In reply to Jeff Cantrill from comment #1)
> > (In reply to Qiaoling Tang from comment #0)
> > > Description of problem:
> > > The cluster-admin users couldn't view the project logs after the project is
> > > deleted. I searched in the ES, the project logs are still there.
> >
> > By cluster admin, you are referring to a user who can satisfy the query as
> > described here [1]. An actual user who can see the log resource in the
> > default namespace
> >
> > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1832668#c1

Can you please verify this user is considered a cluster admin by confirming the following answers 'yes':

$ oc auth can-i get pods --subresource=log -n default --as=$username

You will note in the referenced bug the misleading responses for "kube:admin".

> I ran these commands with the kubeconfig file, so I could exec in the ES pod.
>
> I'm sorry for misleading you.

I misread the initial query.
$ oc exec elasticsearch-cdm-df8gkojm-1-fc7d77bc-f4wjx -- es_util --query=app*/_count?pretty -d '{"query": {"match": {"kubernetes.namespace_name": "qitang"}}}'
Defaulting container name to elasticsearch.
Use 'oc describe pod/elasticsearch-cdm-df8gkojm-1-fc7d77bc-f4wjx -n openshift-logging' to see all of the containers in this pod.
{
  "count" : 1113,
  "_shards" : {
    "total" : 3,
    "successful" : 3,
    "skipped" : 0,
    "failed" : 0
  }
}

$ oc whoami -t
wpntw92VFSWHbI_v3xs7mFoJb0n-pacW65SHd1uFIXI

$ oc rsh cluster-logging-operator-6d8f84f956-hh8ff
sh-4.2$ curl -sk -X GET -H "Authorization: Bearer wpntw92VFSWHbI_v3xs7mFoJb0n-pacW65SHd1uFIXI" -H "Content-Type: application/json" "https://172.30.171.57:9200/app*/_count?pretty" -d '{"query": { "match": { "kubernetes.namespace_name": "qitang" } } }'
{
  "count" : 0,
  "_shards" : {
    "total" : 3,
    "successful" : 3,
    "skipped" : 0,
    "failed" : 0
  }
}
sh-4.2$ exit
exit

$ oc whoami
qitang

$ oc auth can-i get pods --subresource=log -n default --as=qitang
yes
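The discrepancy in the transcript above (the cert-backed es_util query sees 1113 documents while the token-backed query sees 0) can be checked mechanically. The sketch below is illustrative only, not part of the report: the function name is hypothetical, and the sample responses are trimmed copies of the ones above. It uses only POSIX shell and sed, so it would also run inside the ES container image.

```shell
# Hypothetical helper: extract the "count" field from an Elasticsearch
# _count response so the two query paths can be compared.
extract_count() {
  # Match the `"count" : N` pair and keep only the number.
  printf '%s\n' "$1" | sed -n 's/.*"count"[[:space:]]*:[[:space:]]*\([0-9][0-9]*\).*/\1/p' | head -n 1
}

# Sample responses mirroring the transcript above (trimmed for brevity).
certed='{ "count" : 1113, "_shards" : { "total" : 3, "successful" : 3, "skipped" : 0, "failed" : 0 } }'
tokened='{ "count" : 0, "_shards" : { "total" : 3, "successful" : 3, "skipped" : 0, "failed" : 0 } }'

if [ "$(extract_count "$certed")" != "$(extract_count "$tokened")" ]; then
  echo "mismatch: certs see $(extract_count "$certed") docs, token sees $(extract_count "$tokened")"
fi
# → mismatch: certs see 1113 docs, token sees 0
```

A mismatch like this is exactly the symptom being reported: the documents exist in ES, but the token-scoped path no longer grants the cluster admin access to them once the namespace is gone.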
Moving to severity: low, as this is not a 4.5 blocker. Viewing logs from a deleted namespace is a corner case that we can fix post-4.5 if needed.
Verified by csv:clusterlogging.4.5.0-202005291637 and elasticsearch-operator.4.5.0-202005291637
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409