Description of problem:
Users with view access are not able to see all pod logs.

Version-Release number of selected component (if applicable): 3.11.98

How reproducible:
- A user with view access is not able to see all project logs in Kibana.
- Also, after logging in to Kibana, only the default project indices are visible in the dropdown menu.
- Because the user only has view access, they are not allowed to create an index pattern for project.*

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
Please verify logs are actually being collected for the user's projects:

  oc exec -c elasticsearch $espod -- indices | grep $userproject

Index patterns are only seeded for indices where we have gathered logs from pods in a given namespace. Additionally, creating an index pattern manually like 'project.*' will likely fail to retrieve logs, with a 403 error, unless the user is a cluster-admin [1].

[1] https://github.com/openshift/origin-aggregated-logging/blob/master/docs/access-control.md#role-definitions-and-permissions
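As a sketch, the filter above can be exercised without a cluster. On a live deployment the listing would come from 'oc exec -c elasticsearch $espod -- indices'; here a sample listing (taken from the output later in this report) stands in for it, and the project name is a placeholder:

```shell
# Illustrative only: simulate filtering the `indices` output for one project.
# On a real cluster, replace the literal listing with:
#   indices_output=$(oc exec -c elasticsearch $espod -- indices)
indices_output='green open project.test1.42a3e029-cd78-11e9-b5f4-fa163e557f5c.2019.09.02
green open project.test3.ef4974b7-cd7c-11e9-b5f4-fa163e557f5c.2019.09.02'

userproject="test3"
# The trailing dot avoids matching a project whose name is a prefix of another.
echo "$indices_output" | grep "project.${userproject}."
```

If the grep returns nothing, no logs have been gathered for that namespace, and no index pattern will be seeded for it.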
Please provide the information asked in https://bugzilla.redhat.com/show_bug.cgi?id=1746482#c2
I can reproduce this issue, but only when the user is granted the "view" role on the project:

$ oc rsh logging-es-data-master-eecal10u-1-bms2m indices | grep test
Defaulting container name to elasticsearch.
Use 'oc describe pod/logging-es-data-master-eecal10u-1-bms2m -n openshift-logging' to see all of the containers in this pod.
green open project.test1.42a3e029-cd78-11e9-b5f4-fa163e557f5c.2019.09.02 LouvL1rxRxmBwvsP-yUjhA 1 0   47 0 0 0
green open project.test2.2733369d-cd79-11e9-b5f4-fa163e557f5c.2019.09.02 6q8eq9ANR7-1FemZeQ0ieA 1 0 1421 0 0 0
green open project.test3.ef4974b7-cd7c-11e9-b5f4-fa163e557f5c.2019.09.02 qYrLq_JiQXCeymuA5JYv9A 1 0  144 0 0 0

$ oc policy add-role-to-user view nnosenzo -n test3
role "view" added: "nnosenzo"

$ oc login -u nnosenzo
$ oc get projects
NAME    DISPLAY NAME   STATUS
test1                  Active
test2                  Active
test3                  Active

Findings: nnosenzo can only display logs for test1 and test2 (both owned by nnosenzo) but not for test3, for which the "view" role was granted. Attaching a Kibana screenshot.
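For reference, the indices listed above follow the naming scheme project.<name>.<namespace-uid>.<date>. A small illustrative shell snippet (not part of any tooling, purely a sketch) to pull the project name back out of an index name:

```shell
# Illustrative only: extract the project name from an ES index name of the
# form project.<name>.<uid>.<date>, as seen in the listing above.
index="project.test3.ef4974b7-cd7c-11e9-b5f4-fa163e557f5c.2019.09.02"
project=$(echo "$index" | cut -d. -f2)
echo "$project"
```

This is handy when cross-checking which namespaces actually have indices versus which ones a user expects to see in Kibana.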
Created attachment 1610753 [details] Kibana printscreen
Images used: v3.11.135
*** Bug 1746377 has been marked as a duplicate of this bug. ***
Permissions to cluster logging are only loosely based on OpenShift RBAC, as RBAC does not map directly to what is possible with the security library used in the Elasticsearch deployment. Permissions and their seeding are described here [1]. Data is also accessible directly via the ES endpoint exposed as a route; I will shortly post how to accomplish this.

[1] https://github.com/openshift/origin-aggregated-logging/blob/master/docs/access-control.md
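As a sketch of what direct access through the route looks like (the route host and index pattern below are placeholders, not values from this cluster):

```shell
# Hypothetical sketch: build a query URL for the Elasticsearch endpoint
# exposed as a route. ES_ROUTE and INDEX are assumptions; adjust for
# your cluster.
ES_ROUTE="logging-es.apps.example.com"
INDEX="project.test3.*"
URL="https://${ES_ROUTE}/${INDEX}/_search?size=10"
echo "$URL"
# With a valid user token you would then run (not executed here):
#   curl -sk -H "Authorization: Bearer $(oc whoami -t)" "$URL"
```

The token identifies the user to the ES security plugin, so the same document-level permissions described in the access-control docs apply to direct queries as well.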
In regards to the originally reported issue against v3.11.98: this is not a bug. As identified in [1], this occurs when a user does not have an admin role on the projects in question. Per my previous comment, and as described in detail in [2], index patterns and permissions are generated from the list of projects visible to a user, as if the user were to execute 'oc get projects'.

The solution is to grant the user admin permissions on the project in question:

  oc policy add-role-to-user admin $username -n $namespace

or any other role which would cause the project to be returned by an 'oc get projects' call.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1746482#c5
[2] https://github.com/openshift/origin-aggregated-logging/blob/master/docs/access-control.md

Moving on to investigating a later release (v3.11.141), which may be exhibiting an actual bug as reported here: https://bugzilla.redhat.com/show_bug.cgi?id=1746377
Just noting here that the set of projects returned by `oc get projects` is a superset of the set of projects the user has at least view access to. Our OpenShift deployment has granted "cluster-monitoring-view" to all users for access to Grafana and cluster metrics. This grants `get` on all projects but no permissions within them. A side effect of basing permissions on what is returned from `oc get projects` is potentially granting view access to all project logs. This is probably not intended when the only permission granted was cluster-monitoring-view.
(In reply to Matthew Sweikert from comment #27)
> 15 various customer cases linked with many of them at high severity. Am I
> missing something or are we only getting work-around suggestions (that are
> not actually working)? Multiple customers have escalated this issue and has
> significant impact. Please let me know what is necessary to escalate this
> to a higher priority within engineering.

Note [1]. This suggestion explicitly identifies the workaround that resolves the issue against v3.11.98. If the customer has a later version of logging, they may be subject to [2], which has a workaround of manually creating the pattern until a new release.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1746482#c22
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1746377#c11

Please verify the version which affects your customer, as this is critical in understanding what will and will not work.
Closing NOTABUG specifically for v3.11.98. Please use https://bugzilla.redhat.com/show_bug.cgi?id=1752853 if appropriate to address index-pattern seeding.