Bug 1445425
| Field | Value |
|---|---|
| Summary | Visualization errors with multiple indices |
| Product | OpenShift Container Platform |
| Component | Logging |
| Status | CLOSED ERRATA |
| Severity | urgent |
| Priority | high |
| Version | 3.4.1 |
| Target Milestone | --- |
| Target Release | 3.7.0 |
| Hardware | Unspecified |
| OS | Unspecified |
| Reporter | Ruben Romero Montes <rromerom> |
| Assignee | Jeff Cantrill <jcantril> |
| QA Contact | Anping Li <anli> |
| CC | anli, aos-bugs, jcantril, pportant, rmeggins, rromerom, wsun |
| Type | Bug |
| Doc Type | Bug Fix |
| Last Closed | 2017-11-28 21:53:29 UTC |

Doc Text:

> Cause: Role permissions were generated based upon the project.
> Consequence: Queries were disallowed if they involved multiple indices.
> Fix: Generate role permissions based on the user rather than the project.
> Result: Users can query across multiple indices.
Description
Ruben Romero Montes, 2017-04-25 16:06:17 UTC
Created attachment 1273959 [details]
dashboard error
This should be a simple update to the searchguard rules.

Are you able to view each individual visualization as that user without errors?

@Peter, yes, the visualizations show data independently, but the mentioned error appears when both are added to the same dashboard.

The reproducer is simple:

* Copy the JSON from https://bugzilla.redhat.com/show_bug.cgi?id=1445425#c0 to a file, e.g. req.json.
* Create a "normal" user (assumes the AllowAll identity provider):

```shell
oc login --username=system:admin
oc login --username=loguser --password=loguser
oc login --username=system:admin
# add the user to project logging
oc project logging > /dev/null
oadm policy add-role-to-user view loguser
# add the user to project default
oc project default > /dev/null
oadm policy add-role-to-user view loguser
oc project logging > /dev/null
```

* Get a token for loguser:

```shell
oc login --username=loguser --password=loguser
test_token="$(oc whoami -t)"
test_name="$(oc whoami)"
test_ip="127.0.0.1"
oc login --username=system:admin > /dev/null
oc project logging > /dev/null
```

* Get the es pod: `oc get pods`
* curl es:

```shell
cat req.json | oc exec -i $espod -- curl -s -k \
  -H "X-Proxy-Remote-User: $test_name" \
  -H "Authorization: Bearer $test_token" \
  -H "X-Forwarded-For: 127.0.0.1" \
  https://localhost:9200/_msearch -XPOST --data-binary @- | python -mjson.tool
```

```json
{
    "error": {
        "reason": "no permissions for indices:data/read/msearch",
        "root_cause": [
            {
                "reason": "no permissions for indices:data/read/msearch",
                "type": "security_exception"
            }
        ],
        "type": "security_exception"
    },
    "status": 403
}
```

However, it's not just _msearch that is the problem:

```shell
cat simple-req.json
{"index" : "project.logging.*"}
{"query" : {"match_all" : {}}}
```

```shell
cat simple-req.json | oc exec -i $espod -- curl -s -k \
  -H "X-Proxy-Remote-User: $test_name" \
  -H "Authorization: Bearer $test_token" \
  -H "X-Forwarded-For: 127.0.0.1" \
  https://localhost:9200/project.logging.*/com.redhat.viaq.common/_msearch \
  -XPOST --data-binary @- | python -mjson.tool | more
```

```json
{
    "responses": [
        {
            "_shards": {
                "failed": 0,
                "successful": 1,
                "total": 1
            },
            "hits": {
                "hits": [
                    {
                        "_id": "AVzSB2v3si2tF9ff4uSX",
                        "_index": "project.logging.5bef27cb-579b-11e7-a170-0e4e11f5cce4.2017.06.22",
                        "_score": 1.0,
                        "_source": {
                            "@timestamp": "2017-06-22T22:58:31.032594+00:00",
...
```

This works too:

```shell
cat simple-req.json | oc exec -i $espod -- curl -s -k \
  -H "X-Proxy-Remote-User: $test_name" \
  -H "Authorization: Bearer $test_token" \
  -H "X-Forwarded-For: 127.0.0.1" \
  https://localhost:9200/_msearch -XPOST --data-binary @- | python -mjson.tool | more
```

Nope, a regular user cannot be added to the default project :-( I added the user to the kube-public project and manually added an index to that project, and I can reproduce the problem.

This works:

```json
{"index":["project.logging.*","project.logging.*"],"search_type":"count","ignore_unavailable":true}
{"query":{"filtered":{"query":{"query_string":{"query":"*","analyze_wildcard":true}},"filter":{"bool":{"must":[{"query":{"query_string":{"analyze_wildcard":true,"query":"*"}}},{"range":{"@timestamp":{"gte":1484067542156,"lte":1493135942156,"format":"epoch_millis"}}}],"must_not":[]}}}},"size":0,"aggs":{}}
```

This works too:

```json
{"index":["project.kube-public.*","project.kube-public.*"],"search_type":"count","ignore_unavailable":true}
{"query":{"filtered":{"query":{"query_string":{"query":"*","analyze_wildcard":true}},"filter":{"bool":{"must":[{"query":{"query_string":{"analyze_wildcard":true,"query":"*"}}},{"range":{"@timestamp":{"gte":1484067542156,"lte":1493135942156,"format":"epoch_millis"}}}],"must_not":[]}}}},"size":0,"aggs":{}}
```

This does not work:

```json
{"index":["project.logging.*","project.kube-public.*"],"search_type":"count","ignore_unavailable":true}
{"query":{"filtered":{"query":{"query_string":{"query":"*","analyze_wildcard":true}},"filter":{"bool":{"must":[{"query":{"query_string":{"analyze_wildcard":true,"query":"*"}}},{"range":{"@timestamp":{"gte":1484067542156,"lte":1493135942156,"format":"epoch_millis"}}}],"must_not":[]}}}},"size":0,"aggs":{}}
```

That is, doing a multi-search with two different indices
specified. Investigating more...

The ACLs can't be updated dynamically; the openshift-elasticsearch-plugin wipes them out :P The plugin will need to be modified with something like https://github.com/jcantrill/openshift-elasticsearch-plugin/commit/63b77f891630a3762e8e2f7d760b452cf774d676

It isn't a problem with search guard or the ACLs themselves. The problem is the way the openshift-elasticsearch-plugin dynamically creates the roles and role mappings. When a user logs in, the plugin creates searchguard roles for each namespace, like this (/.searchguard.$esdc/roles/0):

```json
"gen_project_kube-public_b88b73bf-5aa7-11e7-baab-0e0c721e66b4": {
    "cluster": [],
    "indices": {
        "kube-public?b88b73bf-5aa7-11e7-baab-0e0c721e66b4?*": {
            "*": [
                "indices:admin/validate/query*",
                "indices:admin/get*",
                "indices:admin/mappings/fields/get*",
                "indices:data/read*"
            ]
        },
        "project?kube-public?b88b73bf-5aa7-11e7-baab-0e0c721e66b4?*": {
            "*": [
                "indices:admin/validate/query*",
                "indices:admin/get*",
                "indices:admin/mappings/fields/get*",
                "indices:data/read*"
            ]
        }
    }
},
"gen_project_logging_bd363195-5aa7-11e7-baab-0e0c721e66b4": {
    "cluster": [],
    "indices": {
        "logging?bd363195-5aa7-11e7-baab-0e0c721e66b4?*": {
            "*": [
                "indices:admin/validate/query*",
                "indices:admin/get*",
                "indices:admin/mappings/fields/get*",
                "indices:data/read*"
            ]
        },
        "project?logging?bd363195-5aa7-11e7-baab-0e0c721e66b4?*": {
            "*": [
                "indices:admin/validate/query*",
                "indices:admin/get*",
                "indices:admin/mappings/fields/get*",
                "indices:data/read*"
            ]
        }
    }
},
```

The plugin also assigns the user each of the roles that correspond to the user's project membership. In this case, the user is a member of "kube-public" and "logging" (/.searchguard.$esdc/rolesmapping/0):

```json
"gen_project_kube-public_b88b73bf-5aa7-11e7-baab-0e0c721e66b4": {
    "users": [ "loguser" ]
},
"gen_project_logging_bd363195-5aa7-11e7-baab-0e0c721e66b4": {
    "users": [ "loguser" ]
},
```

If the request is for indices ["project.logging.*","project.kube-public.*"], searchguard will evaluate the roles for the user
until it finds a role that matches _both_ indices. That is, what we really want is something like this:

```json
"gen_project_something": {
    "cluster": [],
    "indices": {
        "project?kube-public?b88b73bf-5aa7-11e7-baab-0e0c721e66b4?*": {
            "*": [
                "indices:admin/validate/query*",
                "indices:admin/get*",
                "indices:admin/mappings/fields/get*",
                "indices:data/read*"
            ]
        },
        "project?logging?bd363195-5aa7-11e7-baab-0e0c721e66b4?*": {
            "*": [
                "indices:admin/validate/query*",
                "indices:admin/get*",
                "indices:admin/mappings/fields/get*",
                "indices:data/read*"
            ]
        }
    }
},
```

because otherwise searchguard will never find a single role that matches _all_ of the indices specified in the search request.

I've only verified this behavior in 2.4.4.x. It is possible it was different in 1.x (OSE 3.3 and earlier), and it is possible it is different in 5.x; I don't know, I haven't tried.

I'm not sure there is an acceptable workaround. A dashboard, visualization, or saved search that does not span multiple projects should work. Otherwise, granting cluster-reader rights to the user will also work, but that allows the user to access _all_ namespaces, including .operations, which may not be acceptable in this case. We need to go back to the drawing board and redesign how we do multi-tenancy to accommodate multi-project search requests.

We're not going to be able to fix this for 3.6 in a satisfactory manner, because it will require rearchitecting the way we do ACLs in search guard. However, we will provide a workaround so that the customer can, on a case-by-case or user-by-user basis, enable multi-index access in Kibana. I'm working on a script that can be used to do this.

Link to script: https://raw.githubusercontent.com/richm/origin-aggregated-logging/0be0fa2fd9e825bd7d272f3a960b59cd33e85e56/hack/enable-kibana-msearch-access.sh
PR: https://github.com/openshift/origin-aggregated-logging/pull/507

The script should work in 3.4 and later. The arguments are the name of the user, followed by a list of projects that the user should be allowed to search. If you specify a user name with no projects, that user's access is removed. We will address this with a better plan in 3.7.

This is addressed in 3.7 with [1], which moves ACLs to be user based, or with [2], which allows admins to create roles manually.

[1] https://github.com/openshift/origin-aggregated-logging/pull/571
[2] https://github.com/openshift/origin-aggregated-logging/pull/507

Tested on openshift v3.7.0-0.126.6; verification was blocked by https://bugzilla.redhat.com/show_bug.cgi?id=1492576

The dashboard can display virtualindex1 and virtualindex2 with v3.7.0-0.144.0.0.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3188
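As a hedged illustration of the workaround script's interface described above (the script name comes from the linked PR; the user and project names are examples, and the script must first be fetched from the origin-aggregated-logging repo):

```shell
# Grant loguser multi-index search access spanning the logging and
# kube-public projects:
./enable-kibana-msearch-access.sh loguser logging kube-public

# Revoke: pass the user name with no projects
./enable-kibana-msearch-access.sh loguser
```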
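The role-evaluation behavior described above can be sketched in shell. This is a minimal illustration, not searchguard's actual code: it assumes a request is allowed only when a single role's index patterns (shell-style `?`/`*` wildcards, mirroring the generated patterns shown earlier) cover every index in the request, and the concrete index names below are made up for the example.

```shell
set -f  # disable pathname expansion so role patterns stay literal

# check ROLE_PATTERNS INDEX... : succeed only if this one role covers
# EVERY requested index; permissions are never combined across roles.
check() {
  pats=$1; shift                 # the role's index patterns, space-separated
  for idx in "$@"; do
    ok=1
    for p in $pats; do
      case "$idx" in $p) ok=0; break ;; esac
    done
    [ "$ok" -eq 0 ] || return 1  # one uncovered index fails the whole role
  done
  return 0
}

# hypothetical concrete indices behind project.logging.* and project.kube-public.*
indices="project.logging.abc.2017.06.22 project.kube-public.def.2017.06.22"

# the generated per-project roles each cover only their own project:
check "project?logging?*"     $indices || echo "gen_project_logging: denied"
check "project?kube-public?*" $indices || echo "gen_project_kube-public: denied"

# a single merged role covering both projects, as proposed above:
check "project?logging?* project?kube-public?*" $indices && echo "merged role: allowed"
```

This is why the two single-project msearch bodies succeed while the mixed one gets a 403: no single generated role spans both projects.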