Please confirm you are a cluster admin; this is how that role is determined:

oc auth can-i view pods/log -n default -t $token
Please also confirm you can do this by hitting the service endpoint directly; it should be unrelated to the fact that it is exposed via a route:

curl --silent --insecure -H "Authorization: Bearer $token" "https://logging-es:9200/_cat/indices?v" | jq

This may require hitting the service endpoint by IP directly from a node unless you can resolve the service name.
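If the service name does not resolve from where curl is run, the ClusterIP can be looked up first. A minimal sketch, assuming the service is named "elasticsearch" (as shown later in this bug) and lives in the openshift-logging namespace; adjust names to your deployment:

# Look up the ClusterIP of the Elasticsearch service and query it directly
ES_IP=$( oc get svc elasticsearch -n openshift-logging -o jsonpath='{.spec.clusterIP}' )
curl --silent --insecure -H "Authorization: Bearer $token" "https://${ES_IP}:9200/_cat/indices?v" | jq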
Using the kubeadmin user:

$ oc whoami -t
whTuHxHVAOnx4wpZUt2f_RGfUDpDZlaRTkpd5tTPvZ0

$ oc auth can-i view pods/log -n default -t whTuHxHVAOnx4wpZUt2f_RGfUDpDZlaRTkpd5tTPvZ0
Error: unknown shorthand flag: 't' in -t

$ oc auth can-i view pods/log -n default
yes

$ oc get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
elasticsearch   ClusterIP   172.30.31.41   <none>        9200/TCP   112m

$ curl --silent --insecure -H "Authorization: Bearer whTuHxHVAOnx4wpZUt2f_RGfUDpDZlaRTkpd5tTPvZ0" https://172.30.31.41:9200/_cat/indices?v | python -mjson.tool
{
    "error": {
        "reason": "no permissions for [indices:monitor/stats] and User [name=kube:admin, roles=[gen_project_operations, gen_kibana_81378af5ec74f2e854679768a37b6d13cf25cbfa, gen_user_81378af5ec74f2e854679768a37b6d13cf25cbfa, prometheus, jaeger]]",
        "root_cause": [
            {
                "reason": "no permissions for [indices:monitor/stats] and User [name=kube:admin, roles=[gen_project_operations, gen_kibana_81378af5ec74f2e854679768a37b6d13cf25cbfa, gen_user_81378af5ec74f2e854679768a37b6d13cf25cbfa, prometheus, jaeger]]",
                "type": "security_exception"
            }
        ],
        "type": "security_exception"
    },
    "status": 403
}
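Note: -t is apparently not a shorthand that "oc auth can-i" accepts (it only exists on "oc whoami -t"). To run the check while authenticating with an explicit token rather than the current login, the global --token option should work. A minimal sketch:

# Run the access review authenticated with the bearer token itself
oc auth can-i view pods/log -n default --token="$( oc whoami -t )"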
I could reproduce the same issue [1] in my test env, and I verified that I can see the indices with "oc exec" [2].

[2]
oc exec $ESPOD -- curl -s -k --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key "https://localhost:9200/_cat/indices?v"
health status index                  uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .operations.2019.02.25 m1gOWTzgROSZswz6myDSfw   1   1    2619338            0      2.2gb          2.2gb
yellow open   .kibana                _A_z3C7mSzKOS1dUTsD4Kw   1   1          5            0     56.8kb         56.8kb
yellow open   .operations.2019.02.26 i7gyDz4MTNeEEDZvJwxF0g   1   1    2760813            0      3.4gb          3.4gb
yellow open   .searchguard           40q2ua1HQ62EeCF3mqsjwA   1   1          5            0     35.7kb         35.7kb

Just in case, I added the admin-key and admin-cert to the reencrypt route and reran the test, but it did not change the result. Please note that the admin-key and admin-cert are identical to the ones used in command [2].

Another note: _nodes/stats returns the same error: "type":"index_not_found_exception","reason":"no such index",...

If this is a permission problem, where should I look for debugging? Also, this should be a regression, shouldn't it? That is, the same operation should have worked on 3.11 (and older)?

[1]
curl --silent --insecure -H "Authorization: Bearer $( oc whoami -t )" "t}' )/.operations.*/_cat/indices" | jq
{
  "error": {
    "root_cause": [
      {
        "type": "index_not_found_exception",
        "reason": "no such index",
        "resource.type": "index_expression",
        "resource.id": ".operations.*",
        "index_uuid": "_na_",
        "index": ".operations.*"
      }
    ],
    "type": "index_not_found_exception",
    "reason": "no such index",
    "resource.type": "index_expression",
    "resource.id": ".operations.*",
    "index_uuid": "_na_",
    "index": ".operations.*"
  },
  "status": 404
}
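One place worth checking for the permission question: Search Guard exposes an authinfo endpoint that reports which user and backend roles a request gets mapped to, so the token-authenticated request can be compared with the admin-cert one. A sketch only, assuming the endpoint is reachable through this image's security plugin (it may be restricted):

curl --silent --insecure -H "Authorization: Bearer $( oc whoami -t )" "https://logging-es:9200/_searchguard/authinfo" | jq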
Modified the title, as this is unrelated to the route. A forthcoming PR against sg_action_group.yml will include INDICES_ALL in INDEX_ANY_OPERATIONS to allow listing of indices.
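For reference, the intended change is along these lines; this is only a sketch of the intent, not the exact file contents, since the existing members of the action group and the precise schema depend on the Search Guard version shipped in the image:

INDEX_ANY_OPERATIONS:
  # ... existing members of the action group stay as they are ...
  - INDICES_ALL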
Verified in quay.io/openshift/origin-logging-elasticsearch5@sha256:957f768629f77220210027cffd252d657830bcf327ed14880d3f596881c267c4
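For anyone re-checking, roughly the verification done here: confirm the Elasticsearch pods run the image above, then repeat the index listing with a bearer token. The namespace and label selector below are assumptions and may differ per deployment:

# Confirm the running image digest, then re-run the index listing as kube:admin
oc get pods -n openshift-logging -l component=elasticsearch -o jsonpath='{.items[*].spec.containers[*].image}'
curl --silent --insecure -H "Authorization: Bearer $( oc whoami -t )" "https://logging-es:9200/_cat/indices?v"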
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758