Description of problem:
The cluster-admin user could not access the _cat endpoints:

curl -vk -X GET -H "Authorization: Bearer $token" "https://${URL}/_cat/indices?v"
curl -vk -X GET -H "Authorization: Bearer $token" "https://${URL}/_cat/aliases?v"
curl -vk -X GET -H "Authorization: Bearer $token" "https://${URL}/_cat/nodes?v"

The _cluster/health endpoint can be retrieved:

curl -vk -X GET -H "Authorization: Bearer $token" "https://${URL}/_cluster/health?pretty"

How reproducible:
always

Version-Release number of selected component (if applicable):
atomic-openshift-3.9.0-0.51.0
logging-elasticsearch/images/v3.9.0-0.51.0.0

Steps to Reproduce:
1. Deploy logging with openshift_logging_es_allow_external=True
2. Grant user1 the cluster-admin role:
   oc adm policy add-cluster-role-to-user cluster-admin user1
3. Get the token and route:
   oc login -u user1
   token=$(oc whoami -t)
   URL=$(oc get route logging-es -o custom-columns=ROUTER:.spec.host | grep -v 'ROUTER')
4. Access the _cat endpoints using the token:
   curl -vk -X GET -H "Authorization: Bearer $token" "https://${URL}/_cat/indices?v"
   curl -vk -X GET -H "Authorization: Bearer $token" "https://${URL}/_cat/aliases?v"
   curl -vk -X GET -H "Authorization: Bearer $token" "https://${URL}/_cat/nodes?v"
5. Access the _cluster endpoint:
   curl -vk -X GET -H "Authorization: Bearer $token" "https://${URL}/_cluster/health?pretty"

Actual results:
Step 4:
< HTTP/1.1 403 Forbidden
< Content-Type: application/json; charset=UTF-8
< Content-Length: 201
< Set-Cookie: b996a9a16827ba0b2128327e4995fdd9=b767e4df32faa957e433959b7a52398e; path=/; HttpOnly; Secure
<
{"error":{"root_cause":[{"type":"security_exception","reason":"no permissions for cluster:monitor/state"}],"type":"security_exception","reason":"no permissions for cluster:monitor/state"},"status":403}

Step 5:
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=UTF-8
< Content-Length: 463
< Set-Cookie: b996a9a16827ba0b2128327e4995fdd9=b767e4df32faa957e433959b7a52398e; path=/; HttpOnly; Secure
< Cache-control: private
<
{
  "cluster_name" : "logging-es",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 8,
  "active_shards" : 8,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

Expected results:
Step 4: the _cat endpoints return data.

Additional info:
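The per-endpoint checks in steps 4 and 5 can be collapsed into a single loop; a minimal sketch, assuming $token and $URL are set as in step 3 (the probe_endpoints helper name is made up for this example):

```shell
#!/bin/sh
# Hypothetical helper: probe each endpoint from the reproduction steps
# and print only the HTTP status code it returns. With $URL unset the
# network call is skipped rather than attempted.
probe_endpoints() {
  for ep in _cat/indices _cat/aliases _cat/nodes _cluster/health; do
    printf '%s -> ' "$ep"
    if [ -n "$URL" ]; then
      # -o /dev/null discards the body; -w '%{http_code}' prints the status
      curl -sk -o /dev/null -w '%{http_code}\n' \
        -H "Authorization: Bearer $token" "https://${URL}/${ep}?v"
    else
      echo "skipped (URL not set)"
    fi
  done
}
probe_endpoints
```

On an affected cluster this would print 403 for the three _cat endpoints and 200 for _cluster/health, matching the actual results above.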
You may use 'oc exec -c elasticsearch $POD -- es_util --query=_cat/indices', which allows you to use the admin certs. If you have cluster permissions, then you have access to the pod, which gives you access to these endpoints. Alternatively, you may simply update the rolesmapping document: rsh into the pod, update the document with the appropriate actions, and then run 'es_seed_acl'.

@Eric, thoughts on whether this is something we missed and should allow or change? Lowering the priority as this is not a blocker.
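A sketch of the first workaround above, assuming the default logging namespace and the component=es pod label (the cat_via_pod helper is hypothetical; adjust names to your deployment):

```shell
#!/bin/sh
# Hypothetical wrapper: run es_util inside the first ES pod, so the query
# goes through the admin certs instead of the external route.
cat_via_pod() {
  query=$1
  # Assumption: logging is deployed in the "logging" namespace and the
  # Elasticsearch pods carry the component=es label.
  pod=$(oc get pods -n logging -l component=es \
        -o jsonpath='{.items[0].metadata.name}')
  oc exec -n logging -c elasticsearch "$pod" -- es_util --query="$query"
}

# Only attempt the call when a cluster login is actually available.
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  cat_via_pod _cat/indices
else
  echo "no cluster login detected; call cat_via_pod manually"
fi
```

Usage on a logged-in host would be, for example, `cat_via_pod _cat/aliases`.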
Is this a regression? If not, it is an RFE.
https://github.com/openshift/origin-aggregated-logging/pull/1270
# curl -k -H "Authorization: Bearer $token" https://172.30.250.118:9200/ -H "x-forwarded-for: 127.0.0.1"

or

# curl -k -H "Authorization: Bearer $token" https://172.30.250.118:9200/_cat/indices -H "x-forwarded-for: 127.0.0.1"
yellow open project.kube-system.50884c2b-8f47-11e8-a173-525400c5b2ed.2018.07.31 R0svyTlISvqMlFKUzfjk9g 5 1 2      0 55.8kb  55.8kb
green  open .kibana.d033e22ae348aeb5660fc2140aec35850c4da997                    nH6UXiHyQvewZRzupdqkzg 1 0 5      0 54.9kb  54.9kb
yellow open .kibana                                                             095dIdIwSnqQhEc5XPg21g 1 1 1      0 3.2kb   3.2kb
green  open .operations.2018.08.01                                              CxdEmTeKSI2tRkgFdaZzcA 1 0 29786  0 44.1mb  44.1mb
green  open .searchguard                                                        yZc8mDnKTfiJW4BalFkHZg 1 0 0      0 67kb    67kb
green  open .operations.2058.09.22                                              JKYv9YDHQhWhtI3nvmIXIQ 1 0 3      0 19.1kb  19.1kb
yellow open project.kube-proxy.d0edea0d-945a-11e8-ad9b-525400c5b2ed.2018.07.31  38lwSa_OQJGpY36vpOXQ7A 5 1 16     0 163kb   163kb
yellow open project.kube-dns.d04f6a8a-945a-11e8-ad9b-525400c5b2ed.2018.07.31    w49FM3HiQLaIMe6PFLh-hA 5 1 8      0 123.3kb 123.3kb
yellow open .operations.2018.07.31                                              G19DbKGcRFeNyxS1fwdTIQ 5 1 180853 0 234.3mb 234.3mb
yellow open project.kube-system.ca5f63a5-945a-11e8-ad9b-525400c5b2ed.2018.07.31 ULT2hAF4S8KKxqJf9KkTtw 5 1 10129  0 12.1mb  12.1mb
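The same check can be scripted by looking up the service IP instead of hard-coding it; a sketch assuming the default logging-es service in the logging namespace and $token from `oc whoami -t` (the cat_via_service name is illustrative):

```shell
#!/bin/sh
# Hypothetical wrapper: query the logging-es service directly with the
# x-forwarded-for header, as in the curl commands above. Must be run
# from inside the cluster network.
cat_via_service() {
  # Assumption: the service is named logging-es in the logging namespace.
  es_ip=$(oc get svc logging-es -n logging -o jsonpath='{.spec.clusterIP}')
  curl -sk -H "Authorization: Bearer $token" \
       -H "x-forwarded-for: 127.0.0.1" \
       "https://${es_ip}:9200/${1:-_cat/indices}"
}

# Only attempt the call when a cluster login is actually available.
if command -v oc >/dev/null 2>&1 && oc whoami >/dev/null 2>&1; then
  cat_via_service _cat/indices
else
  echo "no cluster login detected; run cat_via_service from inside the cluster"
fi
```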
*** Bug 1685792 has been marked as a duplicate of this bug. ***
This is a blocker for the RHV release.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0636