We implemented an instance of elastalert (https://github.com/Yelp/elastalert) that uses a service account to access the Elasticsearch backend. After the upgrade from 3.10 to 3.11, Elastalert refuses to start because it can no longer access the root URL of the Elasticsearch cluster ("/"). The error message is something like:

{"error":{"root_cause":[{"type":"security_exception","reason":"no permissions for [cluster:monitor/main] and User [name=system:serviceaccount:dbms-preprod:elastalert, roles=[gen_kibana_b08432e716e206e07b426a7d344ea27b9bc7f96e, gen_user_b08432e716e206e07b426a7d344ea27b9bc7f96e]]"}],"type":"security_exception","reason":"no permissions for [cluster:monitor/main] and User [name=system:serviceaccount:dbms-preprod:elastalert, roles=[gen_kibana_b08432e716e206e07b426a7d344ea27b9bc7f96e, gen_user_b08432e716e206e07b426a7d344ea27b9bc7f96e]]"}

All other endpoints still work as expected, including searches against the user's project indices. Elastalert queries the "/" endpoint to check the Elasticsearch version, and if that check fails it refuses to start. There is already an OpenShift Origin issue for this: https://github.com/openshift/origin-aggregated-logging/issues/1641

We managed to work around this by adding a static "/" info page to the auth-proxy. But since this behavior looks like a regression introduced in 3.11, a proper fix seems appropriate.
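For context, the startup failure comes from the version probe against "/": Elastalert parses the version number out of the JSON body that the root endpoint returns, and aborts if it cannot. A minimal sketch of that check in Python (the helper name and bodies here are illustrative, not Elastalert's actual code):

```python
def parse_es_version(info_body):
    """Extract the Elasticsearch version tuple from the JSON body
    returned by a GET on the cluster root URL "/"."""
    return tuple(int(p) for p in info_body["version"]["number"].split("."))

# A successful "/" response carries a version block, e.g.:
ok_body = {"name": "logging-es", "version": {"number": "5.6.13"}}
print(parse_es_version(ok_body))  # -> (5, 6, 13)

# With the 3.11 permissions, "/" instead returns a security_exception
# body with no "version" key, so the probe fails and startup aborts:
denied_body = {"error": {"type": "security_exception",
                         "reason": "no permissions for [cluster:monitor/main]"},
               "status": 403}
try:
    parse_es_version(denied_body)
except KeyError:
    print("version check failed; refusing to start")
```

This is also why the static "/" info page works as a workaround: any JSON body containing a plausible version block satisfies the probe.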
(In reply to hgomes from comment #0)
> But as this behavior looks like a regression introduced in 3.11 a proper fix
> seems suitable.

Correction - it was technically not a regression - it was only an "accident" that the root "/" URL was readable in 3.10 and earlier. It was not part of the public API, and the supported OpenShift logging EFK stack does not require permission to view "/". I'm not saying we won't fix it, but be careful about the use of the term "regression"; technically this is an RFE, not a bug.
Are you able to tell me what role this SA, "system:serviceaccount:dbms-preprod:elastalert", had in 3.10? Looking at the 3.10 action groups [1] and our declared permissions [2], there are none which match "cluster:monitor/main" unless the user/SA is in a role that can answer 'oc -n default auth can-i view pods/logs' [3], which would give them admin rights for ES.

The alternative reasoning:
* Maybe this endpoint was not guarded in 2.x
* The user manually adjusted the permissions for it to be open

[1] https://github.com/openshift/origin-aggregated-logging/blob/release-3.10/elasticsearch/sgconfig/sg_action_groups.yml
[2] https://github.com/fabric8io/openshift-elasticsearch-plugin/blob/2.4.4/src/test/resources/io/fabric8/elasticsearch/plugin/user_role_with_shared_kibana_index_with_unique.yml#L13
[3] https://github.com/openshift/origin-aggregated-logging/blob/master/docs/access-control.md
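If the intent were to open only the version probe, one conceivable change on the Search Guard side would be to grant the single "cluster:monitor/main" action to the generated user role. A hypothetical sg_roles-style fragment (the role name is taken from the error message above; the layout is illustrative, modeled on the 3.10 config files, and is not a committed fix):

```yaml
# Hypothetical Search Guard role fragment: grants only the cluster
# permission that a GET on "/" requires, leaving index access as-is.
gen_user_b08432e716e206e07b426a7d344ea27b9bc7f96e:
  cluster:
    - cluster:monitor/main   # allow the "/" version/info probe only
```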
In support of one of my theories: I don't see the failed permission listed in the originally available privilege values [1], which makes me think this endpoint was originally unguarded.

[1] https://www.elastic.co/guide/en/shield/2.2/privileges-list.html#ref-actions-list
Passed verification using openshift3/ose-logging-elasticsearch5:v3.11.128.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:1753