Description of problem:

The error in the Kibana logs:

2017-11-29T14:47:36.881182145Z [2017-11-29 14:47:36,881][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] _all does not exist in cluster metadata

The error in the Kibana UI:

Discover: [security_exception] no permissions for indices:data/read/msearch

Facts:
- The user has been granted the cluster-reader role directly, not through groups
- The user can query the index pattern and it returns all the hits:

# curl -sv -H "X-Proxy-Remote-User: `oc whoami`" -H "Authorization: Bearer `oc whoami -t`" -k https://`oc get svc logging-kibana -o jsonpath='{.spec.clusterIP}'`/elasticsearch/.all/_search?sort=@timestamp:desc | python -mjson.tool

$ head es_all_20171129.txt
{
    "_shards": {
        "failed": 0,
        "successful": 656,
        "total": 656
.....

Version-Release number of selected component (if applicable):

It also seems to affect EFK clusters with the following components:

/root/buildinfo/Dockerfile-openshift3-logging-curator-v3.4.1.44.26-4
/root/buildinfo/Dockerfile-openshift3-logging-elasticsearch-3.4.1-45
/root/buildinfo/Dockerfile-openshift3-logging-fluentd-3.4.1-30
/root/buildinfo/Dockerfile-openshift3-logging-kibana-3.4.1-36
/root/buildinfo/Dockerfile-openshift3-logging-auth-proxy-3.4.0-7

How reproducible:
100% in the customer environment.

Steps to Reproduce:
1. Update to the latest 3.4 logging image

Actual results:
The Kibana UI fails.

Expected results:
The Kibana UI shows the .all index to a cluster-admin/cluster-reader user.

Additional info:
This has been reported in https://bugzilla.redhat.com/show_bug.cgi?id=1499762 for 3.6.
I'm now collecting the list of indices associated with the alias ".all":

QUERY=/_alias/.all?pretty es_util
The number of indices under the .all alias:

$ wc -l 20171130_es_util_aliases.log
3112 20171130_es_util_aliases.log

I've also confirmed that the user used for testing has the cluster-reader role.
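As a side note, the alias members can also be counted directly from the `_alias/.all` JSON response rather than running `wc -l` on the pretty-printed output, which also counts brace and `"aliases"` lines. A minimal sketch in Python; the sample response below is illustrative data in the standard Elasticsearch `GET /_alias/<name>` response shape, not output from the affected cluster:

```python
import json

# Example shape of a GET /_alias/.all response: one top-level key per
# index that carries the alias (sample data, not from this cluster).
response = json.loads("""
{
  "project.logging.6c1f95df.2018.04.08": {"aliases": {".all": {}}},
  "project.myapp.aaaa1111.2018.04.08":   {"aliases": {".all": {}}},
  ".operations.2018.04.08":              {"aliases": {".all": {}}}
}
""")

# Each top-level key is an index name, so the alias members are the keys.
indices = sorted(response)
print(len(indices))
```

Piping the real response from es_util into a script like this would give the exact index count without the line-count approximation.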
Nicolas,

Can you tell me if this cluster was deployed with the ops cluster enabled? Are you possibly seeing the behavior described here: https://bugzilla.redhat.com/show_bug.cgi?id=1519705
(In reply to Jeff Cantrill from comment #3)
> Nicolas,
>
> Can you tell me if this cluster was deployed with the ops cluster enabled?
> Are you possibly seeing behavior as described here:
> https://bugzilla.redhat.com/show_bug.cgi?id=1519705

Jeff, the cluster was deployed with ops disabled. In any case, I don't see the "IndexNotFoundException[no such index]" error message you mention in that bugzilla.
@Jeff, Is this a duplicate of https://bugzilla.redhat.com/show_bug.cgi?id=1499762 ?
They are different OCP versions for which we backported the same fix. We are unlikely to fix this in 3.4, so if it is resolved in later releases please close this issue.
This may be resolved by v3.4.1.44.38 of the elasticsearch image which includes a fix to the openshift elasticsearch plugin
For a non-ops cluster, the issue is fixed: the .all index can be shown in Kibana, there is no error in the Kibana UI, and project logs are shown in the Kibana UI.

For an ops-enabled cluster, however, there are no project logs under the .all index or under the separate project indices; see the attached pictures.

After rsh'ing into the es pods, _all and project.** do not exist in cluster metadata:

# cat /elasticsearch/logging-es/logs/logging-es.log
[2018-04-08 09:13:10,648][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] _all does not exist in cluster metadata
[2018-04-08 09:13:10,648][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] _all does not exist in cluster metadata
[2018-04-08 09:13:10,649][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] _all does not exist in cluster metadata
[2018-04-08 09:13:10,649][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] _all does not exist in cluster metadata
[2018-04-08 09:13:10,760][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] project.logging.6c1f95df-3af7-11e8-bb90-fa163e531066.2018.04.08" does not exist in cluster metadata
[2018-04-08 09:13:10,760][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] project.logging.6c1f95df-3af7-11e8-bb90-fa163e531066.2018.04.08" does not exist in cluster metadata
[2018-04-08 09:13:11,780][WARN ][com.floragunn.searchguard.configuration.PrivilegesEvaluator] project.logging.6c1f95df-3af7-11e8-bb90-fa163e531066.2018.04.08" does not exist in cluster metadata
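The missing-project-logs symptom on the ops-enabled cluster could be double-checked by inspecting which indices actually sit behind the .all alias. A hypothetical sketch, assuming the `GET /_alias/.all` response has been saved from the ops cluster; the sample data below is illustrative, modeling a cluster where only operations indices made it into the alias:

```python
import json

# Sample GET /_alias/.all response from an ops-enabled cluster where only
# the .operations.* indices are alias members (illustrative data).
alias_members = json.loads("""
{
  ".operations.2018.04.08": {"aliases": {".all": {}}},
  ".operations.2018.04.07": {"aliases": {".all": {}}}
}
""")

# If no project.* index is a member, project logs cannot show up under .all.
project_indices = [name for name in alias_members if name.startswith("project.")]
if not project_indices:
    print("no project.* indices under .all -- matches the reported symptom")
else:
    print("project.* indices present:", project_indices)
```

Running the same check against the non-ops cluster's response should list project.* members, which would localize the problem to alias maintenance on the ops-enabled deployment.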
Created attachment 1418836 [details] Enabled ops cluster, kibana UI, no log entries under .all index
Created attachment 1418837 [details] Enabled ops cluster, kibana-ops UI, there are log entries under .all index
# openshift version
openshift v3.4.1.44.52
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

Images:
logging-deployer/images/v3.4.1.44.52-2
logging-curator/images/v3.4.1.44.52-2
logging-fluentd/images/v3.4.1.44.38-11
logging-elasticsearch/images/v3.4.1.44.38-12
logging-kibana/images/v3.4.1.44.38-10
logging-auth-proxy/images/v3.4.1.44.38-10
Created attachment 1418838 [details] logging dump output