Created attachment 1336313 [details]
kibana UI, [exception] The index returned an empty result

Description of problem:
On both the ops and non-ops clusters there is no .all index on the kibana UI; see the attached pictures. The default kibana pages all display .operations.* results. For the kibana-ops UI this is fine, but the kibana UI throws the error "Discover: [exception] The index returned an empty result. You can use the Time Picker to change the time filter or select a higher time interval".

Version-Release number of selected component (if applicable):
# rpm -qa | grep openshift-ansible
openshift-ansible-3.6.173.0.45-1.git.0.dc70c99.el7.noarch
openshift-ansible-roles-3.6.173.0.45-1.git.0.dc70c99.el7.noarch
openshift-ansible-docs-3.6.173.0.45-1.git.0.dc70c99.el7.noarch
openshift-ansible-lookup-plugins-3.6.173.0.45-1.git.0.dc70c99.el7.noarch
openshift-ansible-filter-plugins-3.6.173.0.45-1.git.0.dc70c99.el7.noarch
openshift-ansible-playbooks-3.6.173.0.45-1.git.0.dc70c99.el7.noarch
openshift-ansible-callback-plugins-3.6.173.0.45-1.git.0.dc70c99.el7.noarch

Images:
logging-auth-proxy:v3.6.173.0.47-1
logging-elasticsearch:v3.6.173.0.47-1
logging-curator:v3.6.173.0.47-1
logging-kibana:v3.6.173.0.21-20
logging-fluentd:v3.6.173.0.45-1

How reproducible:
Always

Steps to Reproduce:
1. Deploy logging and log in to the kibana UI.
2.
3.

Actual results:
The .all index is missing on the kibana UI.

Expected results:
The .all index should be present on the kibana UI.

Additional info:
Created attachment 1336314 [details] kibana ops UI
Please run the following commands, where $espod is the name of the es pod and $esopspod is the name of the es-ops pod:

oc exec $espod -- es_util --query /_cat/indices
oc exec $espod -- es_util --query /_cat/aliases
oc exec $esopspod -- es_util --query /_cat/indices
oc exec $esopspod -- es_util --query /_cat/aliases
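To capture those pod names, something like this should work (a sketch; the component=es / component=es-ops labels and the logging namespace are the defaults used by the logging deployer, so adjust if your deployment differs):

espod=$(oc get pods -n logging -l component=es -o jsonpath='{.items[0].metadata.name}')
esopspod=$(oc get pods -n logging -l component=es-ops -o jsonpath='{.items[0].metadata.name}')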
This should be included in the latest images along with:

openshift-elasticsearch-plugin-2.4.4.15__redhat_1-1.el7.noarch.rpm

which was added as part of:

https://github.com/openshift/origin-aggregated-logging/commit/1a1722616bacd4f5ec25f3a895223202a0225eba

and pulled in:

http://download-node-02.eng.bos.redhat.com/brewroot/packages/logging-elasticsearch-docker/v3.6.173.0.47/1/data/logs/x86_64-build.log

Try running as @rich suggested something like:

oc exec -c elasticsearch $podname -- es_util -query=.all

which should return the list of indices that are aliased by '.all'.
Created attachment 1336606 [details] index info
(In reply to Jeff Cantrill from comment #3)
> This should be included in the latest images along with:
>
> openshift-elasticsearch-plugin-2.4.4.15__redhat_1-1.el7.noarch.rpm
>
> which was added as part of:
>
> https://github.com/openshift/origin-aggregated-logging/commit/1a1722616bacd4f5ec25f3a895223202a0225eba
>
> and pulled in:
>
> http://download-node-02.eng.bos.redhat.com/brewroot/packages/logging-elasticsearch-docker/v3.6.173.0.47/1/data/logs/x86_64-build.log
>
> Try running as @rich suggested something like:
>
> oc exec -c elasticsearch $podname -- es_util -query=.all

This should be --query=.all

and try

oc exec -c elasticsearch $podname -- es_util --query=_cat/indices

and

oc exec -c elasticsearch $podname -- es_util --query=_cat/aliases

>
> which should return the list of indices that are aliased by '.all'
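For completeness, the indices behind the alias can also be read in a single call through the _alias endpoint (a sketch; es_util just forwards the query path to the ES REST API, and python -m json.tool is only for pretty-printing):

oc exec -c elasticsearch $podname -- es_util --query=_alias/.all | python -m json.tool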
Based upon c#4 this proves the alias is being created. Maybe it's a timing issue between when the alias is created and when the user accesses the cluster via Kibana; a polling sketch to check for that follows the list below.

1. Have you tried logging out and/or refreshing the browser to see if it appears?

2. Only 'ops' users should see the '.all' alias; there is no such feature for 'non-ops' users. Once https://github.com/fabric8io/openshift-elasticsearch-plugin/pull/108 is merged into origin-aggregated-logging there will be no concept of 'all_HASH' for a user's aliases.

3. The error "Discover: [exception] The index returned an empty result. You can use the Time Picker to change the time filter or select a higher time interval" tells me you have no data collected, which also means we cannot create an alias to indexes that don't exist. That leads me to believe that at the time you visited Kibana there may have been no indices for which an alias could be created.
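If it is a timing issue, one way to check (a sketch, assuming $espod holds the es pod name as above) is to poll for the alias for a few minutes instead of checking once:

for i in $(seq 1 10); do
  # _cat/aliases lists one alias per line, alias name in the first column
  oc exec -c elasticsearch $espod -- es_util --query=_cat/aliases | grep -q '^\.all' && { echo ".all alias present"; break; }
  sleep 30
done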
(In reply to Rich Megginson from comment #6)
> > oc exec -c elasticsearch $podname -- es_util -query=.all
>
> This should be --query=.all
>
> and try
>
> oc exec -c elasticsearch $podname -- es_util --query=_cat/indices
>
> and
>
> oc exec -c elasticsearch $podname -- es_util --query=_cat/aliases
>
> > which should return the list of indices that are aliased by '.all'

# oc get po
NAME                                          READY     STATUS    RESTARTS   AGE
logging-curator-1-33bvm                       1/1       Running   0          20h
logging-curator-ops-1-qv3z2                   1/1       Running   0          20h
logging-es-data-master-kmlfqzxk-1-66x0f       1/1       Running   0          20h
logging-es-ops-data-master-8l1qiwg2-1-gtwf1   1/1       Running   0          20h
logging-fluentd-g5zt5                         1/1       Running   0          20h
logging-fluentd-j08sr                         1/1       Running   0          20h
logging-kibana-1-1qk40                        2/2       Running   0          20h
logging-kibana-ops-1-5009p                    2/2       Running   3          20h

# oc exec logging-es-data-master-kmlfqzxk-1-66x0f -- es_util --query=.all | python -m json.tool
{
    "error": {
        "index": ".all",
        "reason": "no such index",
        "resource.id": ".all",
        "resource.type": "index_or_alias",
        "root_cause": [
            {
                "index": ".all",
                "reason": "no such index",
                "resource.id": ".all",
                "resource.type": "index_or_alias",
                "type": "index_not_found_exception"
            }
        ],
        "type": "index_not_found_exception"
    },
    "status": 404
}

# oc exec logging-es-data-master-kmlfqzxk-1-66x0f -- es_util --query=_cat/indices
green open .searchguard.logging-es-data-master-kmlfqzxk                         1 0     5 0    30kb    30kb
green open .kibana                                                              1 0     1 0   3.1kb   3.1kb
green open project.logging.82c3259d-ac87-11e7-8b60-fa163e646efa.2017.10.10      1 0 29038 0  20.8mb  20.8mb
green open project.install-test.1179b970-ac88-11e7-8b60-fa163e646efa.2017.10.11 1 0   796 0 590.5kb 590.5kb
green open .kibana.ef0b7ff169fdc9202e567ce53aa5e17320cb2d7d                     1 0     4 0    44kb    44kb
green open project.logging.82c3259d-ac87-11e7-8b60-fa163e646efa.2017.10.11      1 0  1699 0   1.3mb   1.3mb
green open project.install-test.1179b970-ac88-11e7-8b60-fa163e646efa.2017.10.10 1 0 14315 0   7.9mb   7.9mb

# oc exec logging-es-data-master-kmlfqzxk-1-66x0f -- es_util --query=_cat/aliases
no result

# oc exec logging-es-ops-data-master-8l1qiwg2-1-gtwf1 -- es_util --query=_cat/indices
green open .kibana.ef0b7ff169fdc9202e567ce53aa5e17320cb2d7d 1 0       2 0  25.9kb  25.9kb
green open .searchguard.logging-es-ops-data-master-8l1qiwg2 1 0       5 0    30kb    30kb
green open .operations.2017.10.11                           1 0  232421 0 111.1mb 111.1mb
green open .kibana                                          1 0       1 0   3.1kb   3.1kb
green open .operations.2017.10.10                           1 0 3990303 0   1.8gb   1.8gb

# oc exec logging-es-ops-data-master-8l1qiwg2-1-gtwf1 -- es_util --query=_cat/aliases
.all .operations.2017.10.10 - - -

# oc exec logging-es-ops-data-master-8l1qiwg2-1-gtwf1 -- es_util --query=.all | python -m json.tool
see attached file
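Note the non-ops cluster returns no aliases at all, which matches the 404 for .all. Purely as a diagnostic (not a fix; the openshift-elasticsearch-plugin is supposed to maintain this alias), an alias could be created by hand via the _aliases API, assuming es_util forwards unrecognized arguments through to curl; if it does not, the same request can be made with curl and the admin certs directly:

# diagnostic only: manually alias all project.* indices as .all on the non-ops cluster
oc exec -c elasticsearch logging-es-data-master-kmlfqzxk-1-66x0f -- \
  es_util --query=_aliases -XPOST -d '{"actions":[{"add":{"index":"project.*","alias":".all"}}]}'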
Created attachment 1336949 [details] index info, see this one
(In reply to Jeff Cantrill from comment #7)
> Based upon c#4 this proves the alias is being created. Maybe it's a timing
> issue between when the alias is created and when the user accesses the
> cluster via Kibana.
>
> 1. Have you tried logging out and/or refreshing the browser to see if it
> appears?

Logged out and logged back in; it's still the same error.

> 2. Only 'ops' users should see the '.all' alias; there is no such feature
> for 'non-ops' users. Once
> https://github.com/fabric8io/openshift-elasticsearch-plugin/pull/108 is
> merged into origin-aggregated-logging there will be no concept of 'all_HASH'
> for a user's aliases.

Per https://bugzilla.redhat.com/show_bug.cgi?id=1473153#c3 the .all index is seen for both ops and non-ops users; are we going to change that behaviour?

> 3. The error "Discover: [exception] The index returned an empty result. You
> can use the Time Picker to change the time filter or select a higher time
> interval" tells me you have no data collected, which also means we cannot
> create an alias to indexes that don't exist. That leads me to believe that
> at the time you visited Kibana there may have been no indices for which an
> alias could be created.

Selected "Last 7 days"; it is still the same issue, see the attached picture.
Created attachment 1336950 [details] selected "Last 7 days" UI
Non-ops users should never see the '.all' index, as it references operations indices that are unavailable to non-ops users.

Can you please attach additional information from the logging stack [1]? Logs from the ES cluster would be useful.

[1] https://github.com/openshift/origin-aggregated-logging/blob/master/hack/logging-dump.sh
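A minimal invocation would be something like the following (a sketch, assuming a checkout of origin-aggregated-logging and an oc session with cluster-admin privileges; check the script's usage text for options and for where it writes its output):

# from the root of an origin-aggregated-logging checkout
./hack/logging-dump.sh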
Created attachment 1338015 [details] logging environment dump
The .all index is shown on the kibana UI and the kibana-ops UI, but with the ops cluster enabled, logging in to the kibana UI shows no result under the .all index. I think we can close this defect and open another one to track that.

# openshift version
openshift v3.6.173.0.65
kubernetes v1.6.1+5115d708d7
etcd 3.2.1

logging-kibana/images/v3.6.173.0.65-1
logging-elasticsearch/images/v3.6.173.0.65-1
logging-fluentd/images/v3.6.173.0.65-1
logging-auth-proxy/images/v3.6.173.0.65-1
logging-curator/images/v3.6.173.0.65-1
Created attachment 1348791 [details]
enabled ops cluster, kibana UI, there is no result under .all index
Used v3.6.173.0.78-1 logging images and covered both a non-ops cluster and an ops-enabled cluster; the .all index is shown on the kibana and kibana-ops UIs now, see the attached pictures. Please set this defect to ON_QA.
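For the record, the checks from comment #8 can be re-run to confirm the alias resolves (a sketch; substitute the es pod name from your deployment for $espod):

oc exec -c elasticsearch $espod -- es_util --query=_cat/aliases
oc exec -c elasticsearch $espod -- es_util --query=.all | python -m json.tool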
Created attachment 1358068 [details]
kibana UI, the .all index is present
Created attachment 1358069 [details]
kibana-ops UI, the .all index is present
Set it to VERIFIED based on Comment 16
Hello, I have a customer with a very similar issue, BUT the timeframe of the missing .all index is two weeks. Can you confirm whether it is the same issue? Three of us confirmed:

ISSUE: After upgrading to OCP 3.6, es & kibana show logs fine, but it seems that the records from 2 weeks ago are inaccessible.

- We have kibana && kibana-ops. In kibana, we had an index ".all", and that index is gone. We created index "*", and can access that index.
There was a security fix that caused an issue related to the missing .all alias. Given you have not seen messages in two weeks, those messages could have been removed by curation; I don't believe this to be the same issue. Maybe this [1] is your issue? Can you please provide more information in that one, or open a new issue.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1494612
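If curation is the suspect, the retention window can be checked in the curator configuration (a sketch, assuming the default logging namespace and the logging-curator configmap name used by the 3.6 installer; look for the delete/days settings under its config.yaml key):

oc get configmap logging-curator -n logging -o yaml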
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1106