Bug 1499762 - .all index is missing on kibana UI
Summary: .all index is missing on kibana UI
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.6.1
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 3.6.z
Assignee: Jeff Cantrill
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2017-10-09 10:56 UTC by Junqi Zhao
Modified: 2021-06-10 13:13 UTC (History)
16 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-04-12 05:59:07 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
- kibana UI, [exception] The index returned an empty result (80.84 KB, image/png), 2017-10-09 10:56 UTC, Junqi Zhao
- kibana ops UI (292.94 KB, image/png), 2017-10-09 10:57 UTC, Junqi Zhao
- index info (36.37 KB, text/plain), 2017-10-10 04:34 UTC, Junqi Zhao
- index info, see this one (70.02 KB, text/plain), 2017-10-11 01:23 UTC, Junqi Zhao
- selected "Last 7 days" UI (82.23 KB, image/png), 2017-10-11 01:32 UTC, Junqi Zhao
- logging environment dump (56.10 KB, application/x-gzip), 2017-10-13 01:57 UTC, Junqi Zhao
- enabled ops cluster, kibana UI, there is no result under .all index (111.45 KB, image/png), 2017-11-07 00:36 UTC, Junqi Zhao
- kibana UI, there is .all index (227.89 KB, image/png), 2017-11-23 08:13 UTC, Junqi Zhao
- kibana-ops UI, there is .all index (233.65 KB, image/png), 2017-11-23 08:14 UTC, Junqi Zhao


Links
- Red Hat Knowledge Base (Solution) 3352681, Last Updated 2018-02-14 02:34:48 UTC
- Red Hat Product Errata RHBA-2018:1106, Last Updated 2018-04-12 05:59:32 UTC

Description Junqi Zhao 2017-10-09 10:56:26 UTC
Created attachment 1336313 [details]
kibana UI, [exception] The index returned an empty result

Description of problem:
In both ops and non-ops clusters, the .all index is missing on the kibana UI; see the attached pictures.
The default kibana pages all display .operations.* results. For the kibana-ops UI this is fine,
but the kibana UI throws the error:
"Discover: [exception] The index returned an empty result. You can use the Time Picker to change the time filter or select a higher time interval"

Version-Release number of selected component (if applicable):
# rpm -qa | grep openshift-ansible
openshift-ansible-3.6.173.0.45-1.git.0.dc70c99.el7.noarch
openshift-ansible-roles-3.6.173.0.45-1.git.0.dc70c99.el7.noarch
openshift-ansible-docs-3.6.173.0.45-1.git.0.dc70c99.el7.noarch
openshift-ansible-lookup-plugins-3.6.173.0.45-1.git.0.dc70c99.el7.noarch
openshift-ansible-filter-plugins-3.6.173.0.45-1.git.0.dc70c99.el7.noarch
openshift-ansible-playbooks-3.6.173.0.45-1.git.0.dc70c99.el7.noarch
openshift-ansible-callback-plugins-3.6.173.0.45-1.git.0.dc70c99.el7.noarch

Images
logging-auth-proxy:v3.6.173.0.47-1
logging-elasticsearch:v3.6.173.0.47-1
logging-curator:v3.6.173.0.47-1
logging-kibana:v3.6.173.0.21-20
logging-fluentd:v3.6.173.0.45-1

How reproducible:
Always

Steps to Reproduce:
1. Deploy logging and login kibana UI.
2.
3.

Actual results:
.all index is missing on kibana UI

Expected results:
.all index should be on kibana UI

Additional info:

Comment 1 Junqi Zhao 2017-10-09 10:57:38 UTC
Created attachment 1336314 [details]
kibana ops UI

Comment 2 Rich Megginson 2017-10-09 13:17:27 UTC
please run the following commands, where $espod is the name of the es pod, and $esopspod is the name of the es-ops pod:

oc exec $espod -- es_util --query /_cat/indices
oc exec $espod -- es_util --query /_cat/aliases
oc exec $esopspod -- es_util --query /_cat/indices
oc exec $esopspod -- es_util --query /_cat/aliases
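For convenience, the four diagnostic queries above can be wrapped in one helper (a sketch; the pod names passed in must be your actual es and es-ops pod names):

```shell
# dump_es_diagnostics: run the indices and aliases queries against both
# the es pod and the es-ops pod, labeling each block of output.
dump_es_diagnostics() {
  espod=$1
  esopspod=$2
  for pod in "$espod" "$esopspod"; do
    for endpoint in _cat/indices _cat/aliases; do
      echo "== $pod $endpoint =="
      oc exec "$pod" -- es_util --query="$endpoint"
    done
  done
}
```

Example usage: `dump_es_diagnostics logging-es-data-master-xxxxx logging-es-ops-data-master-xxxxx`.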

Comment 3 Jeff Cantrill 2017-10-09 21:34:05 UTC
This should be included in the latest images along with:

openshift-elasticsearch-plugin-2.4.4.15__redhat_1-1.el7.noarch.rpm

which was added as part of:

https://github.com/openshift/origin-aggregated-logging/commit/1a1722616bacd4f5ec25f3a895223202a0225eba

and pulled in:

http://download-node-02.eng.bos.redhat.com/brewroot/packages/logging-elasticsearch-docker/v3.6.173.0.47/1/data/logs/x86_64-build.log

Try running as @rich suggested something like:

oc exec -c elasticsearch $podname -- es_util -query=.all

which should return the list of indices that are aliased by '.all'

Comment 5 Junqi Zhao 2017-10-10 04:34:29 UTC
Created attachment 1336606 [details]
index info

Comment 6 Rich Megginson 2017-10-10 13:45:19 UTC
(In reply to Jeff Cantrill from comment #3)
> This should be included in the latest images along with:
> 
> openshift-elasticsearch-plugin-2.4.4.15__redhat_1-1.el7.noarch.rpm
> 
> which was added as part of:
> 
> https://github.com/openshift/origin-aggregated-logging/commit/
> 1a1722616bacd4f5ec25f3a895223202a0225eba
> 
> and pulled in:
> 
> http://download-node-02.eng.bos.redhat.com/brewroot/packages/logging-
> elasticsearch-docker/v3.6.173.0.47/1/data/logs/x86_64-build.log
> 
> Try running as @rich suggested something like:
> 
> oc exec -c elasticsearch $podname -- es_util -query=.all

This should be --query=.all

and try

oc exec -c elasticsearch $podname -- es_util --query=_cat/indices

and

oc exec -c elasticsearch $podname -- es_util --query=_cat/aliases

> 
> which should return the list of indices that are aliased by '.all'

Comment 7 Jeff Cantrill 2017-10-10 17:03:38 UTC
Based upon c#4 this proves the alias is being created.  Maybe it's a timing issue between when the alias is created and when the user accesses the cluster via Kibana.

1. Have you tried logging out and/or refreshing the browser to see if it appears?
2. Only 'ops' users should see the '.all' alias; there is no such feature for 'non-ops' users.  Once https://github.com/fabric8io/openshift-elasticsearch-plugin/pull/108 is merged into origin-aggregated-logging there will be no concept of 'all_HASH' for a user's aliases.
3. "Discover: [exception] The index returned an empty result. You can use the Time Picker to change the time filter or select a higher time interval" tells me you have no data collected, which also means we cannot create an alias to indexes that don't exist. That leads me to believe that at the time you visited Kibana there may have been no indices for which an alias could be created.
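The openshift-elasticsearch-plugin creates the .all alias automatically; purely to illustrate the underlying Elasticsearch _aliases API it would use, here is a hedged sketch (it assumes es_util passes extra arguments such as -XPOST and -d through to curl, and the pod name and index are placeholders; DRY_RUN=1 prints the command instead of executing it):

```shell
# add_to_all_alias: sketch of adding one existing index to the .all alias
# via the Elasticsearch _aliases API. Pod name and index are hypothetical
# arguments; set DRY_RUN=1 to print the command rather than run it.
add_to_all_alias() {
  pod=$1
  index=$2
  body="{\"actions\":[{\"add\":{\"index\":\"$index\",\"alias\":\".all\"}}]}"
  cmd="oc exec -c elasticsearch $pod -- es_util --query=_aliases -XPOST -d '$body'"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"
  else
    eval "$cmd"
  fi
}
```

Note the API refuses to alias an index that does not exist, which is consistent with point 3 above: no data means no indices, and therefore no alias.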

Comment 8 Junqi Zhao 2017-10-11 01:22:52 UTC
(In reply to Rich Megginson from comment #6)
> > oc exec -c elasticsearch $podname -- es_util -query=.all
> 
> This should be --query=.all
> 
> and try
> 
> oc exec -c elasticsearch $podname -- es_util --query=_cat/indices
> 
> and
> 
> oc exec -c elasticsearch $podname -- es_util --query=_cat/aliases
> 
> > 
> > which should return the list of indices that are aliased by '.all'

# oc get po
NAME                                          READY     STATUS    RESTARTS   AGE
logging-curator-1-33bvm                       1/1       Running   0          20h
logging-curator-ops-1-qv3z2                   1/1       Running   0          20h
logging-es-data-master-kmlfqzxk-1-66x0f       1/1       Running   0          20h
logging-es-ops-data-master-8l1qiwg2-1-gtwf1   1/1       Running   0          20h
logging-fluentd-g5zt5                         1/1       Running   0          20h
logging-fluentd-j08sr                         1/1       Running   0          20h
logging-kibana-1-1qk40                        2/2       Running   0          20h
logging-kibana-ops-1-5009p                    2/2       Running   3          20h

# oc exec logging-es-data-master-kmlfqzxk-1-66x0f -- es_util --query=.all | python -m json.tool
{
    "error": {
        "index": ".all",
        "reason": "no such index",
        "resource.id": ".all",
        "resource.type": "index_or_alias",
        "root_cause": [
            {
                "index": ".all",
                "reason": "no such index",
                "resource.id": ".all",
                "resource.type": "index_or_alias",
                "type": "index_not_found_exception"
            }
        ],
        "type": "index_not_found_exception"
    },
    "status": 404
}

# oc exec logging-es-data-master-kmlfqzxk-1-66x0f -- es_util --query=_cat/indices
green open .searchguard.logging-es-data-master-kmlfqzxk                         1 0     5 0    30kb    30kb 
green open .kibana                                                              1 0     1 0   3.1kb   3.1kb 
green open project.logging.82c3259d-ac87-11e7-8b60-fa163e646efa.2017.10.10      1 0 29038 0  20.8mb  20.8mb 
green open project.install-test.1179b970-ac88-11e7-8b60-fa163e646efa.2017.10.11 1 0   796 0 590.5kb 590.5kb 
green open .kibana.ef0b7ff169fdc9202e567ce53aa5e17320cb2d7d                     1 0     4 0    44kb    44kb 
green open project.logging.82c3259d-ac87-11e7-8b60-fa163e646efa.2017.10.11      1 0  1699 0   1.3mb   1.3mb 
green open project.install-test.1179b970-ac88-11e7-8b60-fa163e646efa.2017.10.10 1 0 14315 0   7.9mb   7.9mb 


# oc exec logging-es-data-master-kmlfqzxk-1-66x0f -- es_util --query=_cat/aliases
no result


# oc exec logging-es-ops-data-master-8l1qiwg2-1-gtwf1 -- es_util --query=_cat/indices
green open .kibana.ef0b7ff169fdc9202e567ce53aa5e17320cb2d7d 1 0       2 0  25.9kb  25.9kb 
green open .searchguard.logging-es-ops-data-master-8l1qiwg2 1 0       5 0    30kb    30kb 
green open .operations.2017.10.11                           1 0  232421 0 111.1mb 111.1mb 
green open .kibana                                          1 0       1 0   3.1kb   3.1kb 
green open .operations.2017.10.10                           1 0 3990303 0   1.8gb   1.8gb 


# oc exec logging-es-ops-data-master-8l1qiwg2-1-gtwf1 -- es_util --query=_cat/aliases
.all .operations.2017.10.10 - - - 
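For reference, `_cat/aliases` output like the line above (alias name first, backing index second, then filter/routing columns) can be reduced to just the indices behind .all with a small filter (a sketch reading the output on stdin):

```shell
# indices_behind_all: from `_cat/aliases` output on stdin, print the
# index names (column 2) for rows whose alias (column 1) is ".all".
indices_behind_all() {
  awk '$1 == ".all" { print $2 }'
}
```

Example: `oc exec $esopspod -- es_util --query=_cat/aliases | indices_behind_all`.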

# oc exec logging-es-ops-data-master-8l1qiwg2-1-gtwf1 -- es_util --query=.all | python -m json.tool
see attached file

Comment 9 Junqi Zhao 2017-10-11 01:23:34 UTC
Created attachment 1336949 [details]
index info, see this one

Comment 10 Junqi Zhao 2017-10-11 01:31:26 UTC
(In reply to Jeff Cantrill from comment #7)
> Based upon c#4 this proves the alias is being created.  Maybe its a timing
> issue between when the alias is created and when the user access the cluster
> via Kibana.  
> 
> 1. Have you tried logging out and or refreshing the browser to see if
> appears?  
Logged out and logged in again; it's still the same error.

> 2. Only 'ops' users should see the '.all' alias; there is no such feature
> for 'non-ops' users.  Once
> https://github.com/fabric8io/openshift-elasticsearch-plugin/pull/108 is
> merged into origin-aggregated-logging there will be no concept of 'all_HASH'
> for a user's aliases.

from https://bugzilla.redhat.com/show_bug.cgi?id=1473153#c3
.all index is seen for ops and non-ops user, are we going to change the behaviour?

> 3. Discover: [exception] The index returned an empty result. You can use the
> Time Picker to change the time filter or select a higher time interval" 
> this tells me you have no data collected which also means we can not create
> an alias to indexes that don't exist which leads me to believe at the time
> you visited Kibana there may be no indices for which an alias can be created.

Selected "Last 7 days"; it is still the same issue, see the attached picture.

Comment 11 Junqi Zhao 2017-10-11 01:32:04 UTC
Created attachment 1336950 [details]
selected "Last 7 days" UI

Comment 12 Jeff Cantrill 2017-10-12 13:43:21 UTC
Non-ops users should never see the '.all' index, as it references operations indices that are unavailable to non-ops users.  Can you please attach additional information from the logging stack [1]?  Logs from the ES cluster would be useful.

[1] https://github.com/openshift/origin-aggregated-logging/blob/master/hack/logging-dump.sh

Comment 13 Junqi Zhao 2017-10-13 01:57:35 UTC
Created attachment 1338015 [details]
logging environment dump

Comment 14 Junqi Zhao 2017-11-07 00:34:02 UTC
.all index is shown on kibana UI and kibana-ops UI,
but with the ops cluster enabled, after logging in to the kibana UI there is no result under the .all index. I think we can close this defect and open another one to track it.

# openshift version
openshift v3.6.173.0.65
kubernetes v1.6.1+5115d708d7
etcd 3.2.1


logging-kibana/images/v3.6.173.0.65-1
logging-elasticsearch/images/v3.6.173.0.65-1
logging-fluentd/images/v3.6.173.0.65-1
logging-auth-proxy/images/v3.6.173.0.65-1
logging-curator/images/v3.6.173.0.65-1

Comment 15 Junqi Zhao 2017-11-07 00:36:22 UTC
Created attachment 1348791 [details]
enabled ops cluster, kibana UI, there is not result under .all index

Comment 16 Junqi Zhao 2017-11-23 08:12:45 UTC
Used v3.6.173.0.78-1 logging images, covering both the non-ops cluster and the ops-enabled cluster; the .all index is now shown on the kibana and kibana-ops UIs, see the attached pictures.

Please set this defect to ON_QA

Comment 17 Junqi Zhao 2017-11-23 08:13:29 UTC
Created attachment 1358068 [details]
kibana UI, there is .all index

Comment 18 Junqi Zhao 2017-11-23 08:14:05 UTC
Created attachment 1358069 [details]
kibana-ops UI, there is .all index

Comment 19 Junqi Zhao 2017-11-28 00:53:44 UTC
Set it to VERIFIED based on Comment 16

Comment 20 Vítor Corrêa 2017-12-06 13:18:21 UTC
Hello, I have a customer with a very similar issue. BUT the timeframe for the missing
.all index is two weeks. Can you confirm whether it is the same issue? Three of us confirmed:

ISSUE: After upgrading to OCP 3.6, ES & kibana show logs fine, but it seems that records from 2 weeks ago are inaccessible.
- We have kibana && kibana-ops. In kibana we had an index ".all", and that index is gone. We created index "*", and can access that index.

Comment 21 Jeff Cantrill 2017-12-06 15:50:16 UTC
There was a security fix that caused an issue related to the missing .all alias.  Given you have not seen messages in two weeks, they may have been removed by curator.  I don't believe this to be the same issue.  Maybe this [1] is your issue?  Can you please provide more information there, or open a new issue.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1494612

Comment 38 errata-xmlrpc 2018-04-12 05:59:07 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1106

