Bug 1839961 - Sometimes the Kibana console couldn't display all the indices when creating index patterns
Summary: Sometimes the Kibana console couldn't display all the indices when creating index patterns
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.5.0
Assignee: Lukas Vlcek
QA Contact: Qiaoling Tang
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-05-26 06:25 UTC by Qiaoling Tang
Modified: 2020-07-13 17:41 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-13 17:41:29 UTC
Target Upstream Version:
Embargoed:


Attachments
Screenshot of Kibana (275.13 KB, image/png) - 2020-05-26 06:25 UTC, Qiaoling Tang
ES logs and mappings (20.28 KB, application/gzip) - 2020-05-27 03:21 UTC, Qiaoling Tang


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:2409 0 None None None 2020-07-13 17:41:45 UTC

Description Qiaoling Tang 2020-05-26 06:25:42 UTC
Created attachment 1692135 [details]
Screenshot of Kibana

Description of problem:
Sometimes the Kibana console couldn't display all the indices when creating index patterns.

In ES, there are app* and infra* indices; however, when logging in to the Kibana console as a cluster-admin user, only the infra* indices are shown:

$ oc exec elasticsearch-cdm-nq7v02wg-1-965b6fcc8-sz52c -- indices
Defaulting container name to elasticsearch.
Use 'oc describe pod/elasticsearch-cdm-nq7v02wg-1-965b6fcc8-sz52c -n openshift-logging' to see all of the containers in this pod.
Tue May 26 06:16:19 UTC 2020
health status index                         uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   audit-000001                  oH0tBnK6RuiXCBdI2kbPoQ   1   0          0            0          0              0
green  open   infra-000002                  RxebAOdGSdGARWmreZ4vEg   1   0     121602            0         78             78
green  open   .security                     bp2j2eP3T3mBSFw-2Vn_jw   1   0          5            0          0              0
green  open   app-000001                    IzWBUJLGRja_cJkeFCWMng   1   0       1640            0          0              0
green  open   .kibana_1                     A7PiJhZLSO-7wL_vLrrTJg   1   0          0            0          0              0
green  open   infra-000001                  CfGu47pZRR2cgvrPJaon8A   1   0     134683            0         88             88
green  open   infra-000003                  xVrhn-HKRhm34jXLHVmh8A   1   0      17584            0         11             11
green  open   .kibana_-1595131456_testuser0 4TEdEajKR8G5Synnoz14lA   1   0          1            0          0              0
green  open   .kibana_-1595131455_testuser1 NwUkEpVDQDSIKE9qiEQ3fg   1   0          1            0          0              0
A screenshot of the Kibana console is attached.


ES container log:
[2020-05-26T06:08:10,555][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-cdm-nq7v02wg-1] [app-000001][0], node[TJQxpd3URf6JcDw1wbgxzg], [P], s[STARTED], a[id=HU7IegoMR3OUP_2cZjZ_vQ]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[*], indicesOptions=IndicesOptions[ignore_unavailable=true, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_aliases_to_multiple_indices=true, forbid_closed_indices=true, ignore_aliases=false, ignore_throttled=true], types=[], routing='null', preference='null', requestCache=false, scroll=null, maxConcurrentShardRequests=5, batchedReduceSize=512, preFilterShardSize=128, allowPartialSearchResults=true, localClusterAlias=null, getOrCreateAbsoluteStartMillis=-1, source={"size":0,"query":{"terms":{"_index":[".security","infra-000001","infra-000002","audit-000001","app-000001",".kibana_-1595131456_testuser0",".kibana_-1595131455_testuser1",".kibana_1"],"boost":1.0}},"aggregations":{"indices":{"terms":{"field":"_index","size":200,"min_doc_count":1,"shard_min_doc_count":0,"show_term_doc_count_error":false,"order":[{"_count":"desc"},{"_key":"asc"}]}}}}}] lastShard [true]
org.elasticsearch.transport.RemoteTransportException: [elasticsearch-cdm-nq7v02wg-1][10.129.2.127:9300][indices:data/read/search[phase/query]]
Caused by: org.elasticsearch.ElasticsearchException: java.util.concurrent.ExecutionException: ScriptException[runtime error]; nested: IllegalArgumentException[Fielddata is disabled on text fields by default. Set fielddata=true on [kubernetes.namespace_name] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.];
	at org.elasticsearch.ExceptionsHelper.convertToElastic(ExceptionsHelper.java:64) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.cache.bitset.BitsetFilterCache$QueryWrapperBitSetProducer.getBitSet(BitsetFilterCache.java:194) ~[elasticsearch-6.8.1.jar:6.8.1]
	at com.amazon.opendistroforelasticsearch.security.configuration.DlsFlsFilterLeafReader.<init>(DlsFlsFilterLeafReader.java:214) ~[?:?]
	at com.amazon.opendistroforelasticsearch.security.configuration.DlsFlsFilterLeafReader$DlsFlsSubReaderWrapper.wrap(DlsFlsFilterLeafReader.java:259) ~[?:?]
	at org.apache.lucene.index.FilterDirectoryReader$SubReaderWrapper.wrap(FilterDirectoryReader.java:62) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at org.apache.lucene.index.FilterDirectoryReader.<init>(FilterDirectoryReader.java:91) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at com.amazon.opendistroforelasticsearch.security.configuration.DlsFlsFilterLeafReader$DlsFlsDirectoryReader.<init>(DlsFlsFilterLeafReader.java:280) ~[?:?]
	at com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityFlsDlsIndexSearcherWrapper.dlsFlsWrap(OpenDistroSecurityFlsDlsIndexSearcherWrapper.java:123) ~[?:?]
	at com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityIndexSearcherWrapper.wrap(OpenDistroSecurityIndexSearcherWrapper.java:89) ~[?:?]
	at org.elasticsearch.index.shard.IndexSearcherWrapper.wrap(IndexSearcherWrapper.java:77) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:1271) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:1260) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createSearchContext(SearchService.java:677) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createSearchContext(SearchService.java:668) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createContext(SearchService.java:631) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:596) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:387) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.access$100(SearchService.java:126) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:359) [elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:355) [elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService$4.doRun(SearchService.java:1107) [elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) [elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.8.1.jar:6.8.1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_252]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_252]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_252]
Caused by: java.util.concurrent.ExecutionException: ScriptException[runtime error]; nested: IllegalArgumentException[Fielddata is disabled on text fields by default. Set fielddata=true on [kubernetes.namespace_name] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.];
	at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:436) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.cache.bitset.BitsetFilterCache.getAndLoadIfNotPresent(BitsetFilterCache.java:134) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.cache.bitset.BitsetFilterCache.access$000(BitsetFilterCache.java:73) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.cache.bitset.BitsetFilterCache$QueryWrapperBitSetProducer.getBitSet(BitsetFilterCache.java:192) ~[elasticsearch-6.8.1.jar:6.8.1]
	at com.amazon.opendistroforelasticsearch.security.configuration.DlsFlsFilterLeafReader.<init>(DlsFlsFilterLeafReader.java:214) ~[?:?]
	at com.amazon.opendistroforelasticsearch.security.configuration.DlsFlsFilterLeafReader$DlsFlsSubReaderWrapper.wrap(DlsFlsFilterLeafReader.java:259) ~[?:?]
	at org.apache.lucene.index.FilterDirectoryReader$SubReaderWrapper.wrap(FilterDirectoryReader.java:62) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at org.apache.lucene.index.FilterDirectoryReader.<init>(FilterDirectoryReader.java:91) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at com.amazon.opendistroforelasticsearch.security.configuration.DlsFlsFilterLeafReader$DlsFlsDirectoryReader.<init>(DlsFlsFilterLeafReader.java:280) ~[?:?]
	at com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityFlsDlsIndexSearcherWrapper.dlsFlsWrap(OpenDistroSecurityFlsDlsIndexSearcherWrapper.java:123) ~[?:?]
	at com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityIndexSearcherWrapper.wrap(OpenDistroSecurityIndexSearcherWrapper.java:89) ~[?:?]
	at org.elasticsearch.index.shard.IndexSearcherWrapper.wrap(IndexSearcherWrapper.java:77) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:1271) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:1260) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createSearchContext(SearchService.java:677) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createSearchContext(SearchService.java:668) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createContext(SearchService.java:631) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:596) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:387) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.access$100(SearchService.java:126) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:359) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:355) [elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService$4.doRun(SearchService.java:1107) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.8.1.jar:6.8.1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_252]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_252]
	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_252]
Caused by: org.elasticsearch.script.ScriptException: runtime error
	at org.elasticsearch.painless.PainlessScript.convertToScriptException(PainlessScript.java:94) ~[?:?]
	at org.elasticsearch.painless.PainlessScript$Script.execute(String namespace = doc['kubernetes.namespace_name'][0];StringTokenizer st = new StringTokenizer(params.param1,",");while (st.hasMoreTokens()){if (st.nextToken().equalsIgnoreCase(namespace)){return true;}}return false;:161) ~[?:?]
	at org.elasticsearch.index.query.ScriptQueryBuilder$ScriptQuery$1$1.matches(ScriptQueryBuilder.java:187) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.apache.lucene.search.TwoPhaseIterator$TwoPhaseIteratorAsDocIdSetIterator.doNext(TwoPhaseIterator.java:89) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at org.apache.lucene.search.TwoPhaseIterator$TwoPhaseIteratorAsDocIdSetIterator.nextDoc(TwoPhaseIterator.java:77) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at org.apache.lucene.util.BitSet.or(BitSet.java:95) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at org.apache.lucene.util.FixedBitSet.or(FixedBitSet.java:271) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at org.apache.lucene.util.BitSet.of(BitSet.java:41) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at org.elasticsearch.index.cache.bitset.BitsetFilterCache.lambda$getAndLoadIfNotPresent$1(BitsetFilterCache.java:144) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:433) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.cache.bitset.BitsetFilterCache.getAndLoadIfNotPresent(BitsetFilterCache.java:134) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.cache.bitset.BitsetFilterCache.access$000(BitsetFilterCache.java:73) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.cache.bitset.BitsetFilterCache$QueryWrapperBitSetProducer.getBitSet(BitsetFilterCache.java:192) ~[elasticsearch-6.8.1.jar:6.8.1]
	at com.amazon.opendistroforelasticsearch.security.configuration.DlsFlsFilterLeafReader.<init>(DlsFlsFilterLeafReader.java:214) ~[?:?]
	at com.amazon.opendistroforelasticsearch.security.configuration.DlsFlsFilterLeafReader$DlsFlsSubReaderWrapper.wrap(DlsFlsFilterLeafReader.java:259) ~[?:?]
	at org.apache.lucene.index.FilterDirectoryReader$SubReaderWrapper.wrap(FilterDirectoryReader.java:62) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at org.apache.lucene.index.FilterDirectoryReader.<init>(FilterDirectoryReader.java:91) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at com.amazon.opendistroforelasticsearch.security.configuration.DlsFlsFilterLeafReader$DlsFlsDirectoryReader.<init>(DlsFlsFilterLeafReader.java:280) ~[?:?]
	at com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityFlsDlsIndexSearcherWrapper.dlsFlsWrap(OpenDistroSecurityFlsDlsIndexSearcherWrapper.java:123) ~[?:?]
	at com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityIndexSearcherWrapper.wrap(OpenDistroSecurityIndexSearcherWrapper.java:89) ~[?:?]
	at org.elasticsearch.index.shard.IndexSearcherWrapper.wrap(IndexSearcherWrapper.java:77) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:1271) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:1260) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createSearchContext(SearchService.java:677) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createSearchContext(SearchService.java:668) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createContext(SearchService.java:631) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:596) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:387) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.access$100(SearchService.java:126) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:359) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:355) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService$4.doRun(SearchService.java:1107) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.8.1.jar:6.8.1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_252]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_252]
	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_252]
Caused by: java.lang.IllegalArgumentException: Fielddata is disabled on text fields by default. Set fielddata=true on [kubernetes.namespace_name] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.
	at org.elasticsearch.index.mapper.TextFieldMapper$TextFieldType.fielddataBuilder(TextFieldMapper.java:779) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.fielddata.IndexFieldDataService.getForField(IndexFieldDataService.java:116) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.query.QueryShardContext.lambda$lookup$0(QueryShardContext.java:294) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.lookup.LeafDocLookup$1.run(LeafDocLookup.java:88) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.lookup.LeafDocLookup$1.run(LeafDocLookup.java:85) ~[elasticsearch-6.8.1.jar:6.8.1]
	at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_252]
	at org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:85) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:39) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.painless.PainlessScript$Script.execute(String namespace = doc['kubernetes.namespace_name'][0];StringTokenizer st = new StringTokenizer(params.param1,",");while (st.hasMoreTokens()){if (st.nextToken().equalsIgnoreCase(namespace)){return true;}}return false;:24) ~[?:?]
	at org.elasticsearch.index.query.ScriptQueryBuilder$ScriptQuery$1$1.matches(ScriptQueryBuilder.java:187) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.apache.lucene.search.TwoPhaseIterator$TwoPhaseIteratorAsDocIdSetIterator.doNext(TwoPhaseIterator.java:89) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at org.apache.lucene.search.TwoPhaseIterator$TwoPhaseIteratorAsDocIdSetIterator.nextDoc(TwoPhaseIterator.java:77) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at org.apache.lucene.util.BitSet.or(BitSet.java:95) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at org.apache.lucene.util.FixedBitSet.or(FixedBitSet.java:271) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at org.apache.lucene.util.BitSet.of(BitSet.java:41) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at org.elasticsearch.index.cache.bitset.BitsetFilterCache.lambda$getAndLoadIfNotPresent$1(BitsetFilterCache.java:144) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.cache.Cache.computeIfAbsent(Cache.java:433) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.cache.bitset.BitsetFilterCache.getAndLoadIfNotPresent(BitsetFilterCache.java:134) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.cache.bitset.BitsetFilterCache.access$000(BitsetFilterCache.java:73) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.cache.bitset.BitsetFilterCache$QueryWrapperBitSetProducer.getBitSet(BitsetFilterCache.java:192) ~[elasticsearch-6.8.1.jar:6.8.1]
	at com.amazon.opendistroforelasticsearch.security.configuration.DlsFlsFilterLeafReader.<init>(DlsFlsFilterLeafReader.java:214) ~[?:?]
	at com.amazon.opendistroforelasticsearch.security.configuration.DlsFlsFilterLeafReader$DlsFlsSubReaderWrapper.wrap(DlsFlsFilterLeafReader.java:259) ~[?:?]
	at org.apache.lucene.index.FilterDirectoryReader$SubReaderWrapper.wrap(FilterDirectoryReader.java:62) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at org.apache.lucene.index.FilterDirectoryReader.<init>(FilterDirectoryReader.java:91) ~[lucene-core-7.7.0.jar:7.7.0 8c831daf4eb41153c25ddb152501ab5bae3ea3d5 - jimczi - 2019-02-04 23:16:28]
	at com.amazon.opendistroforelasticsearch.security.configuration.DlsFlsFilterLeafReader$DlsFlsDirectoryReader.<init>(DlsFlsFilterLeafReader.java:280) ~[?:?]
	at com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityFlsDlsIndexSearcherWrapper.dlsFlsWrap(OpenDistroSecurityFlsDlsIndexSearcherWrapper.java:123) ~[?:?]
	at com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityIndexSearcherWrapper.wrap(OpenDistroSecurityIndexSearcherWrapper.java:89) ~[?:?]
	at org.elasticsearch.index.shard.IndexSearcherWrapper.wrap(IndexSearcherWrapper.java:77) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:1271) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:1260) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createSearchContext(SearchService.java:677) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createSearchContext(SearchService.java:668) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createContext(SearchService.java:631) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.createAndPutContext(SearchService.java:596) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:387) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService.access$100(SearchService.java:126) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:359) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService$2.onResponse(SearchService.java:355) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.search.SearchService$4.doRun(SearchService.java:1107) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:751) ~[elasticsearch-6.8.1.jar:6.8.1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.8.1.jar:6.8.1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_252]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_252]
	at java.lang.Thread.run(Thread.java:748) ~[?:1.8.0_252]
[2020-05-26T06:08:39,477][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-nq7v02wg-1] adding template [ocp-gen-app] for index patterns [app*]
[2020-05-26T06:08:39,537][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-nq7v02wg-1] adding template [ocp-gen-infra] for index patterns [infra*]
[2020-05-26T06:08:39,601][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-nq7v02wg-1] adding template [ocp-gen-audit] for index patterns [audit*]
[2020-05-26T06:09:11,395][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-nq7v02wg-1] adding template [ocp-gen-app] for index patterns [app*]
[2020-05-26T06:09:11,456][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-nq7v02wg-1] adding template [ocp-gen-infra] for index patterns [infra*]
[2020-05-26T06:09:11,515][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-nq7v02wg-1] adding template [ocp-gen-audit] for index patterns [audit*]
[2020-05-26T06:09:42,613][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-nq7v02wg-1] adding template [ocp-gen-app] for index patterns [app*]
[2020-05-26T06:09:42,674][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-nq7v02wg-1] adding template [ocp-gen-infra] for index patterns [infra*]
[2020-05-26T06:09:42,733][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-nq7v02wg-1] adding template [ocp-gen-audit] for index patterns [audit*]
[2020-05-26T06:10:14,068][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-nq7v02wg-1] adding template [ocp-gen-app] for index patterns [app*]
[2020-05-26T06:10:14,136][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-nq7v02wg-1] adding template [ocp-gen-infra] for index patterns [infra*]
[2020-05-26T06:10:14,197][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-nq7v02wg-1] adding template [ocp-gen-audit] for index patterns [audit*]

EO logs:
time="2020-05-26T05:44:51Z" level=warning msg="Unable to list existing templates in order to reconcile stale ones: There was an error retrieving list of templates. Error code: true, map[results:Open Distro not initialized]"
time="2020-05-26T05:44:51Z" level=error msg="Error creating index template for mapping app: There was an error creating index template ocp-gen-app. Error code: true, map[results:Open Distro not initialized]"
{"level":"error","ts":1590471891.5235353,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: There was an error creating index template ocp-gen-app. Error code: true, map[results:Open Distro not initialized]","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-05-26T05:44:53Z" level=warning msg="Unable to evaluate the number of replicas for index \"results\": Open Distro not initialized. cluster: elasticsearch, namespace: openshift-logging "
time="2020-05-26T05:44:53Z" level=error msg="Unable to evaluate number of replicas for index"
time="2020-05-26T05:44:53Z" level=warning msg="Unable to list existing templates in order to reconcile stale ones: There was an error retrieving list of templates. Error code: true, map[results:Open Distro not initialized]"
time="2020-05-26T05:44:53Z" level=error msg="Error creating index template for mapping app: There was an error creating index template ocp-gen-app. Error code: true, map[results:Open Distro not initialized]"
{"level":"error","ts":1590471893.576944,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: There was an error creating index template ocp-gen-app. Error code: true, map[results:Open Distro not initialized]","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-05-26T05:44:55Z" level=warning msg="Unable to evaluate the number of replicas for index \"results\": Open Distro not initialized. cluster: elasticsearch, namespace: openshift-logging "
time="2020-05-26T05:44:55Z" level=error msg="Unable to evaluate number of replicas for index"
time="2020-05-26T05:45:09Z" level=info msg="Updating status of Kibana"

elasticsearch CR:
    nodeSpec:
      resources:
        requests:
          memory: 1Gi
    nodes:
    - genUUID: nq7v02wg
      nodeCount: 1
      resources: {}
      roles:
      - client
      - data
      - master
      storage: {}
    redundancyPolicy: ZeroRedundancy
  status:
    cluster:
      activePrimaryShards: 9
      activeShards: 9
      initializingShards: 0
      numDataNodes: 1
      numNodes: 1
      pendingTasks: 0
      relocatingShards: 0
      status: green
      unassignedShards: 0
    clusterHealth: ""
    conditions: []
    nodes:
    - deploymentName: elasticsearch-cdm-nq7v02wg-1
      upgradeStatus: {}
    pods:
      client:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-nq7v02wg-1-965b6fcc8-sz52c
      data:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-nq7v02wg-1-965b6fcc8-sz52c
      master:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-nq7v02wg-1-965b6fcc8-sz52c
    shardAllocationEnabled: all

Version-Release number of selected component (if applicable):
quay.io/openshift/origin-elasticsearch-operator@sha256:678c264a4775cf7deebd95f3a01668a15170736c8ad75278509fe1c550a3a340
quay.io/openshift/origin-logging-elasticsearch6@sha256:9bb059fff59b3be69b10cfd99965267d3df033cacc1c75bf498fbd6c290bb812
quay.io/openshift/origin-elasticsearch-proxy@sha256:dc96a379bbfc5d315452a06a10e6856352659b5726aeedbf2447db343f2ec1ab

How reproducible:
Sometimes

Steps to Reproduce:
1. deploy CLO and EO
2. create clusterlogging instance with:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy: 
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 1w
    elasticsearch:
      nodeCount: 1
      redundancyPolicy: "ZeroRedundancy"
      resources:
        requests:
          memory: "1Gi"
      storage: {}
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  collection:
    logs:
      type: "fluentd"
      fluentd: {}
3. log in to the Kibana console as a cluster-admin user

Actual results:


Expected results:


Additional info:

Comment 1 Lukas Vlcek 2020-05-26 15:39:39 UTC
To me it seems that the filter used to implement DLS is operating on a field that is not indexed as keyword (but it should be, see https://github.com/openshift/origin-aggregated-logging/blob/master/elasticsearch/index_templates/com.redhat.viaq-openshift-project.template.json#L415)

Can you please pull index mappings of all indices in the ES cluster?

- You can either pull the complete mappings [1]
- Or you can use the more narrow field-mapping API to pull the mapping of the kubernetes.namespace_name field only [2]

  [1] https://www.elastic.co/guide/en/elasticsearch/reference/6.8/indices-get-mapping.html
  [2] https://www.elastic.co/guide/en/elasticsearch/reference/6.8/indices-get-field-mapping.html

      That would be something like:
      GET /_all/_mapping/_doc/field/kubernetes.namespace_name
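
      From inside the Elasticsearch pod, this could be run with the es_util helper (a sketch only, assuming es_util is available in the elasticsearch container; <elasticsearch-pod> is a placeholder for the actual pod name):

      $ oc exec <elasticsearch-pod> -n openshift-logging -c elasticsearch -- es_util --query=_all/_mapping/_doc/field/kubernetes.namespace_name | jq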

Also, it would be useful to see the complete logs of the ES nodes (specifically the master node). We need to see whether the index templates were in place when the individual indices were created.

Comment 2 Qiaoling Tang 2020-05-27 03:21:47 UTC
Created attachment 1692513 [details]
ES logs and mappings

I found that only `app-000001` couldn't be displayed.

I checked the aliases; the index app-000001 didn't have the .all alias:

$ oc exec elasticsearch-cdm-4026zv7n-3-64996d99cd-d2n72 -- es_util --query=*/_alias |jq
Defaulting container name to elasticsearch.
Use 'oc describe pod/elasticsearch-cdm-4026zv7n-3-64996d99cd-d2n72 -n openshift-logging' to see all of the containers in this pod.
{
  ".kibana_1": {
    "aliases": {
      ".kibana": {}
    }
  },
  "infra-000002": {
    "aliases": {
      ".all": {},
      "infra": {},
      "infra-write": {
        "is_write_index": false
      },
      "logs-infra": {}
    }
  },
  "infra-000001": {
    "aliases": {
      ".all": {},
      "infra": {},
      "infra-write": {
        "is_write_index": false
      },
      "logs-infra": {}
    }
  },
  "infra-000004": {
    "aliases": {
      ".all": {},
      "infra": {},
      "infra-write": {
        "is_write_index": false
      },
      "logs-infra": {}
    }
  },
  ".security": {
    "aliases": {}
  },
  ".kibana_324888819_qitang1": {
    "aliases": {}
  },
  "infra-000006": {
    "aliases": {
      ".all": {},
      "infra": {},
      "infra-write": {
        "is_write_index": false
      },
      "logs-infra": {}
    }
  },
  "app-000002": {
    "aliases": {
      ".all": {},
      "app": {},
      "app-write": {
        "is_write_index": true
      },
      "logs-app": {}
    }
  },
  "infra-000005": {
    "aliases": {
      ".all": {},
      "infra": {},
      "infra-write": {
        "is_write_index": false
      },
      "logs-infra": {}
    }
  },
  "app-000001": {
    "aliases": {
      "app": {},
      "app-write": {
        "is_write_index": false
      },
      "logs-app": {}
    }
  },
  "infra-000003": {
    "aliases": {
      ".all": {},
      "infra": {},
      "infra-write": {
        "is_write_index": false
      },
      "logs-infra": {}
    }
  },
  "infra-000007": {
    "aliases": {
      ".all": {},
      "infra": {},
      "infra-write": {
        "is_write_index": true
      },
      "logs-infra": {}
    }
  }
}
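
For reference only (a sketch, not something that was applied during this test, since the root cause is addressed in the following comments), the standard Elasticsearch aliases API request that would add the missing alias to app-000001 looks like this:

POST /_aliases
{
  "actions": [
    { "add": { "index": "app-000001", "alias": ".all" } }
  ]
}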

Comment 3 Lukas Vlcek 2020-05-27 12:56:56 UTC
The exception is most likely thrown from the execution of the scripted filter that is used to implement DLS (document-level security).
https://github.com/openshift/origin-aggregated-logging/blob/master/elasticsearch/sgconfig/roles.yml#L158
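
For readability, this is the Painless script from the stack trace above, reformatted:

String namespace = doc['kubernetes.namespace_name'][0];
StringTokenizer st = new StringTokenizer(params.param1, ",");
while (st.hasMoreTokens()) {
  if (st.nextToken().equalsIgnoreCase(namespace)) {
    return true;
  }
}
return false;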

It is caused by the fact that the kubernetes.namespace_name field does not have the expected mapping (it is not a keyword).

The reason is that we have a problem with indices being created before the required index templates (and their mappings) are seeded.

What is clear from the provided logs is that the index app-000001 was created BEFORE the important index template.
The following is a snippet from the ES node log:

[2020-05-27T01:57:44,120][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-4026zv7n-3] adding template [com.redhat.viaq-openshift-operations.template.json] for index patterns [infra-*, audit.infra-*]
[2020-05-27T01:57:44,347][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-4026zv7n-3] adding template [ocp-gen-app] for index patterns [app*]
[2020-05-27T01:57:44,386][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-4026zv7n-3] adding template [com.redhat.viaq-openshift-orphaned.template.json] for index patterns [.orphaned.*]
[2020-05-27T01:57:44,481][INFO ][o.e.c.m.MetaDataCreateIndexService] [elasticsearch-cdm-4026zv7n-3] [app-000001] creating index, cause [api], templates [ocp-gen-app], shards [3]/[1], mappings []
[2020-05-27T01:57:44,667][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-4026zv7n-3] adding template [com.redhat.viaq-openshift-project.template.json] for index patterns [app-*]
[2020-05-27T01:57:44,894][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-4026zv7n-3] adding template [common.settings.kibana.template.json] for index patterns [.kibana*]
[2020-05-27T01:57:44,937][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-4026zv7n-3] adding template [ocp-gen-infra] for index patterns [infra*]
[2020-05-27T01:57:45,070][INFO ][o.e.c.m.MetaDataCreateIndexService] [elasticsearch-cdm-4026zv7n-3] [infra-000001] creating index, cause [api], templates [com.redhat.viaq-openshift-operations.template.json, ocp-gen-infra], shards [3]/[1], mappings [_doc]
[2020-05-27T01:57:45,145][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-4026zv7n-3] adding template [common.settings.operations.orphaned.json] for index patterns [.orphaned*]
[2020-05-27T01:57:45,368][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-4026zv7n-3] adding template [common.settings.operations.template.json] for index patterns [.operations*]
[2020-05-27T01:57:45,540][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-4026zv7n-3] adding template [common.settings.project.template.json] for index patterns [project*]
[2020-05-27T01:57:45,666][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-4026zv7n-3] adding template [kibana_index_template:.kibana_*] for index patterns [.kibana_*]
[2020-05-27T01:57:45,720][INFO ][o.e.c.m.MetaDataCreateIndexService] [elasticsearch-cdm-4026zv7n-3] [.kibana_1] creating index, cause [api], templates [kibana_index_template:.kibana_*, common.settings.kibana.template.json], shards [1]/[1], mappings [doc]
[2020-05-27T01:57:45,770][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-4026zv7n-3] adding template [jaeger-service.json] for index patterns [*jaeger-service-*]
[2020-05-27T01:57:45,896][INFO ][o.e.c.r.a.AllocationService] [elasticsearch-cdm-4026zv7n-3] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.kibana_1][0]] ...]).
[2020-05-27T01:57:45,974][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-4026zv7n-3] adding template [jaeger-span.json] for index patterns [*jaeger-span-*]
[2020-05-27T01:57:46,193][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-4026zv7n-3] adding template [org.ovirt.viaq-collectd.template.json] for index patterns [project.ovirt-metrics-*]

We can see that the index was created like this:
[app-000001] creating index, cause [api], templates [ocp-gen-app], shards [3]/[1], mappings []

And the only matching index template available at that time was "ocp-gen-app".

About 200 ms later, a new index template was seeded:
adding template [com.redhat.viaq-openshift-project.template.json] for index patterns [app-*]

And this ^^ index template is needed to make sure the field kubernetes.namespace_name is mapped as "keyword".
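
One way to confirm what that template provides (a sketch only; <elasticsearch-pod> is a placeholder and es_util is the same helper used in comment 2) is to pull the template and inspect the same field:

$ oc exec <elasticsearch-pod> -c elasticsearch -- es_util --query=_template/com.redhat.viaq-openshift-project.template.json | jq '.["com.redhat.viaq-openshift-project.template.json"].mappings._doc.properties.kubernetes.properties.namespace_name'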


In this case, the index "app-000001" has the following mapping of the namespace_name field:

$ cat full_mapping | jq '.["app-000001"]["mappings"]["_doc"]["properties"]["kubernetes"]["properties"]["namespace_name"]'
{
  "type": "text",
  "fields": {
    "keyword": {
      "type": "keyword",
      "ignore_above": 256
    }
  }
}

(Note: the field is "text", i.e. it is tokenized. Just ignore that it has a sub-field called "keyword"; this is not important, as our DLS filter uses only kubernetes.namespace_name and not the kubernetes.namespace_name.keyword field.)

When we check any other index, we see a different mapping:

$ cat full_mapping | jq '.["app-000002"]["mappings"]["_doc"]["properties"]["kubernetes"]["properties"]["namespace_name"]'
{
  "type": "keyword",
  "norms": true
}

This is the main reason why we see the exception in ES logs complaining about:
ScriptException[runtime error]; nested: IllegalArgumentException[Fielddata is disabled on text fields by default. Set fielddata=true on [kubernetes.namespace_name] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead.];

Now, I can only speculate, but given that the index app-000001 was created with "cause [api]" and not "cause [auto(bulk api)]", I think it might have been created by IndexManagement and not by fluentd sending a bulk indexing request into the cluster. So perhaps we need to make sure that IndexManagement is not started until the cluster is properly seeded?
Maybe pending https://github.com/openshift/origin-aggregated-logging/pull/1893 can fix this?

Comment 4 Jeff Cantrill 2020-05-27 15:50:40 UTC
Please retest with the latest images. We believe this was resolved by https://bugzilla.redhat.com/show_bug.cgi?id=1836450

Comment 5 Qiaoling Tang 2020-06-01 09:47:37 UTC
I'm not able to reproduce this issue with elasticsearch-operator.4.5.0-202005301517

Comment 7 errata-xmlrpc 2020-07-13 17:41:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

