Red Hat Bugzilla – Bug 1476062
kibana index field list being reset after some period of time
Last modified: 2017-10-25 09:04:36 EDT
It appears the Kibana support from the openshift-aggregated-logging component is overwriting Kibana index patterns that have updated field data.
A user updated the .all index pattern to have it re-load all the known fields by going to "Settings" -> "Indices" -> ".all" -> "Refresh field list".
About 6 hours later, when she reloaded the dashboard she had built relying on the newly discovered fields, they had disappeared, replaced with an error like (not captured exactly): "Unknown field kubernetes.labels.deploymentconfigname.raw".
Whatever is reloading index patterns should not be overwriting existing patterns.
(In reply to Peter Portante from comment #0)
> It appears the Kibana support from the openshift-aggregated-logging
> component is overwriting Kibana index patterns that have updated field data.
> A user updated the .all index pattern to have it re-load all the known
> fields by going to "Settings" -> "Indices" -> ".all" -> "Refresh field list".
> About 6 hours later, when she reloaded her page dashboard she built relying
> on the newly discovered fields, they had disappeared replaced with an error
> like (failed to capture it exactly): "Unknown field
Was it this or "kubernetes.labels.deploymentconfig.raw"? Because the latter was added to the list of standard index pattern fields for use in the kubernetes dashboards in 3.6.
> What ever is reloading index patterns should not be over-writing existing
It is really not about a specific field. It is about the fact that fields are added to the indexes dynamically, and the mappings are updated dynamically, but the Kibana index patterns are being continually reset to the defaults.
We should provide defaults when we create the index pattern, but how are we helping the customer if we reset to the defaults, preventing them from using existing fields?
The openshift-elasticsearch-plugin rebuilds this alias every time a user's kibana index is rebuilt. There are pending changes with the move to the 'kibana index mode' which should probably be discussed.
(In reply to Peter Portante from comment #2)
> It is really not about a specific field. It is about the fact that fields
> are added to the indexes dynamically, and the mappings updated dynamically,
> but kibana index patterns are being reset to the defaults continually.
Right. I understand. I was merely attempting to see if this was going to be a problem in this specific case for 3.6.
> We should provide defaults when we create the index pattern, but how are
> helping the customer if we reset to the defaults preventing them from using
> existing fields?
Of course. I was not implying that we should not help the customer.
Could you provide some additional info on how we might reproduce this? I have not been successful reproducing it using code from master. Looking at the code, it seems we should run into this issue every time the in-memory cache expires; we add mappings for a given index when they are not found in the cache. Is it related to a new project index being created where the given field is not yet in the mappings? That would occur nightly.
What do you mean by adding mappings to an index? This does not have to do with index mappings, either the mappings contained in the index, or the mappings stored in an index template.
This has to do with the .kibana* documents which contain the index-pattern objects as maintained by Kibana itself.
If you are able to refresh the field list as described in the original description, and you see that the number of fields change, then nothing should change that index-pattern in the .kibana* index until the user comes back and refreshes those fields.
Does that make it easier to reproduce?
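One way to watch for the reset is to inspect the saved index-pattern object directly, sketched here as a Kibana Dev Tools-style request. The index name ".kibana" and document id ".all" are assumptions; the actual names depend on the deployment and the kibana index mode.

```
# Fetch the saved index-pattern object for ".all" from Kibana's own index.
# (Index name ".kibana" and id ".all" are illustrative; adjust per deployment.)
GET .kibana/index-pattern/.all

# The "fields" property of the returned document is a JSON-encoded array.
# After "Refresh field list" its length should grow; if a background process
# is resetting the pattern, the length later drops back to the default set.
```

Comparing the field count before and after the roughly 6-hour window from the original description should show whether the pattern document is being rewritten.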
Jeff, did you rebuild the elasticsearch image containing this fix for 3.6 and 3.7?
@rich I did not, since it has not merged in master, nor been backported. I was waiting to add a test, as you requested.
*** Bug 1491635 has been marked as a duplicate of this bug. ***
Commit pushed to master at https://github.com/openshift/origin-aggregated-logging
bump openshift-elasticsearch-plugin to 184.108.40.206 to fix:
bug 1476062. Allow write to kibana index
bug 1491227. Modify request if user has back slash
bug 1490719. Operations missing from .all alias
Can you please provide additional information about the entire stack? Logs from Elasticsearch may provide additional clues. Consider using  which will gather logs and other data for you.
Are searches case sensitive? Can you comment on whether this approach is valid? Could it be that there are actually no additional logs in the time range? Is there a quick query we may run to count documents in a given index over a given time range?
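On counting documents over a time range: a sketch using the Elasticsearch _count API with a range filter. The index pattern "project.myproject.*" and the "@timestamp" field are assumptions; adjust them to the actual project name and timestamp field in your schema.

```
POST project.myproject.*/_count
{
  "query": {
    "range": {
      "@timestamp": {
        "gte": "2017-08-01T00:00:00Z",
        "lt":  "2017-08-02T00:00:00Z"
      }
    }
  }
}
```

If the returned "count" is 0, there were simply no documents in that window, which would explain an empty search result rather than a field-list problem.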
Agree with Comment 16. I set up another logging system and configured Kibana as in comment 13. After two days, Kibana still works well, so moving the bug to VERIFIED.
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.
For information on the advisory, and where to find the updated
files, follow the link below.
If the solution does not work for you, open a new bug report.