Description of problem:
OpenShift users encountered the unexpected confirmation "Apply these filters?" when switching between the indices listed in the left panel in Kibana. This reproduces https://bugzilla.redhat.com/show_bug.cgi?id=1388031

Version-Release number of selected component (if applicable):
ops registry:
openshift3/logging-elasticsearch d715f4d34ad4
openshift3/logging-kibana e0ab09c2cbeb
openshift3/logging-fluentd 47057624ecab
openshift3/logging-auth-proxy 139f7943475e
openshift3/logging-curator 7f034fdf7702

Steps to Reproduce:
1. Deploy logging 3.5.0 with the ansible scripts (json-file log driver configured)
2. Wait until the EFK stack is fully running, then log in to the Kibana UI
3. Switch between different indices in the left panel

Actual results:
The unexpected confirmation "Apply these filters?" appears, and log entries are not displayed.

Expected results:
The confirmation "Apply these filters?" should not appear, and log entries should be displayed.

Additional info:
The following workaround is effective: go to the Settings tab and select the index you want to view, then go back to the Discover tab and refresh the page.
We first need to be able to use a Kibana index pattern specifically for the common data model in the openshift-elasticsearch-plugin (https://github.com/fabric8io/openshift-elasticsearch-plugin/issues/63): we will need to generate JSON or YAML files from the common data model and change the plugin to load these patterns from the files. Then we need to add configmap entries for these files to the elasticsearch configmap so that they can be loaded and edited dynamically. Finally, we need to add support to the installer to add these files to the configmap, and changes to the es dc to mount them in the right places.
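A rough sketch of the configmap/mount steps described above. The configmap name, index-pattern file name, and mount path here are assumptions for illustration, not the final implementation:

```shell
# Add a generated index-pattern file to the existing Elasticsearch configmap
# (configmap name "logging-elasticsearch" and file name are hypothetical):
oc create configmap logging-elasticsearch \
    --from-file=com.redhat.viaq-openshift.index-pattern.json \
    --dry-run -o yaml | oc replace -f -

# Mount the configmap into the Elasticsearch deploymentconfig so the
# openshift-elasticsearch-plugin could load the pattern at runtime
# (mount path is an assumption):
oc set volume dc/logging-es --add --overwrite \
    --name=index-patterns \
    --type=configmap \
    --configmap-name=logging-elasticsearch \
    --mount-path=/usr/share/elasticsearch/index_patterns
```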
Fixed in https://github.com/openshift/origin-aggregated-logging/pull/340
Commit pushed to master at https://github.com/openshift/origin-aggregated-logging

https://github.com/openshift/origin-aggregated-logging/commit/eb83ac43a9c2dde14f747530facdf01133b35806

bug 1426061. Seed Kibana index mappings to avoid 'Apply these filter?' issue.
bug 1420217. Mute connection stack trace resulting from transient start issues.
Tested with these images on the ops registry. There seem to be no new images for es and kibana, and the issue reproduced the first time I switched to a second index. Could a developer help confirm whether this is the delivery to be tested?

openshift3/logging-curator 8cfcb23f26b6
openshift3/logging-elasticsearch d715f4d34ad4
openshift3/logging-kibana e0ab09c2cbeb
openshift3/logging-fluentd 47057624ecab
openshift3/logging-auth-proxy 139f7943475e

# docker inspect registry.ops.openshift.com/openshift3/logging-elasticsearch:3.5.0
[
    {
        "Id": "sha256:d715f4d34ad48729d132abc7d7bae70dd2d92bce762a5e18b80a8c3bcb03e223",
        "RepoTags": [
            "registry.ops.openshift.com/openshift3/logging-elasticsearch:3.5.0"
        ],
        "RepoDigests": [
            "registry.ops.openshift.com/openshift3/logging-elasticsearch@sha256:bcecbb31b01f6970ca37e16bdfd556746bf7de09aa803d99b78ca00d7a7a32a5"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2017-02-08T14:28:11.70709Z",
        ...
    }
]
The issue was reproduced with these images: when switching to a user project index for the very first time, the confirmation "Apply these filters?" appeared in the Kibana UI. The confirmation no longer appeared from the second switch to the same index onward.

On the ops registry:
openshift3/logging-fluentd 8cd33de7939c
openshift3/logging-curator 8cfcb23f26b6
openshift3/logging-elasticsearch d715f4d34ad4
openshift3/logging-kibana e0ab09c2cbeb
openshift3/logging-auth-proxy 139f7943475e

Tested on:
openshift v3.5.0.43
kubernetes v1.5.2+43a9be4
etcd 3.1.0
@Xia, The required changes were only made to the origin images. I will notify you once they are available for the enterprise images.
12706216 buildContainer (noarch) completed successfully

koji_builds:
  https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=542604

repositories:
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:rhaos-3.5-rhel-7-docker-candidate-20170307134801
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:3.5.0-6
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:3.5.0
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:latest
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:v3.5
Tested with logging-elasticsearch:3.5.0-9; no log entries are shown in the Kibana UI. The date field is wrong: it should be '@timestamp', but it is 'time' now. See the attached Kibana UI snapshot.

Using the following commands, I find that the data is generated:

oc exec ${es_pod} -- curl -s -k --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key https://logging-es:9200/project.*/_search?size=3\&sort=@timestamp:desc | python -mjson.tool

oc exec ${es_ops_pod} -- curl -s -k --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key https://logging-es-ops:9200/.operations.*/_search?size=3\&sort=@timestamp:desc | python -mjson.tool
Created attachment 1264718 [details] Kibana UI, the date field is wrong
Created attachment 1264719 [details] logs are generated in es pod
Image ID:
openshift3/logging-elasticsearch 3.5.0 9b824bebeb36
openshift3/logging-kibana 3.5.0 a6159c640977
openshift3/logging-fluentd 3.5.0 32a4ac0a3e18
openshift3/logging-curator 3.5.0 8cfcb23f26b6
openshift3/logging-auth-proxy 3.5.0 139f7943475e
I don't know why this behavior keeps regressing - this isn't the first time a 3.5 QE test has somehow used the 3.3 pre-common data model field naming. Maybe a fluentd 3.3.x image was somehow tagged with 3.5.0? I'm doing a 3.5 deployment now to see what's going on.
Works for me if I use the latest images tagged for v3.5. Here are the images I'm using:

# docker images | grep logging
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch  v3.5  9b824bebeb36  7 days ago   399.4 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-kibana         v3.5  a6159c640977  13 days ago  342.4 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-fluentd        v3.5  32a4ac0a3e18  13 days ago  232.5 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-curator        v3.5  8cfcb23f26b6  2 weeks ago  211.1 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-auth-proxy     v3.5  139f7943475e  8 weeks ago  220 MB

I'm going to do another deployment using logging image version 3.5.0 to see if there is a difference between the v3.5 and 3.5.0 images.
Works for me with the latest 3.5.0 tagged images:

# docker images | grep logging
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch  3.5.0  9b824bebeb36  7 days ago   399.4 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-kibana         3.5.0  a6159c640977  13 days ago  342.4 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-fluentd        3.5.0  32a4ac0a3e18  13 days ago  232.5 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-curator        3.5.0  8cfcb23f26b6  2 weeks ago  211.1 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-auth-proxy     3.5.0  139f7943475e  9 weeks ago  220 MB
I should say: what works for me is querying elasticsearch directly. I do not see fields like "kubernetes_container_name" or "time"; I see "kubernetes.container_name" and "@timestamp".

@Jeff - where are the new index template JSON files for the common data model? The openshift-elasticsearch-plugin was changed to load them, but where are the actual files?
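A sketch of the direct-Elasticsearch check described above, following the exec/cert conventions used earlier in this bug; the pod-name variable is a placeholder and paths may differ per deployment:

```shell
# Pull one document from a project index and look at its field names
# (${es_pod} is a placeholder for the actual elasticsearch pod name):
oc exec ${es_pod} -- curl -s -k \
    --cert /etc/elasticsearch/secret/admin-cert \
    --key /etc/elasticsearch/secret/admin-key \
    'https://logging-es:9200/project.*/_search?size=1' \
  | python -mjson.tool | grep -E '"@timestamp"|"kubernetes"|"time"|"kubernetes_container_name"'
```

With the common data model in place, the output should contain "@timestamp" and a nested "kubernetes" object (kubernetes.container_name), not the flat pre-CDM "time" / "kubernetes_container_name" fields.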
The issue described in comment #14 is now tracked separately by this new bz: https://bugzilla.redhat.com/show_bug.cgi?id=1434300
ok - so can this bug be VERIFIED?
Moving to 'ON_QA' since the issue from #14 is being tracked separately. @rich the files are: https://github.com/fabric8io/openshift-elasticsearch-plugin/tree/master/src/main/resources/io/fabric8/elasticsearch/plugin/kibana
Our verification work is blocked by this testblocker bug: https://bugzilla.redhat.com/show_bug.cgi?id=1434300
The defect is fixed when using playbooks from https://github.com/openshift/openshift-ansible with branch release-1.5, and ansible installed via yum (version ansible-2.2.1.0-2.el7.noarch).

Image id:
openshift3/logging-elasticsearch  3.5.0  5ff198b5c68d  4 hours ago  399.4 MB
openshift3/logging-kibana         3.5.0  a6159c640977  2 weeks ago  342.4 MB
openshift3/logging-fluentd        3.5.0  32a4ac0a3e18  2 weeks ago  232.5 MB
openshift3/logging-curator        3.5.0  8cfcb23f26b6  3 weeks ago  211.1 MB
openshift3/logging-auth-proxy     3.5.0  139f7943475e  9 weeks ago  220 MB
Will this fix make its way back to 3.4?
It will not. We might be able to provide instructions on how an admin could seed the kibana index with the appropriate document(s) [1]. It could get tedious, and it would require them to know the index to push the mapping to, which, I believe, would be '.kibana.$USER_UUID'. I think we would be better off adding this to a list of 'known issues' for logging, wherever that may be documented.

[1] https://github.com/fabric8io/openshift-elasticsearch-plugin/commit/71e4050701b6f6cb423bfa0737ab9db0ef8caf5a
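For illustration only, a hypothetical sketch of the manual seeding an admin might attempt on 3.4. The '.kibana.$USER_UUID' index follows the comment above; the project name, pod-name variable, and the minimal index-pattern document body (Kibana 4's "title"/"timeFieldName" fields) are assumptions, not tested instructions:

```shell
# Placeholders: the user's UUID and a hypothetical project name
uuid=<user-uuid>
# Push a minimal index-pattern document into the user's kibana index so the
# pattern exists before the user first selects it in the Discover tab:
oc exec ${es_pod} -- curl -s -k \
    --cert /etc/elasticsearch/secret/admin-cert \
    --key /etc/elasticsearch/secret/admin-key \
    -XPUT "https://logging-es:9200/.kibana.${uuid}/index-pattern/project.myproject.*" \
    -d '{"title": "project.myproject.*", "timeFieldName": "@timestamp"}'
```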
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:0884