Bug 1426061 - OpenShift users encountered the confirmation "Apply these filters?" when switching between indices listed in the left panel in Kibana
Summary: OpenShift users encountered the confirmation "Apply these filters?" when switching between indices listed in the left panel in Kibana
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
: ---
Assignee: Jeff Cantrill
QA Contact: Xia Zhao
URL:
Whiteboard:
Depends On:
Blocks: 1444106
 
Reported: 2017-02-23 06:59 UTC by Xia Zhao
Modified: 2020-07-16 09:14 UTC
CC List: 13 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: No default field mappings were being applied. Consequence: The user saw the confirmation message "Apply these filters?". Fix: Update the plugin to provide the necessary field mappings, and also allow the mappings to be provided to Elasticsearch during deployment in case they become out of date or need a quicker fix. Result: The message is no longer presented to the user.
Clone Of:
: 1444106 (view as bug list)
Environment:
Last Closed: 2017-04-12 19:13:34 UTC
Target Upstream Version:
Embargoed:


Attachments
Kibana UI, the data field is wrong (156.40 KB, image/png)
2017-03-20 08:55 UTC, Junqi Zhao
no flags Details
logs are generated in es pod (68.38 KB, text/plain)
2017-03-20 09:06 UTC, Junqi Zhao
no flags Details


Links
System ID: Red Hat Product Errata RHBA-2017:0884
Private: 0
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat OpenShift Container Platform 3.5 RPM Release Advisory
Last Updated: 2017-04-12 22:50:07 UTC

Description Xia Zhao 2017-02-23 06:59:13 UTC
Description of problem:
OpenShift users encountered the unexpected confirmation "Apply these filters?" when switching between indices listed in the left panel in Kibana.
This is a reproduction of https://bugzilla.redhat.com/show_bug.cgi?id=1388031

Version-Release number of selected component (if applicable):
ops registry:
openshift3/logging-elasticsearch    d715f4d34ad4
openshift3/logging-kibana    e0ab09c2cbeb
openshift3/logging-fluentd    47057624ecab
openshift3/logging-auth-proxy    139f7943475e
openshift3/logging-curator    7f034fdf7702

Steps to Reproduce:
1. Deploy logging 3.5.0 with the ansible scripts (the json-file log driver was configured); a rough inventory/playbook sketch follows after these steps
2. Wait until the EFK stack pods are all running, then log in to the Kibana UI
3. Switch between different indices in the left panel
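
For reference, the ansible-based deployment in step 1 looks roughly like the sketch below; the playbook path and inventory variable values are assumptions (based on the openshift-ansible logging role) and may differ between branches.

# Rough sketch only -- playbook path and variable values are assumptions and may
# differ between openshift-ansible branches.
# Inventory ([OSEv3:vars]) variables that enable the logging stack:
#   openshift_logging_install_logging=true
#   openshift_logging_image_prefix=registry.ops.openshift.com/openshift3/
#   openshift_logging_image_version=3.5.0
ansible-playbook -i hosts \
    /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml
# The json-file log driver is configured on each node's docker daemon, e.g.
#   OPTIONS='--log-driver=json-file --log-opt max-size=50m' in /etc/sysconfig/docker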

Actual results:
Encountered the unexpected confirmation "Apply these filters?"; log entries did not display.

Expected results:
Should not encounter the unexpected confirmation "Apply these filters?"; log entries should display.

Additional info:
The following workaround is effective: go to the Settings tab and select the index you want to view, then go back to the Discover tab and refresh the page.

Comment 1 Rich Megginson 2017-02-23 16:36:23 UTC
We first need to be able to use a Kibana index pattern specifically for the common data model in the openshift-elasticsearch-plugin (https://github.com/fabric8io/openshift-elasticsearch-plugin/issues/63). We will need to generate JSON or YAML files from the common data model, and change the plugin to be able to load these patterns from those files.

Then we need to add configmap entries for these files to the elasticsearch configmap so that they can be loaded dynamically and edited dynamically.

Then we need to add support to the installer to add these files to the configmap, and changes to the es dc to mount these in the right places.
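
For illustration, the configmap and mount steps might look roughly like the sketch below; the configmap name, file name, and mount path are placeholders, not the actual installer change.

# Placeholder names throughout -- not the actual installer change.
# Create a configmap from a generated index-pattern file:
oc create configmap logging-es-index-patterns --from-file=openshift-index-pattern.json
# Mount it into the es dc so the plugin can load it and admins can edit it:
oc set volume dc/logging-es --add --type=configmap \
    --configmap-name=logging-es-index-patterns \
    --mount-path=/usr/share/elasticsearch/index_patterns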

Comment 3 Jeff Cantrill 2017-03-01 14:19:58 UTC
Fixed in https://github.com/openshift/origin-aggregated-logging/pull/340

Comment 4 openshift-github-bot 2017-03-02 17:13:51 UTC
Commit pushed to master at https://github.com/openshift/origin-aggregated-logging

https://github.com/openshift/origin-aggregated-logging/commit/eb83ac43a9c2dde14f747530facdf01133b35806
bug 1426061. Seed Kibana index mappings to avoid 'Apply these filter?' issue.
bug 1420217. Mute connection stack trace resulting from transient start issues.

Comment 5 Xia Zhao 2017-03-03 06:50:28 UTC
Tested with these images on the ops registry; it seems there are no new images for es and kibana, and the issue reproduced the first time I switched to a second index. Could a developer help confirm whether this is the delivery to be tested?

openshift3/logging-curator    8cfcb23f26b6
openshift3/logging-elasticsearch    d715f4d34ad4
openshift3/logging-kibana    e0ab09c2cbeb
openshift3/logging-fluentd    47057624ecab
openshift3/logging-auth-proxy    139f7943475e


# docker inspect registry.ops.openshift.com/openshift3/logging-elasticsearch:3.5.0
[
    {
        "Id": "sha256:d715f4d34ad48729d132abc7d7bae70dd2d92bce762a5e18b80a8c3bcb03e223",
        "RepoTags": [
            "registry.ops.openshift.com/openshift3/logging-elasticsearch:3.5.0"
        ],
        "RepoDigests": [
            "registry.ops.openshift.com/openshift3/logging-elasticsearch@sha256:bcecbb31b01f6970ca37e16bdfd556746bf7de09aa803d99b78ca00d7a7a32a5"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2017-02-08T14:28:11.70709Z",
...  }
}

Comment 6 Xia Zhao 2017-03-07 05:16:14 UTC
The issue was reproduced with these images: when switching to a user project index for the very first time, the confirmation "Apply these filters?" appeared on the Kibana UI. The confirmation no longer appeared from the second switch to the same index onward:

On ops registry:
openshift3/logging-fluentd    8cd33de7939c
openshift3/logging-curator    8cfcb23f26b6
openshift3/logging-elasticsearch    d715f4d34ad4
openshift3/logging-kibana    e0ab09c2cbeb
openshift3/logging-auth-proxy    139f7943475e

Tested on openshift v3.5.0.43
kubernetes v1.5.2+43a9be4
etcd 3.1.0

Comment 7 Jeff Cantrill 2017-03-07 14:39:01 UTC
@Xia,

The required changes were only made to the origin images.  I will notify you once they are available for the enterprise images.

Comment 8 Jeff Cantrill 2017-03-07 19:07:30 UTC
12706216 buildContainer (noarch) completed successfully
koji_builds:
  https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=542604
repositories:
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:rhaos-3.5-rhel-7-docker-candidate-20170307134801
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:3.5.0-6
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:3.5.0
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:latest
  brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:v3.5

Comment 14 Junqi Zhao 2017-03-20 08:53:59 UTC
Tested with logging-elasticsearch:3.5.0-9: no log entries are shown in the Kibana UI, and the data field is wrong; for example, it should be '@timestamp', but it is 'time' now. See the attached Kibana UI snapshot.

Using the following commands, I can see that the data is being generated:
oc exec ${es-pod} -- curl -s -k --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key https://logging-es:9200/project.*/_search?size=3\&sort=@timestamp:desc | python -mjson.tool

oc exec ${es-ops-pod} -- curl -s -k --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key https://logging-es-ops:9200/.operations.*/_search?size=3\&sort=@timestamp:desc | python -mjson.tool

Comment 15 Junqi Zhao 2017-03-20 08:55:36 UTC
Created attachment 1264718 [details]
Kibana UI, the data field is wrong

Comment 16 Junqi Zhao 2017-03-20 09:06:24 UTC
Created attachment 1264719 [details]
logs are generated in es pod

Comment 17 Junqi Zhao 2017-03-20 09:09:10 UTC
Image ID:
openshift3/logging-elasticsearch   3.5.0               9b824bebeb36
openshift3/logging-kibana          3.5.0               a6159c640977
openshift3/logging-fluentd         3.5.0               32a4ac0a3e18
openshift3/logging-curator         3.5.0               8cfcb23f26b6
openshift3/logging-auth-proxy      3.5.0               139f7943475e

Comment 20 Rich Megginson 2017-03-20 21:23:18 UTC
I don't know why this behavior keeps regressing - this isn't the first time a 3.5 QE test has somehow used the 3.3 pre-common data model field naming.  Maybe a fluentd 3.3.x image was somehow tagged with 3.5.0?  I'm doing a 3.5 deployment now to see what's going on.

Comment 21 Rich Megginson 2017-03-20 22:01:35 UTC
works for me if I use the latest images tagged for v3.5

Here are the images I'm using:

# docker images|grep logging
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch   v3.5                9b824bebeb36        7 days ago          399.4 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-kibana          v3.5                a6159c640977        13 days ago         342.4 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-fluentd         v3.5                32a4ac0a3e18        13 days ago         232.5 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-curator         v3.5                8cfcb23f26b6        2 weeks ago         211.1 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-auth-proxy      v3.5                139f7943475e        8 weeks ago         220 MB

I'm going to do another deployment using logging image version 3.5.0 to see if there is a difference between the v3.5 and 3.5.0 images.

Comment 22 Rich Megginson 2017-03-20 23:21:55 UTC
works for me with latest 3.5.0 tagged images:

# docker images| grep logging
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch   3.5.0               9b824bebeb36        7 days ago          399.4 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-kibana          3.5.0               a6159c640977        13 days ago         342.4 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-fluentd         3.5.0               32a4ac0a3e18        13 days ago         232.5 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-curator         3.5.0               8cfcb23f26b6        2 weeks ago         211.1 MB
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-auth-proxy      3.5.0               139f7943475e        9 weeks ago         220 MB

Comment 23 Rich Megginson 2017-03-20 23:27:25 UTC
I should say - what works for me is querying elasticsearch directly - I do not see fields like "kubernetes_container_name" or "time" - I see "kubernetes.container_name" and "@timestamp"

@Jeff - where are the new index template JSON files for the common data model?  The openshift-elasticsearch-plugin was changed to load them, but where are the actual files?

Comment 24 Xia Zhao 2017-03-21 08:45:43 UTC
The issue described in comment #14 is now tracked separately by this new bz: https://bugzilla.redhat.com/show_bug.cgi?id=1434300

Comment 25 Rich Megginson 2017-03-21 14:54:26 UTC
ok - so can this bug be VERIFIED?

Comment 26 Jeff Cantrill 2017-03-21 18:14:28 UTC
Moving to 'ON_QA' since the issue from comment #14 is being tracked separately. @Rich, the files are: https://github.com/fabric8io/openshift-elasticsearch-plugin/tree/master/src/main/resources/io/fabric8/elasticsearch/plugin/kibana

Comment 27 Xia Zhao 2017-03-22 01:48:24 UTC
Our verification work is blocked by this testblocker bug: https://bugzilla.redhat.com/show_bug.cgi?id=1434300

Comment 28 Junqi Zhao 2017-03-23 03:46:30 UTC
The defect is fixed using playbooks from https://github.com/openshift/openshift-ansible (branch release-1.5); ansible was installed via yum, version ansible-2.2.1.0-2.el7.noarch.

Image id:
openshift3/logging-elasticsearch   3.5.0               5ff198b5c68d        4 hours ago         399.4 MB
openshift3/logging-kibana          3.5.0               a6159c640977        2 weeks ago         342.4 MB
openshift3/logging-fluentd         3.5.0               32a4ac0a3e18        2 weeks ago         232.5 MB
openshift3/logging-curator         3.5.0               8cfcb23f26b6        3 weeks ago         211.1 MB
openshift3/logging-auth-proxy      3.5.0               139f7943475e        9 weeks ago         220 MB

Comment 29 Peter Portante 2017-03-30 10:21:19 UTC
Will this fix make its way back to 3.4?

Comment 30 Jeff Cantrill 2017-03-30 12:52:55 UTC
It will not.  We might be able to provide instructions on how an admin could seed the kibana index with the appropriate document(s)[1]. It could get tedious and would require them to know the index to push the mapping into, which, I believe, would be '.kibana.$USER_UUID'.

I think we would do better to add this to a list of 'known issues' for logging, wherever that may be documented.

[1] https://github.com/fabric8io/openshift-elasticsearch-plugin/commit/71e4050701b6f6cb423bfa0737ab9db0ef8caf5a
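
For illustration only, manually seeding one index pattern for a user might look roughly like the following; the project name, the $USER_UUID value, and the minimal document body are hypothetical placeholders (the real field mappings come from the plugin resources linked in comment 26).

# Hypothetical sketch -- project name, user UUID, and document body are placeholders.
oc exec ${es-pod} -- curl -s -k --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key \
  -XPUT "https://logging-es:9200/.kibana.$USER_UUID/index-pattern/project.myproject.*" \
  -d '{"title": "project.myproject.*", "timeFieldName": "@timestamp"}'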

Comment 35 errata-xmlrpc 2017-04-12 19:13:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0884

