Bug 1679613 - Handle default index pattern for Kibana when value is null
Summary: Handle default index pattern for Kibana when value is null
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 3.11.z
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-02-21 15:01 UTC by Rajnikant
Modified: 2019-04-11 05:38 UTC
CC List: 6 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The defaultIndex in the kibana/config entry is null.
Consequence: The seeding process fails and the user is presented with a white screen.
Fix: Evaluate the value for 'defaultIndex' and return a default if null.
Result: The Kibana seeding process completes successfully.
Clone Of:
Environment:
Last Closed: 2019-04-11 05:38:34 UTC
Target Upstream Version:
Embargoed:




Links
Github fabric8io/openshift-elasticsearch-plugin pull 169 (closed): bug 1679613. Handle null defaultIndex (last updated 2020-09-23 02:28:25 UTC)
Github openshift/origin-aggregated-logging pull 1562 (closed): [release-3.11] bug 1679613 by defaulting a value if null (last updated 2020-09-23 02:28:25 UTC)
Red Hat Product Errata RHBA-2019:0636 (last updated 2019-04-11 05:38:43 UTC)

Description Rajnikant 2019-02-21 15:01:03 UTC
Description of problem:
LDAP user is not able to view logs in Kibana.

Version-Release number of selected component (if applicable):
OpenShift Container Platform 3.11


The admin user is able to log in to the Kibana portal, but no logs are visible in Kibana.
After deleting and recreating the user several times, logs become visible for a while (sometimes days), and then the same problem returns.

The same user is able to view logs using `oc logs <pod>` in the respective project.

The user is part of an LDAP group and has the admin role for that project.

A cluster-admin user who is also part of LDAP is able to view logs successfully.

Logging image version: v3.11.59-2
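
For reference, a hedged sketch of how the running logging image versions can be read from the pods (the openshift-logging namespace is an assumption):

# list each logging pod and the image of its first container
oc -n openshift-logging get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'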

How reproducible:


Steps to Reproduce:
1.
2.
3.

Actual results:
No logs visible for the LDAP user.

Expected results:
Logs should be visible to the user.

Additional info:

Comment 2 Jeff Cantrill 2019-02-25 23:57:54 UTC
Please provide the following:

1. User's name as is returned from `oc whoami`
2. The output of [1] after logging in to Kibana.  Note: You must run this script within a minute of logging in; you may need to adjust the script to get the correct namespace.

[1] https://github.com/jcantrill/cluster-logging-tools/blob/master/scripts/view-es-permissions

Comment 4 Jeff Cantrill 2019-02-26 21:29:48 UTC
I would recommend cloning this entire repo to one of their master nodes and running it from that directory.  The only caveat is that if they deployed to a different namespace (e.g. logging) than the default, they should do something like the following in the 'scripts' directory:

echo logging > .logging-ns

The script will run against the first ES pod and the permissions are valid for the entire ES cluster.
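
To make the steps from comments 2 and 4 concrete, a hedged sketch of setting this up on a master node (the repository layout, the default openshift-logging namespace, and the script's invocation are assumptions):

# clone the helper repo and work from its scripts directory
git clone https://github.com/jcantrill/cluster-logging-tools.git
cd cluster-logging-tools/scripts
# only needed when logging was deployed to a non-default namespace such as 'logging'
echo logging > .logging-ns
# run within about a minute of the user logging in to Kibana; check the script for any arguments it expects
./view-es-permissions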

Comment 8 Jeff Cantrill 2019-03-04 20:50:53 UTC
Reviewing the logs I see a stack trace when trying to seed the dashboards.  This would explain why permissions are not changing.  Can you provide details about this cluster:

* Was this cluster upgraded from a previous version (e.g. 3.x to 3.y) or even a minor upgrade (3.11.x to 3.11.y)? If so, do we know which versions?
* Are any users able to view logs using Kibana?
* Are only admin users unable to view logs using Kibana?
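
Regarding the seeding stack trace mentioned above, a hedged sketch of how one might pull it from the Elasticsearch pod logs (the namespace, label selector, and search pattern are assumptions, not from this report):

# pick one of the Elasticsearch pods and search its log for the seeding failure
pod=$(oc -n openshift-logging get pods -l component=es -o jsonpath='{.items[0].metadata.name}')
oc -n openshift-logging logs -c elasticsearch $pod | grep -i -A 20 'exception'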

Comment 12 Jeff Cantrill 2019-03-13 19:06:43 UTC
I believe the issue is that the defaultIndex pattern in the user's Kibana profile is null. Based on the info from #c7, it looks like this user would be considered an operations user because they are able to see the 'default' namespace.  If they can answer 'oc can-i view pods/log' with yes, they are an operations user; otherwise they are a non-operations user.  The only workaround I can devise until the PR lands is to update the config object.  The following call depends on whether they are an ops user or not.

oc -n openshift-logging exec -c elasticsearch $pod -- es_util --query=$kibindex/5.6.13 -XPUT -d '{"defaultIndex":""}'

where:

$pod is one of the ES pods
$kibindex is '.kibana' for operations users and the output of [1] for non-operations users

[1] https://github.com/jcantrill/cluster-logging-tools/blob/master/scripts/kibana-index-name
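
As a hedged illustration only (the pod label selector and the script invocation are assumptions, not from this comment), the variables above might be resolved like this:

# pick one of the Elasticsearch pods
pod=$(oc -n openshift-logging get pods -l component=es -o jsonpath='{.items[0].metadata.name}')
# operations users share the '.kibana' index
kibindex=.kibana
# non-operations users get a per-user index; the linked kibana-index-name script derives it
# (exact invocation assumed): kibindex=$(./kibana-index-name <username>)
oc -n openshift-logging exec -c elasticsearch $pod -- es_util --query=$kibindex/5.6.13 -XPUT -d '{"defaultIndex":""}'

Note that depending on the Kibana 5.x document layout, the path may need to include the config type (e.g. $kibindex/config/5.6.13); verify the document path against the actual index before applying.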

Comment 14 Jeff Cantrill 2019-03-14 17:56:55 UTC
(In reply to Rajnikant from comment #13)
> Hi,
> 
> Latest comment not clear to me.  
> operations user:- Is this stands for cluster-admin user. 
> 
> Only cluster admin user is able to view logs in any/default project. But
> user having admin role to a project not able to view logs. 

Likely but not necessarily.  It's anyone who can answer yes to 'oc can-i view pods/log -n default'.  Note the namespace was missing previously.

> 
> Issue is on production cluster. 
> 
> Is there any impact of existing user, after applying this with es pod. If
> there is any impact, how we can revert such changes in case of any issue. 
> 
> oc -n openshift-logging exec -c elasticsearch $pod -- es_util
> --query=$kibindex/5.6.13 -XPUT -d '{"defaultIndex":""}'

Yes, this will impact 'the user' for whom you are running this command.  We do NOT want to revert the change, because the null value is the problem; here we are setting it to an empty string.  The error occurs because there is code in ES that evaluates defaultIndex and throws an error if it is null.  Worst case, when the user next opens the Kibana UI they will need to set the defaultIndex from the Settings tab.

> 
> Should we apply on all es deployment config.

No.  Elasticsearch is a storage cluster and changes are replicated as required.

Comment 22 Qiaoling Tang 2019-03-25 03:19:29 UTC
Per #c21, move this bug to VERIFIED.

Comment 34 errata-xmlrpc 2019-04-11 05:38:34 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0636

