Description of problem:
An LDAP user is not able to view logs in Kibana. The user is able to log in to the Kibana portal, but no logs are visible. After deleting and recreating the user multiple times, logs become visible for a while (sometimes days), then the same problem returns. The same user is able to view logs using `oc logs <pod>` in the respective project. The user is part of an LDAP group and has the admin role for that project. A cluster-admin user, who is also an LDAP user, is able to view logs successfully.

Version-Release number of selected component (if applicable):
OpenShift Container Platform 3.11
Logging image version: v3.11.59-2

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
No logs are visible for the LDAP user.

Expected results:
Logs should be visible.

Additional info:
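A quick way to double-check the setup described above from the CLI; the project name 'myproject', the user 'ldapuser', and the pod name are placeholders for illustration, not values taken from this report:

# Confirm the LDAP user has the admin role on the project
oc get rolebindings -n myproject

# As the LDAP user, confirm log access works outside Kibana
oc login -u ldapuser
oc logs <pod-name> -n myproject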
Please provide the following:
1. The user's name as returned from `oc whoami`
2. The output of [1] after logging in to Kibana.

Note: You must run this script within a minute of logging in; you may need to adjust the script to get the correct namespace.

[1] https://github.com/jcantrill/cluster-logging-tools/blob/master/scripts/view-es-permissions
I would recommend cloning this entire repo to one of their master nodes and running it from that directory. The only caveat is that if they deployed logging to a namespace other than the default (e.g. 'logging'), they should do something like the following in the 'scripts' directory:

echo logging > .logging-ns

The script runs against the first ES pod, and the permissions are valid for the entire ES cluster.
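A minimal sketch of the steps above, run from a master node; the namespace 'logging' is only an example and should match wherever logging was actually deployed:

# Clone the tools repo and work from its scripts directory
git clone https://github.com/jcantrill/cluster-logging-tools.git
cd cluster-logging-tools/scripts

# Only needed if logging is not in the default namespace
echo logging > .logging-ns

# Run within a minute of the user logging in to Kibana
# (check the script header for any required arguments, e.g. the username)
bash view-es-permissions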
Reviewing the logs, I see a stack trace when trying to seed the dashboards. This would explain why permissions are not changing. Can you provide details about this cluster:

* Was this cluster upgraded from a previous version (e.g. 3.x to 3.y), or even a minor upgrade (3.11.x to 3.11.y)? If so, do we know which versions?
* Are any users able to view logs using Kibana?
* Are only admin users unable to view logs using Kibana?
I believe the issue is that the defaultIndex pattern in the user's Kibana profile is null. Based on the info from #c7, it looks like this user would be considered an operations user because they are able to see the 'default' namespace. If they can answer 'oc can-i view pods/log', then they are an operations user; otherwise they are a non-operations user.

The only workaround I can devise until the PR lands is to update the config object. The call depends on whether they are an ops user or not:

oc -n openshift-logging exec -c elasticsearch $pod -- es_util --query=$kibindex/5.6.13 -XPUT -d '{"defaultIndex":""}'

where:
$pod is one of the ES pods
$kibindex is '.kibana' for operations users and the output of [1] for non-operations users

[1] https://github.com/jcantrill/cluster-logging-tools/blob/master/scripts/kibana-index-name
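A filled-in sketch of the workaround, with the es_util path copied verbatim from the command above; the ES pod label selector (component=es) and the kibana-index-name invocation are assumptions, so adjust them to the actual deployment:

# Any ES pod will do; the change replicates across the ES cluster
pod=$(oc -n openshift-logging get pods -l component=es -o jsonpath='{.items[0].metadata.name}')

# Operations user: the Kibana index is '.kibana'
kibindex=.kibana

# Non-operations user: derive the per-user index with the script in [1], e.g.
# kibindex=$(bash kibana-index-name <username>)   # hypothetical invocation; check the script

# Set defaultIndex to an empty string in the user's Kibana config object
oc -n openshift-logging exec -c elasticsearch "$pod" -- es_util \
  --query="$kibindex/5.6.13" -XPUT -d '{"defaultIndex":""}'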
(In reply to Rajnikant from comment #13)
> Hi,
>
> Latest comment not clear to me.
> operations user:- Is this stands for cluster-admin user.
>
> Only cluster admin user is able to view logs in any/default project. But
> user having admin role to a project not able to view logs.

Likely, but not necessarily. It is anyone who can answer 'oc can-i view pods/log -n default'. Note the namespace was missing previously.

> Issue is on production cluster.
>
> Is there any impact of existing user, after applying this with es pod. If
> there is any impact, how we can revert such changes in case of any issue.
>
> oc -n openshift-logging exec -c elasticsearch $pod -- es_util
> --query=$kibindex/5.6.13 -XPUT -d '{"defaultIndex":""}'

Yes, this will impact the user for whom you are running this command. We do NOT want to revert the change, because the fact that defaultIndex is null is the problem; here we are setting it to an empty string. The error occurs because there is code in Elasticsearch that checks for null and throws an error if it is null. The worst case is that when the user next opens the Kibana UI, they will need to set the defaultIndex from the Settings tab.

> Should we apply on all es deployment config.

No. Elasticsearch is a storage cluster and changes are replicated as required.
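If there is concern about impact, the user's Kibana index can be inspected read-only before and after the change; a small sketch, assuming the same $pod and $kibindex as above and that es_util simply forwards the query path and curl options:

# Read-only: dump the documents in the user's Kibana index and check
# the config document's defaultIndex value before and after the PUT
oc -n openshift-logging exec -c elasticsearch "$pod" -- es_util \
  --query="$kibindex/_search?pretty" -XGET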
Per #c21, moving this bug to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0636