Description of problem:

In OCP 4.4, with the EFK stack installed, a user can log in to Kibana and, without logging out, see the following error once the session has expired:

~~~
Error: [security_exception] no permissions for [indices:data/read/field_stats] and User [name=CN=system.logging.kibana,OU=OpenShift,O=Logging, roles=[]]
~~~

Version-Release number of selected component (if applicable):
OCP 4.4.11

How reproducible:
Always

Steps to Reproduce:
1. Install or upgrade to OCP 4.4.11
2. Install or upgrade the Logging stack without any changes and in Managed state
3. Open a session in Kibana from a web browser
4. Let the session expire
5. Open the Kibana URL from the same web browser; you will receive the error:
~~~
Error: [security_exception] no permissions for [indices:data/read/field_stats] and User [name=CN=system.logging.kibana,OU=OpenShift,O=Logging, roles=[]]
~~~

Actual results:
The user sees the following in the browser:
~~~
Error: [security_exception] no permissions for [indices:data/read/field_stats] and User [name=CN=system.logging.kibana,OU=OpenShift,O=Logging, roles=[]]
~~~

Expected results:
If the session has expired, redirect to the login page again.

Additional info:
This is similar to the issue discussed in BZ https://bugzilla.redhat.com/show_bug.cgi?id=1815422. That Bugzilla was closed as "NextRelease" and the errata associated with it was supposed to fix this in OCP 4.4.9, but the issue still exists with OCP 4.4.11.
Errata: https://access.redhat.com/errata/RHBA-2020:2580
(In reply to Sachin Raje from comment #0)
> Description of problem:
>
> In OCP 4.4, with the EFK stack installed, a user can log in to Kibana and,
> without logging out, see the following error once the session has expired:
>
> ~~~
> Error: [security_exception] no permissions for
> [indices:data/read/field_stats] and User
> [name=CN=system.logging.kibana,OU=OpenShift,O=Logging, roles=[]]
> ~~~

This is a known limitation of Kibana related to the expired token, as you have identified. The workaround is to access Kibana with a valid token. You may also have caching issues, which can be resolved by:
* using a private or incognito browser, or
* clearing your browser cache for Kibana.
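For reference, a minimal sketch of how to check the configured access token lifetime, assuming the default internal OAuth server is in use; an empty result means the default applies:

~~~
# Minimal sketch, assuming the internal OAuth server is in use.
# An empty result means accessTokenMaxAgeSeconds is unset and the default
# access token lifetime applies.
oc get oauth cluster -o jsonpath='{.spec.tokenConfig.accessTokenMaxAgeSeconds}'
~~~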
As a Red Hat customer, I believe this bug needs to be reopened. It is unacceptable to expect users to work around this problem permanently. Please reopen the bug and address it.
*** Bug 1870063 has been marked as a duplicate of this bug. ***
Same expectation here. Please fix the bug as requested by redirecting to the login page.

Currently, in 4.5.7, the behaviour is even worse. After the token has expired, neither the Kibana page nor the login page is displayed anymore, only an error JSON:

~~~
{"statusCode":500,"error":"Internal Server Error","message":"An internal server error occurred"}
~~~

This can only be fixed by manually deleting the Kibana cookies... every single time.
*** Bug 1874309 has been marked as a duplicate of this bug. ***
Moved back to ASSIGNED for an additional fix.
(In reply to Qiaoling Tang from comment #19)
> If you don't set the internal OAuth server's token duration, the session will
> expire 24 hours later by default.
>
> I tried logging into the Kibana console and waited 24 hours for the session to
> expire, then refreshed the page, and it redirected to the login page.

Per my comment [1], this fix is not complete and can't be until 4.7. Repeating it here as a non-private issue:

The change you reference regarding the cookie expiration is only a mitigation of the issue. Customers may still need to either use an incognito or private browser, or delete their cookie manually. We are dependent on changes in both the oauth-proxy and the oauth-server to completely resolve this issue. These changes are currently considered features and are targeted for the 4.7 release. There is no intention at the moment to backport them to 4.6 as far as I know, which is why I would encourage you to work with PM for proper scheduling and negotiation internally.

Moving this back to ON_QA: as stated in https://bugzilla.redhat.com/show_bug.cgi?id=1867461#c4, this is a known limitation. The associated changes are not intended as a fix for the experienced behavior but as a mitigation. Based upon this comment, it would seem the mitigation works as intended.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1872104#c5
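For context on the token duration quoted above, a minimal sketch of how the internal OAuth server's access token lifetime can be set; the 86400-second (24-hour) value is only an illustration, not a recommendation:

~~~
# Minimal sketch, assuming the default internal OAuth server is in use.
# Sets the access token lifetime to 24 hours; adjust the value as needed.
oc patch oauth cluster --type=merge \
  -p '{"spec":{"tokenConfig":{"accessTokenMaxAgeSeconds":86400}}}'
~~~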
The mentioned BZ https://bugzilla.redhat.com/show_bug.cgi?id=1872104#c5 is not public. Also, a lot of comments here are not visible to customers. Please increase the verbosity/visibility!

And since we don't know what the mitigation will look like: please keep in mind that 4.7 will be available next year. Do you really think that customers/developers will accept daily error 500 pages? We won't.
Per comments 18, 19, and 20, moving this BZ to VERIFIED.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6.1 extras update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4198
Just wanted to put this here because this bug is not resolved for us in OpenShift 4.6.4, or we have encountered a new "iteration" of it.

Although users still seem to be able to log in to Kibana, the search function and their index patterns no longer work, and they are unable to create new ones or edit/delete existing ones. This can only be "fixed" by removing the user's ".kibana_..._<USERNAME>" indices from ES and restarting the ES pods afterwards. Sometimes their OAuth tokens also need to be removed to get rid of the problem (temporarily).

Indicators of this bug:

* In the browser developer console, spurious HTTP 403 errors are visible on POST requests when trying to add index patterns in Kibana, with an HTTP response body like this:

~~~
{"message":"no permissions for [indices:data/write/delete, indices:data/write/bulk[s]] and User [name=<USERNAME>, roles=[project_user], requestedTenant=__user__]: [security_exception] no permissions for [indices:data/write/delete, indices:data/write/bulk[s]] and User [name=<USERNAME>, roles=[project_user], requestedTenant=__user__]","statusCode":403,"error":"Forbidden"}
~~~

* In the ES container, the following appears on the pod stdout:

~~~
[<DATE>][INFO ][c.a.o.s.p.PrivilegesEvaluator] [elasticsearch-cdm-2gzu7ihs-1] No index-level perm match for User [name=<USERNAME>, roles=[project_user], requestedTenant=__user__] Resolved [aliases=[.kibana_1912323692_<USERNAME>], indices=[], allIndices=[.kibana_1912323692_<USERNAME>_2], types=[doc], originalRequested=[.kibana_1912323692_<USERNAME>], remoteIndices=[]] [Action [indices:data/write/bulk[s]]] [RolesChecked [project_user]]
~~~

Not sure what is going on there exactly, but this is really obnoxious to track down and fix.
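For anyone hitting the same symptom, a minimal sketch of the per-user Kibana index cleanup described above, assuming a default cluster-logging deployment in the openshift-logging namespace whose Elasticsearch pods ship the es_util helper; the index name is a hypothetical placeholder taken from the log line above:

~~~
# Minimal sketch of the workaround described above; namespace, labels, and
# index name are assumptions based on a default cluster-logging deployment.
POD=$(oc -n openshift-logging get pods -l component=elasticsearch -o name | head -1)

# List the per-user Kibana indices to find the affected one.
oc -n openshift-logging exec -c elasticsearch "$POD" -- \
  es_util --query="_cat/indices?v" | grep '\.kibana_'

# Delete the affected user's Kibana index (replace the placeholder name).
oc -n openshift-logging exec -c elasticsearch "$POD" -- \
  es_util --query=".kibana_1912323692_<USERNAME>" -X DELETE

# Restart the Elasticsearch pods afterwards; the operator recreates them.
oc -n openshift-logging delete pods -l component=elasticsearch
~~~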
Hi,

This doesn't seem to have been resolved in 4.7.12 either, as I faced the same issue on an OCP cluster that was upgraded from 4.6.30 to 4.7.12, where a specific set of non-admin users were unable to create an index pattern in Kibana (the requests failed with a 403 status code) and the Discover screen was simply blank (white screen). The same set of users are able to do everything that is needed with cluster-admin privileges, but not with the admin role that has been granted for specific application projects. I see the same messages in the Kibana console and ES logs as stated above by NDGIT Operations.

However, in my case, users were able to create the index patterns and view the logs after the Kibana indices associated with those users had been removed from Elasticsearch. Not sure if this has been addressed in later 4.7.x versions than the one I'm currently using. Any additional details on this would be appreciated.

Regards,
Sunil Sivan