Bug 1902112 - Kibana corrupting users' security index settings
Summary: Kibana corrupting users' security index settings
Keywords:
Status: CLOSED DUPLICATE of bug 1933978
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.5
Hardware: All
OS: Linux
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: 4.7.z
Assignee: ewolinet
QA Contact: Anping Li
URL:
Whiteboard: logging-exploration
Depends On:
Blocks:
 
Reported: 2020-11-27 00:45 UTC by Matthew Robson
Modified: 2024-10-01 17:08 UTC (History)
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-05-03 17:05:22 UTC
Target Upstream Version:
Embargoed:



Description Matthew Robson 2020-11-27 00:45:57 UTC
Description of problem:

Refused to execute inline script because it violates the following Content Security Policy directive: "script-src 'unsafe-eval' 'self'". Either the 'unsafe-inline' keyword, a hash ('sha256-SHHSeLc0bp6xt4BoVVyUy+3IbVqp3ujLaR+s+kSP5UI='), or a nonce ('nonce-...') is required to enable inline execution.

kibana:1 Access to manifest at 'https://oauth-openshift.apps./oauth/authorize?approval_prompt=force&client_id=system:serviceaccount:openshift-logging:kibana&redirect_uri=https://kibana-openshift-logging.apps./oauth/callback&response_type=code&scope=user:info+user:check-access+user:list-projects&state=175138ade530d204992d6bb3dd776773:/ui/favicons/manifest.json' (redirected from 'https://kibana-openshift-logging.apps./ui/favicons/manifest.json') from origin 'https://kibana-openshift-logging.apps.' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
oauth-openshift.apps./oauth/authorize?approval_prompt=force&client_id=system:serviceaccount:openshift-logging:kibana&redirect_uri=https://kibana-openshift-logging.apps./oauth/callback&response_type=code&scope=user:info+user:check-access+user:list-projects&state=175138ade530d204992d6bb3dd776773:/ui/favicons/manifest.json:1 Failed to load resource: net::ERR_FAILED
/app/kibana#/discover?_g=():1 Access to internal resource at 'https://oauth-openshift.apps./oauth/authorize?approval_prompt=force&client_id=system:serviceaccount:openshift-logging:kibana&redirect_uri=https://kibana-openshift-logging.apps./oauth/callback&response_type=code&scope=user:info+user:check-access+user:list-projects&state=6f8fb08bd9e2d4b6e7dc774b21b63434:/ui/favicons/manifest.json' (redirected from 'https://kibana-openshift-logging.apps./ui/favicons/manifest.json') from origin 'https://kibana-openshift-logging.apps.' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.

Going to the management area, it said the index was not found. Attempting to create a new one hung:

Saved object is missing
Could not locate that index-pattern (id: 1d023e20-23ae-11eb-97ab-01d80db011e8), click here to re-create it
Getting: Detected an unhandled Promise rejection.
Error: Forbidden


Deleting the index solves the issue: the user can log back in and things work again.
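
For reference, the kind of cleanup involved is roughly the following (a sketch only, not the exact commands used; the endpoint and admin cert paths are placeholders, e.g. after an `oc port-forward` to an ES pod, and the index names are the ones that show up in the ES delete log lines below):

import requests

ES_URL = "https://localhost:9200"  # placeholder endpoint (e.g. via oc port-forward)
ADMIN_CERT = ("admin-cert.pem", "admin-key.pem")  # placeholder admin client cert/key
ADMIN_CA = "admin-ca.pem"  # placeholder CA bundle

# Delete the user's per-tenant Kibana indices so they are recreated on next login.
for index in [".kibana_2099475215_github_1", ".kibana_2099475215_github_2"]:
    resp = requests.delete(f"{ES_URL}/{index}", cert=ADMIN_CERT, verify=ADMIN_CA)
    print(index, resp.status_code, resp.text)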

ES Logs show:

ES 3:
[2020-11-23T18:53:13,977][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-cdm-9aeuzkeh-3] [.kibana_2099475215_usergithub_1][0], node[F_vdIRzATlOEy55qJKnERw], [P], s[STARTED], a[id=kOBqKkSkQmWRDkcdjO2Upw]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[], indicesOptions=IndicesOptions[ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_aliases_to_multiple_indices=true, forbid_closed_indices=true, ignore_aliases=false, ignore_throttled=true], types=[], routing='null', preference='null', requestCache=null, scroll=null, maxConcurrentShardRequests=15, batchedReduceSize=512, preFilterShardSize=128, allowPartialSearchResults=true, localClusterAlias=null, getOrCreateAbsoluteStartMillis=-1, source={"sort":[{"@timestamp":{"order":"desc"}}]}}] lastShard [true]

[2020-11-23T18:53:13,978][DEBUG][o.e.a.s.TransportSearchAction] [elasticsearch-cdm-9aeuzkeh-3] [.kibana_2099475215_github_2][0], node[PN206y3iTkiTYCA8mBl4ow], [R], s[STARTED], a[id=a7EP804HRtehIkTD39xY6g]: Failed to execute [SearchRequest{searchType=QUERY_THEN_FETCH, indices=[], indicesOptions=IndicesOptions[ignore_unavailable=false, allow_no_indices=true, expand_wildcards_open=true, expand_wildcards_closed=false, allow_aliases_to_multiple_indices=true, forbid_closed_indices=true, ignore_aliases=false, ignore_throttled=true], types=[], routing='null', preference='null', requestCache=null, scroll=null, maxConcurrentShardRequests=15, batchedReduceSize=512, preFilterShardSize=128, allowPartialSearchResults=true, localClusterAlias=null, getOrCreateAbsoluteStartMillis=-1, source={"sort":[{"@timestamp":{"order":"desc"}}]}}] lastShard [true]

[2020-11-26T03:29:31,499][INFO ][c.a.o.s.p.PrivilegesEvaluator] [elasticsearch-cdm-9aeuzkeh-3] No index-level perm match for User [name=github, roles=[project_user], requestedTenant=__user__] Resolved [aliases=[.kibana_2099475215_github], indices=[], allIndices=[.kibana_2099475215_github_2], types=[doc], originalRequested=[.kibana_2099475215_github], remoteIndices=[]] [Action [indices:data/write/bulk[s]]] [RolesChecked [project_user]]

ES 1:

[2020-11-26T03:30:42,749][INFO ][c.a.o.s.p.PrivilegesEvaluator] [elasticsearch-cdm-9aeuzkeh-1] No index-level perm match for User [name=@github, roles=[project_user], requestedTenant=__user__] Resolved [aliases=[.kibana_2099475215_github], indices=[], allIndices=[.kibana_2099475215_github_2], types=[doc], originalRequested=[.kibana_2099475215_github], remoteIndices=[]] [Action [indices:data/write/bulk[s]]] [RolesChecked [project_user]]

[2020-11-26T03:31:57,992][INFO ][c.a.o.s.p.PrivilegesEvaluator] [elasticsearch-cdm-9aeuzkeh-1] No index-level perm match for User [name=@github, roles=[project_user], requestedTenant=__user__] Resolved [aliases=[.kibana_2099475215_github], indices=[], allIndices=[.kibana_2099475215_github_2], types=[doc], originalRequested=[.kibana_2099475215_github], remoteIndices=[]] [Action [indices:data/write/bulk[s]]] [RolesChecked [project_user]]

[2020-11-26T18:32:33,956][INFO ][o.e.c.m.MetaDataDeleteIndexService] [elasticsearch-cdm-9aeuzkeh-1] [.kibana_2099475215_github_1/2nVNYRuhR1amiuMDHMsWdw] deleting index

[2020-11-26T18:32:34,300][INFO ][o.e.c.m.MetaDataDeleteIndexService] [elasticsearch-cdm-9aeuzkeh-1] [.kibana_2099475215_github_2/O9kKeJK5QYWDImDfi-1dag] deleting index

[2020-11-26T18:33:51,574][INFO ][o.e.c.m.MetaDataCreateIndexService] [elasticsearch-cdm-9aeuzkeh-1] [.kibana_2099475215_github] creating index, cause [auto(bulk api)], templates [common.settings.kibana.template.json, kibana_index_template:.kibana_*], shards [3]/[1], mappings [doc]


Kibana:

2020/11/26 03:28:20 provider.go:624: 200 GET https://10.98.0.1/apis/user.openshift.io/v1/users/~ {"kind":"User","apiVersion":"user.openshift.io/v1","metadata":{"name":"@github","selfLink":"/apis/user.openshift.io/v1/users/github","uid":"175c42fe-8460-45e9-8b60-ed168f3df726","resourceVersion":"58402050","creationTimestamp":"2020-10-16T00:30:41Z","managedFields":[{"manager":"oauth-server","operation":"Update","apiVersion":"user.openshift.io/v1","time":"2020-10-16T00:30:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:fullName":{},"f:identities":{}}}]},"fullName":"","identities":["sso:e4dfe7b8-c228-47de-b8b9-0c40a3a72aaa"],"groups":["system:authenticated","system:authenticated:oauth"]}

2020/11/26 03:28:20 oauthproxy.go:675: 10.97.12.1:45254 authentication complete Session{@github token:true}



Version-Release number of selected component (if applicable):
4.5.16


How reproducible:
This happens constantly for 2 users.


Steps to Reproduce:
1. Unsure
2.
3.

Actual results:
The user cannot use Kibana. The index must be deleted.

Expected results:


Additional info:

Comment 4 ewolinet 2021-02-22 20:47:18 UTC
> [2020-11-26T03:30:42,749][INFO ][c.a.o.s.p.PrivilegesEvaluator] [elasticsearch-cdm-9aeuzkeh-1] No index-level perm match for User [name=@github, roles=[project_user], requestedTenant=__user__] Resolved [aliases=[.kibana_2099475215_github], indices=[], allIndices=[.kibana_2099475215_github_2], types=[doc], originalRequested=[.kibana_2099475215_github], remoteIndices=[]] [Action [indices:data/write/bulk[s]]] [RolesChecked [project_user]]

> [2020-11-26T03:31:57,992][INFO ][c.a.o.s.p.PrivilegesEvaluator] [elasticsearch-cdm-9aeuzkeh-1] No index-level perm match for User [name=@github, roles=[project_user], requestedTenant=__user__] Resolved [aliases=[.kibana_2099475215_github], indices=[], allIndices=[.kibana_2099475215_github_2], types=[doc], originalRequested=[.kibana_2099475215_github], remoteIndices=[]] [Action [indices:data/write/bulk[s]]] [RolesChecked [project_user]]

> [2020-11-26T03:29:31,499][INFO ][c.a.o.s.p.PrivilegesEvaluator] [elasticsearch-cdm-9aeuzkeh-3] No index-level perm match for User [name=github, roles=[project_user], requestedTenant=__user__] Resolved [aliases=[.kibana_2099475215_github], indices=[], allIndices=[.kibana_2099475215_github_2], types=[doc], originalRequested=[.kibana_2099475215_github], remoteIndices=[]] [Action [indices:data/write/bulk[s]]] [RolesChecked [project_user]]

These messages indicate that the user does not have permission for the indices they are trying to access.
What is the index pattern that the user is trying to access their logs with?
Per our docs, it should be one of "app", "infra", or "audit". Given this user is using the __user__ tenant, I would expect them to be using the "app" alias and index pattern.
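
If it helps to sanity-check this, querying the "app" alias directly as the affected user is one option (a sketch only; the exposed ES route hostname is a placeholder and the token would be the user's own):

import requests

ES_ROUTE = "https://elasticsearch-openshift-logging.apps.example.com"  # placeholder route
TOKEN = "<the affected user's `oc whoami -t` token>"

# Search the "app" alias as the user; size=1 keeps the response small.
resp = requests.get(
    f"{ES_ROUTE}/app/_search?size=1",
    headers={"Authorization": f"Bearer {TOKEN}"},
    verify=False,  # lab/testing only
)
print(resp.status_code)
print(resp.text[:500])

A 200 with hits would show the "app" alias is readable for that user; a 403 would point at a permissions problem beyond just the per-user tenant index.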

Comment 5 Matthew Robson 2021-02-23 14:02:51 UTC
They should only be able to see / have access to their own projects' logs. The environment is highly multi-tenant, so there are lots of users with access to a small percentage of total projects.

They should be using the app alias / pattern.

On a side note, we saw kibana go into a yellow state (like in BZ1913952) on a brand new OCP 4.5 cluster so it seems like it is still trying to do some kind of index migrations even on fresh 4.5+ clusters.

This seems to occur on a semi-regular basis for different users on 4.5. Is there anything else we can collect or that you would want to see or know from the specific user?

Matt

Comment 6 ewolinet 2021-02-23 23:11:45 UTC
(In reply to Matthew Robson from comment #5)
> They should only be / have access to their own projects logs. The
> environment is highly multi-tenant, so there are lots of users with access
> to a small percentage of total projects.
> 
> They should be using the app alias / pattern.
> 
> On a side note, we saw kibana go into a yellow state (like in BZ1913952) on
> a brand new OCP 4.5 cluster so it seems like it is still trying to do some
> kind of index migrations even on fresh 4.5+ clusters.
> 
> This seems to occur on a semi-regular basis for different users on 4.5. Is
> there anything else we can collect or that you would want to see or know
> from the specific user?


This is controlled by the Kibana multitenancy plugins and should only be migrating when it finds there is something to migrate (the plugin makes that decision based on Kibana code).
If this is happening frequently, that makes me think Kibana is restarting often. Is this accurate?
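
One quick way to check restart counts (a sketch using the Kubernetes Python client; the component=kibana label selector is an assumption about how the pods are labeled):

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run in-cluster
v1 = client.CoreV1Api()
pods = v1.list_namespaced_pod("openshift-logging", label_selector="component=kibana")
for pod in pods.items:
    for cs in pod.status.container_statuses or []:
        print(pod.metadata.name, cs.name, "restarts:", cs.restart_count)

This is just the RESTARTS column of `oc get pods -n openshift-logging`, so that output works too.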


I'm not sure what the user index files that you attached are... do you have a must gather for this cluster?

Comment 8 ewolinet 2021-03-01 19:24:01 UTC
I can't tell from the must gather provided whether kibana has restarted at all; it looks like everything is still running without restarts.

I do see in kibana that the elasticsearch plugin was red. Looking in the elasticsearch logs, there appears to be an issue with the customer's application data: ES is failing to execute a bulk create for index app-000048.


2020-11-21T08:00:18.65170917Z [2020-11-21T08:00:18,651][DEBUG][o.e.a.b.TransportShardBulkAction] [elasticsearch-cdm-9aeuzkeh-1] [app-000048][1] failed to execute bulk item (create) index {[app-write][_doc][NjIwMTczMTUtNThmNy00ZTRiLTk4MzgtYzRjNWMwNWVjNDhj], source[n/a, actual length: [2kb], max length: 2kb]}
2020-11-21T08:00:18.65170917Z org.elasticsearch.index.mapper.MapperParsingException: failed to parse field [message] of type [text] in document with id 'NjIwMTczMTUtNThmNy00ZTRiLTk4MzgtYzRjNWMwNWVjNDhj'

There appear to be UTF-8 formatting issues based on another ES node:

2020-11-21T08:00:19.873249955Z Caused by: com.fasterxml.jackson.core.JsonParseException: Invalid UTF-8 start byte 0x8b
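
For what it's worth, 0x8b cannot start a UTF-8 sequence, so the raw source bytes of that document are not UTF-8 text. A minimal sketch of checking a captured payload, assuming the offending source can be pulled out of the failing bulk request:

def is_valid_utf8(raw: bytes) -> bool:
    """Return True if the raw bytes decode cleanly as UTF-8."""
    try:
        raw.decode("utf-8")
        return True
    except UnicodeDecodeError as err:
        print(f"invalid UTF-8 at offset {err.start}: 0x{raw[err.start]:02x}")
        return False

# Example with the byte Jackson reported:
is_valid_utf8(b"\x8b not utf-8 text")  # prints: invalid UTF-8 at offset 0: 0x8b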


The fact that kibana cannot talk to ES may be the reason the plugin is yellow; however, from the current must gather it is unclear why kibana cannot talk to elasticsearch.

Is the customer still seeing the issue that was originally reported?
It was noted that only two customers are unable to use kibana. Is it possible that their logs have not yet made it into Elasticsearch, and that is what is preventing them from having the correct permissions?

Do you have an admin user that can check/confirm this?

2020-11-23T18:04:54.416956165Z [2020-11-23T18:04:54,416][INFO ][c.a.o.s.p.PrivilegesEvaluator] [elasticsearch-cdm-9aeuzkeh-2] No index-level perm match for User [name=<redacted>@github, roles=[project_user], requestedTenant=__user__] Resolved [aliases=[*], indices=[*], allIndices=[*], types=[*], originalRequested=[], remoteIndices=[]] [Action [indices:admin/get]] [RolesChecked [project_user]]
2020-11-23T18:04:54.417041035Z [2020-11-23T18:04:54,417][INFO ][c.a.o.s.p.PrivilegesEvaluator] [elasticsearch-cdm-9aeuzkeh-2] No permissions for [indices:admin/get]
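
For that admin check, something along these lines would do (a sketch; the route, token, and namespace are placeholders, and kubernetes.namespace_name is the field the collector normally sets on application logs):

import requests

ES_ROUTE = "https://elasticsearch-openshift-logging.apps.example.com"  # placeholder route
ADMIN_TOKEN = "<an admin's `oc whoami -t` token>"
NAMESPACE = "<the affected user's project>"

# Count documents indexed under the "app" alias for the user's namespace.
query = {"query": {"term": {"kubernetes.namespace_name": NAMESPACE}}, "size": 0}
resp = requests.get(
    f"{ES_ROUTE}/app/_search",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    json=query,
    verify=False,  # lab/testing only
)
print(resp.status_code, resp.json().get("hits", {}).get("total"))

A zero total for that namespace would support the idea that the user's logs have not yet made it into Elasticsearch.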

Comment 9 Matthew Robson 2021-03-01 19:44:21 UTC
From what I know, Kibana is not restarting often, but I can't say for sure if there was a restart around the time of the issue. I will note that as something to check when it happens again.

The 'failed to execute bulk item' could be coincidental. Aside from these one-off index issues, the logging stack is generally operational, and when they do see this issue, deleting the user's Kibana index results in immediate resolution of the problem. In this specific case, Kibana is not yellow (that's https://bugzilla.redhat.com/show_bug.cgi?id=1913952, in which Kibana doesn't work for anyone). This is generally a single-user issue where Kibana works fine for everyone else, admins or otherwise.

Anecdotally, this issue looks to occur more often within the group of people who use Kibana most heavily.

A number of users who have experienced this have had it happen more than once. They all have long-standing projects with lots of logs. It's possible they had a new project created recently, but I don't know how we could correlate that with the impact.

When this happens again for a user, is there any more info you would want to collect outside a new must gather? I can grab another must gather to compare the circumstances to the initial observations.

Comment 24 Matthew Robson 2021-05-03 17:05:22 UTC

*** This bug has been marked as a duplicate of bug 1933978 ***

