Bug 1408412

Summary: Unexpected notification encountered on kibana UI with non-cluster-admin user + journald log driver
Product: OpenShift Container Platform
Reporter: Xia Zhao <xiazhao>
Component: Logging
Assignee: ewolinet
Status: CLOSED CURRENTRELEASE
QA Contact: Xia Zhao <xiazhao>
Severity: low
Docs Contact:
Priority: low
Version: 3.4.0
CC: aos-bugs, juzhao, tdawson, xiazhao
Target Milestone: ---
Keywords: Reopened
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: Known Issue
Doc Text:
Cause: With Elasticsearch 2.4.1 and Kibana 4.5.4, if a user navigates to a Kibana index before its logs are available in Elasticsearch, they will be met with a message in Kibana that says "Discover: [security_exception] no permissions for indices:data/read/msearch".
Consequence: Users receive an error message at the top of Kibana.
Workaround (if any): Wait long enough for the logs to populate in Elasticsearch, then reconnect to Kibana.
Result: The error message no longer appears for the index.
Last Closed: 2017-02-16 21:03:17 UTC
Type: Bug
Attachments:
  es_log (flags: none)
  kibana_log (flags: none)
  message + log entries all show up (flags: none)

Comment 1 Xia Zhao 2016-12-23 10:24:14 UTC
Created attachment 1234995 [details]
es_log

Comment 2 Xia Zhao 2016-12-23 10:24:36 UTC
Created attachment 1234996 [details]
kibana_log

Comment 3 ewolinet 2017-01-03 20:31:02 UTC
Xia,

Can you provide the results of the following curls against your ES?

curl --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key --cacert /etc/elasticsearch/secret/admin-ca -XGET 'https://localhost:9200/.searchguard.logging-es-ylwejw0r-1-8lyg2/roles/0'

curl --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key --cacert /etc/elasticsearch/secret/admin-ca -XGET 'https://localhost:9200/.searchguard.logging-es-ylwejw0r-1-8lyg2/rolesmapping/0'

Where the index is '.searchguard.<ES_POD_NAME>'.
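
For reference, here is a minimal sketch of how those two curls could be run generically from inside the Elasticsearch pod; the "logging" namespace and the component=es pod label are assumptions about the deployment and may need adjusting:

# Assumed: the logging project is named "logging" and the ES pod carries the label component=es.
ES_POD=$(oc get pods -n logging -l component=es -o jsonpath='{.items[0].metadata.name}')
# Fetch both Search Guard documents from the per-pod .searchguard.<ES_POD_NAME> index.
for doc in roles rolesmapping; do
  oc exec -n logging "$ES_POD" -- curl -s \
    --cert /etc/elasticsearch/secret/admin-cert \
    --key /etc/elasticsearch/secret/admin-key \
    --cacert /etc/elasticsearch/secret/admin-ca \
    "https://localhost:9200/.searchguard.${ES_POD}/${doc}/0"
done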

Comment 4 Junqi Zhao 2017-01-04 08:55:52 UTC
@ewolinet
user "test" is a non-cluster-admin user, and created project "test"
user "juzhao" is a cluster-admin user
When access Kibana UI with user "test" + journald log driver,this issue reproduced, this issue doesn't exist with json-file log driver

The info you wanted:
curl --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key --cacert /etc/elasticsearch/secret/admin-ca -XGET 'https://localhost:9200/.searchguard.logging-es-0sn7afy4-1-onp0i/roles/0'

{"_index":".searchguard.logging-es-0sn7afy4-1-onp0i","_type":"roles","_id":"0","_version":5,"found":true,"_source":{"gen_project_logging_0e70844c-d23f-11e6-9bb9-42010af0000e":{"cluster":[],"indices":{"project?logging?0e70844c-d23f-11e6-9bb9-42010af0000e?*":{"*":["indices:admin/validate/query*","indices:admin/get*","indices:admin/mappings/fields/get*","indices:data/read*"]},"logging?0e70844c-d23f-11e6-9bb9-42010af0000e?*":{"*":["indices:admin/validate/query*","indices:admin/get*","indices:admin/mappings/fields/get*","indices:data/read*"]}}},"sg_role_kibana":{"cluster":["cluster:monitor/nodes/info","cluster:monitor/health"],"indices":{"?kibana":{"*":["ALL"]}}},"sg_role_curator":{"cluster":["CLUSTER_MONITOR"],"indices":{"*":{"*":["READ","MANAGE"]}}},"sg_role_fluentd":{"cluster":[],"indices":{"*":{"*":["CREATE_INDEX","WRITE"]}}},"gen_project_management-infra_53d62e71-d23e-11e6-9bb9-42010af0000e":{"cluster":[],"indices":{"management-infra?53d62e71-d23e-11e6-9bb9-42010af0000e?*":{"*":["indices:admin/validate/query*","indices:admin/get*","indices:admin/mappings/fields/get*","indices:data/read*"]},"project?management-infra?53d62e71-d23e-11e6-9bb9-42010af0000e?*":{"*":["indices:admin/validate/query*","indices:admin/get*","indices:admin/mappings/fields/get*","indices:data/read*"]}}},"gen_project_test_eec2a5d6-d24d-11e6-a995-42010af0000e":{"cluster":[],"indices":{"project?test?eec2a5d6-d24d-11e6-a995-42010af0000e?*":{"*":["indices:admin/validate/query*","indices:admin/get*","indices:admin/mappings/fields/get*","indices:data/read*"]},"test?eec2a5d6-d24d-11e6-a995-42010af0000e?*":{"*":["indices:admin/validate/query*","indices:admin/get*","indices:admin/mappings/fields/get*","indices:data/read*"]}}},"gen_kibana_a94a8fe5ccb19ba61c4c0873d391e987982fbbd3":{"cluster":[],"indices":{"?kibana?a94a8fe5ccb19ba61c4c0873d391e987982fbbd3":{"*":["indices:*"]}}},"sg_role_admin":{"cluster":["CLUSTER_ALL"],"indices":{"*":{"*":["ALL"]}}},"sg_project_operations":{"cluster":[],"indices":{"*?*?*":{"*":["READ","indices:admin/validate/query*","indices:admin/get*","indices:admin/mappings/fields/get*"]},"?operations?*":{"*":["READ","indices:admin/validate/query*","indices:admin/get*","indices:admin/mappings/fields/get*"]}}},"gen_kibana_ca16f3742a6651321d9ef7284619623553ee35f8":{"cluster":[],"indices":{"?kibana?ca16f3742a6651321d9ef7284619623553ee35f8":{"*":["indices:*"]}}},"gen_project_install-test_5a80392c-d23f-11e6-9bb9-42010af0000e":{"cluster":[],"indices":{"project?install-test?5a80392c-d23f-11e6-9bb9-42010af0000e?*":{"*":["indices:admin/validate/query*","indices:admin/get*","indices:admin/mappings/fields/get*","indices:data/read*"]},"install-test?5a80392c-d23f-11e6-9bb9-42010af0000e?*":{"*":["indices:admin/validate/query*","indices:admin/get*","indices:admin/mappings/fields/get*","indices:data/read*"]}}}}}


curl --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key --cacert /etc/elasticsearch/secret/admin-ca -XGET 'https://localhost:9200/.searchguard.logging-es-0sn7afy4-1-onp0i/rolesmapping/0'

{"_index":".searchguard.logging-es-0sn7afy4-1-onp0i","_type":"rolesmapping","_id":"0","_version":5,"found":true,"_source":{"gen_project_logging_0e70844c-d23f-11e6-9bb9-42010af0000e":{"users":["juzhao"]},"sg_role_kibana":{"users":["CN=system.logging.kibana,OU=OpenShift,O=Logging"]},"sg_role_curator":{"users":["CN=system.logging.curator,OU=OpenShift,O=Logging"]},"sg_role_fluentd":{"users":["CN=system.logging.fluentd,OU=OpenShift,O=Logging"]},"gen_kibana_a94a8fe5ccb19ba61c4c0873d391e987982fbbd3":{"users":["test"]},"gen_project_test_eec2a5d6-d24d-11e6-a995-42010af0000e":{"users":["test","juzhao"]},"gen_project_management-infra_53d62e71-d23e-11e6-9bb9-42010af0000e":{"users":["juzhao"]},"sg_role_admin":{"users":["CN=system.admin,OU=OpenShift,O=Logging"]},"sg_project_operations":{"users":["juzhao"]},"gen_kibana_ca16f3742a6651321d9ef7284619623553ee35f8":{"users":["juzhao"]},"gen_project_install-test_5a80392c-d23f-11e6-9bb9-42010af0000e":{"users":["juzhao"]}}}

Comment 6 Junqi Zhao 2017-01-05 01:30:16 UTC
@ewolinet
This message does not pop up if we don't try to do the workaround of bug #1388031

Comment 7 Xia Zhao 2017-01-05 07:32:23 UTC
@ewolinet @juzhao With the latest images + the journald log driver on GCE, I do see the message pop up on the very first screen after logging in to Kibana, before performing the workaround of bug #1388031

Images tested:
ops registry
openshift3/logging-kibana    8a3df528c998
openshift3/logging-elasticsearch    583e04127ed6
openshift3/logging-auth-proxy    d9236074fecb
openshift3/logging-fluentd    43d549beb4c8
openshift3/logging-curator    5aadf9eb6168
openshift3/logging-deployer    7386facde449

# openshift version
openshift v3.4.0.38
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

# docker version
Client:
 Version:         1.12.5
 API version:     1.24
 Package version: docker-common-1.12.5-8.el7.x86_64
 Go version:      go1.7.4
 Git commit:      1d8f205
 Built:           Wed Dec 21 08:37:50 2016
 OS/Arch:         linux/amd64

Server:
 Version:         1.12.5
 API version:     1.24
 Package version: docker-common-1.12.5-8.el7.x86_64
 Go version:      go1.7.4
 Git commit:      1d8f205
 Built:           Wed Dec 21 08:37:50 2016
 OS/Arch:         linux/amd64

Comment 9 Xia Zhao 2017-01-06 05:21:19 UTC
@ewolinet

Thanks for the hints. Unfortunately, I didn't get rid of the message with the steps in 3) above:

1) and 2) are exactly what I experienced.

When I tried to get rid of the message with 3), after these lines showed up in the ES log:

[2017-01-06 04:50:23,704][INFO ][cluster.metadata         ] [Chamber] [project.logging.fb722fcc-d3be-11e6-9e1b-42010af00026.2017.01.06] update_mapping [com.redhat.viaq.common]
[2017-01-06 04:53:00,222][INFO ][cluster.metadata         ] [Chamber] [.operations.2017.01.06] update_mapping [com.redhat.viaq.common]

I logged in to the Kibana UI with the non-admin user; the message still popped up, with empty log entries shown on the Kibana UI while the time range was the default 15 minutes.

Then I adjusted the time range to "Today"; log entries for namespace 'logging' showed up, together with the message, as in the attached picture "message + log entries all show up".

I logged out and logged back in to Kibana as the same user, and the message was still observed.

Comment 10 Xia Zhao 2017-01-06 05:26:47 UTC
(In reply to Xia Zhao from comment #9)
> Then I adjusted the time range to "Today"; log entries for namespace
> 'logging' showed up, together with the message, as in the attached picture
> "message + log entries all show up".

Oh, after waiting on the Kibana UI with log entries shown for about 10 minutes, the message disappeared.

> 
> I logged out and logged back into Kibana using the same user, the message
> was still observed.

Is there a way to adjust the default time range on kibana from '15 mins' to 'Today'?

Comment 11 Xia Zhao 2017-01-06 05:32:47 UTC
Created attachment 1237864 [details]
message + log entries all show up

Comment 12 ewolinet 2017-01-06 15:56:53 UTC
> Is there a way to adjust the default time range on kibana from '15 mins' to
> 'Today'?

I will check if there is a way to adjust this in the Kibana config. However I am hesitant to change this default as it may lead to poor performance (loading up to 24 hours of logs vs just 15 minutes).

It may just need to be documented as a known issue with the resolution being that more time needs to pass so that logs can be populated for the index.

Comment 13 ewolinet 2017-01-06 16:11:05 UTC
Xia,

You should be able to update the default time range in the 'settings' tab for Kibana. I believe the field you would want to update is 'timepicker:timeDefaults'.

There do not appear (in their documentation) to be any configuration settings that can be provided to update this default.
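
For illustration, the value of that field under Settings -> Advanced is a small JSON object. The exact shape below is from memory of Kibana 4.x, so treat it as a sketch rather than the authoritative format:

Default (show the last 15 minutes):
  { "from": "now-15m", "to": "now", "mode": "quick" }

A "Today"-style default would look something like:
  { "from": "now/d", "to": "now", "mode": "quick" }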

Comment 14 Xia Zhao 2017-01-09 12:06:19 UTC
(In reply to ewolinet from comment #13)
> Xia,
> 
> You should be able to update the default time range in the 'settings' tab
> for Kibana. I believe the field you would want to update is
> 'timepicker:timeDefaults'.
> 
> There do not appear (in their documentation) to be any configuration settings
> that can be provided to update this default.

Thank you so much, Eric. After setting 'timepicker:timeDefaults' to "from: now-15000m to now" in the Advanced Settings of Kibana and logging in again, I didn't see the warning message any more. My original issue is thus resolved.

Comment 15 Xia Zhao 2017-01-09 12:07:04 UTC
(In reply to ewolinet from comment #12)

> It may just need to be documented as a known issue with the resolution being
> that more time needs to pass so that logs can be populated for the index.

Yes, I agree based on comment #14.

Comment 16 Xia Zhao 2017-01-10 00:40:33 UTC
Closing based on comment #14. The original issue was gone after refining the default query for Kibana.

Comment 17 Troy Dawson 2017-02-16 21:03:17 UTC
This bug was fixed in the latest OCP 3.4.0, which is already released.