Description of problem:
Ordinary users are able to access the logs of a deleted namespace if it is recreated with the same name, regardless of whether they were the previous owner.

Version-Release number of selected component (if applicable):
logging-elasticsearch5-v3.11.0-0.24.0.0
logging-fluentd-v3.11.0-0.24.0.0
logging-kibana5-v3.11.0-0.24.0.0
openshift-ansible-3.11.0-0.24.0.git.0.3cd1597None.noarch

How reproducible:
Always

Steps to Reproduce:
1. Create two ordinary (non-admin) users.
2. Log in to OCP as user1, create a project named test-project, and deploy an app.
3. Check the logs in Kibana as user1.
4. As user1, delete the project test-project.
5. Log in to OCP as user2 and create a project named test-project.
6. Check the logs in Kibana as user2.

Actual results:
User2 can access logs from user1's namespace.

Expected results:
An ordinary user should be restricted to the logs generated by the pods in the namespace they created.

Additional info:
This is already handled. Permissions and indices are keyed by project name and UUID. The permissions for logging are kept separate from OpenShift, so there is a window during which they will not be in sync. Permissions are expired periodically, currently every 60s. Please confirm the time window you are checking.
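The sync window described above can be sketched as follows. This is a minimal illustration, not the plugin's actual implementation (which is Java); the function names here are hypothetical. The idea, consistent with the `expires` fields visible in the `es_acl` dumps later in this bug, is that generated ACL entries carry an epoch-millisecond expiry, and a periodic sweep drops entries whose expiry has passed — so a stale permission can survive for up to the expiry interval after the project changes in OpenShift.

```python
import time

EXPIRY_MILLIS = 60_000  # assumption: generated permissions live ~60s before resync

def seed_role(acls, user, project):
    """Record a generated role with an expiry, shaped like the es_acl dumps."""
    acls[f"gen_user_{user}"] = {
        "indices": {f"project?{project}?*": {"*": ["INDEX_PROJECT"]}},
        "expires": str(int(time.time() * 1000) + EXPIRY_MILLIS),
    }

def sweep(acls, now_millis):
    """Drop generated roles whose expiry has passed (the periodic resync)."""
    return {
        name: role for name, role in acls.items()
        if "expires" not in role or int(role["expires"]) > now_millis
    }

acls = {}
seed_role(acls, "test-user1", "test-project")
# Within the window, the role (stale or not) is still present...
assert "gen_user_test-user1" in sweep(acls, int(time.time() * 1000))
# ...and only disappears once its expiry has passed.
assert "gen_user_test-user1" not in sweep(acls, int(time.time() * 1000) + EXPIRY_MILLIS + 1)
```

This is why the developer asks for the time window: a read within ~60s of the project deletion could legitimately hit a not-yet-expired permission.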
I waited about 5 minutes after step 4, then performed steps 5 and 6; the issue could still be reproduced.
At each step of your reproducer flow, capture:

oc get projects -o yaml                           # as user1
oc get projects -o yaml                           # as user2
oc get rolebindings.rbac -o yaml -n test-project  # as cluster admin
Also make sure that no cluster role bindings exist for either user.
The output is as expected: user2 has access to the new project and user1 does not. I have no idea what "indices" are or what it means for there to be two of them, but from an OpenShift auth perspective (at least the components that my team controls), everything is working as expected. The issue (if there is one) lies in the logging stack.
How many ES nodes are you testing with?
Also, please dump the ACLs after step 4 and step 6:

oc exec -c elasticsearch $pod -- es_acl get --doc=roles
oc exec -c elasticsearch $pod -- es_acl get --doc=rolesmapping

I'm interested in:
* whether the expiration is working
* whether replication or something else is 'resetting' the perms
Only one ES node in my env.

# oc get pod
NAME                                      READY     STATUS    RESTARTS   AGE
logging-es-data-master-2ks5bbzq-1-vxfrc   2/2       Running   0          6m
logging-fluentd-9gkdw                     1/1       Running   0          8m

ACLs after step 4 (user1 has logged out of Kibana):

# oc exec -c elasticsearch logging-es-data-master-2ks5bbzq-1-vxfrc -- es_acl get --doc=roles
{
  "sg_project_operations": {
    "indices": {
      "*?*?*": {
        "*": ["READ", "indices:admin/validate/query*", "indices:admin/get*", "indices:admin/mappings/fields/get*"]
      },
      "?operations?*": {
        "*": ["READ", "indices:admin/validate/query*", "indices:admin/get*", "indices:admin/mappings/fields/get*"]
      }
    }
  },
  "sg_role_admin": {
    "indices": { "*": { "*": ["ALL"] } },
    "cluster": ["CLUSTER_ALL"]
  },
  "sg_role_curator": {
    "indices": { "*": { "*": ["READ", "MANAGE"] } },
    "cluster": ["CLUSTER_MONITOR"]
  },
  "sg_role_fluentd": {
    "indices": { "*": { "*": ["CRUD", "CREATE_INDEX"] } },
    "cluster": ["CLUSTER_MONITOR", "indices:data/write/bulk"]
  },
  "sg_role_kibana": {
    "indices": { "?kibana": { "*": ["INDICES_ALL"] } },
    "cluster": ["CLUSTER_COMPOSITE_OPS", "CLUSTER_MONITOR"]
  },
  "sg_role_prometheus": {
    "cluster": ["METRICS"]
  }
}

# oc exec -c elasticsearch logging-es-data-master-2ks5bbzq-1-vxfrc -- es_acl get --doc=rolesmapping
{
  "sg_role_curator": { "users": ["CN=system.logging.curator,OU=OpenShift,O=Logging"] },
  "sg_role_kibana": { "users": ["CN=system.logging.kibana,OU=OpenShift,O=Logging"] },
  "sg_role_admin": { "users": ["CN=system.admin,OU=OpenShift,O=Logging"] },
  "sg_role_fluentd": { "users": ["CN=system.logging.fluentd,OU=OpenShift,O=Logging"] },
  "sg_role_prometheus": { "users": ["system:serviceaccount:openshift-metrics:prometheus"] }
}

# oc exec -c elasticsearch logging-es-data-master-2ks5bbzq-1-vxfrc -- indices
Fri Aug 31 01:38:41 UTC 2018
health status index                                                                              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .searchguard                                                                       bHk2BJdZQDGhaIwliR5XQQ 1   0   5          0            0          0
yellow open   project.test-project.ff0746ac-acbb-11e8-8a85-00163e00601d.2018.08.31               5FnUY92SQ5yk328tP9cfgA 5   1   860        0            1          1
yellow open   .kibana                                                                            Ym1PVMFKRimgf7Td0iWrIg 1   1   1          0            0          0
yellow open   project.operator-lifecycle-manager.027f8dac-acba-11e8-8a85-00163e00601d.2018.08.31 nVFP525tQBm4T_7dVhrbCA 5   1   542        0            0          0
green  open   .kibana.6bd50a57d607a756175855abb9675564260c1af1                                   KwVdYrbfQKOLZFVCA9IXrg 1   0   2          0            0          0
yellow open   .operations.2018.08.31                                                             aDLwJBXsTNaSa0LFhSoBsw 5   1   311109     0            529        529
yellow open   project.install-test.3b744b8c-acba-11e8-8a85-00163e00601d.2018.08.31               hdRj5aGsRRWSKH2rvi1ibQ 5   1   691        0            1          1

ACLs after step 6 (user2 has logged in to Kibana and has not logged out):

# oc exec -c elasticsearch logging-es-data-master-2ks5bbzq-1-vxfrc -- es_acl get --doc=roles
{
  "sg_role_admin": {
    "indices": { "*": { "*": ["ALL"] } },
    "cluster": ["CLUSTER_ALL"]
  },
  "sg_project_operations": {
    "indices": {
      "*?*?*": {
        "*": ["READ", "indices:admin/validate/query*", "indices:admin/get*", "indices:admin/mappings/fields/get*"]
      },
      "?operations?*": {
        "*": ["READ", "indices:admin/validate/query*", "indices:admin/get*", "indices:admin/mappings/fields/get*"]
      }
    }
  },
  "sg_role_curator": {
    "indices": { "*": { "*": ["READ", "MANAGE"] } },
    "cluster": ["CLUSTER_MONITOR"]
  },
  "sg_role_fluentd": {
    "indices": { "*": { "*": ["CRUD", "CREATE_INDEX"] } },
    "cluster": ["CLUSTER_MONITOR", "indices:data/write/bulk"]
  },
  "gen_user_601dbd8d36d56431e031af5f8aab5c45bc8e20f6": {
    "indices": {
      "test-project?*": { "*": ["INDEX_PROJECT"] },
      "project?test-project?*": { "*": ["INDEX_PROJECT"] }
    },
    "cluster": ["USER_CLUSTER_OPERATIONS"],
    "expires": "1535679739778"
  },
  "sg_role_kibana": {
    "indices": { "?kibana": { "*": ["INDICES_ALL"] } },
    "cluster": ["CLUSTER_COMPOSITE_OPS", "CLUSTER_MONITOR"]
  },
  "gen_kibana_601dbd8d36d56431e031af5f8aab5c45bc8e20f6": {
    "indices": {
      "?kibana?601dbd8d36d56431e031af5f8aab5c45bc8e20f6": { "*": ["INDEX_KIBANA"] }
    },
    "cluster": ["CLUSTER_MONITOR_KIBANA"],
    "expires": "1535679739778"
  },
  "sg_role_prometheus": {
    "cluster": ["METRICS"]
  }
}

# oc exec -c elasticsearch logging-es-data-master-2ks5bbzq-1-vxfrc -- es_acl get --doc=rolesmapping
{
  "sg_role_admin": { "users": ["CN=system.admin,OU=OpenShift,O=Logging"] },
  "sg_role_curator": { "users": ["CN=system.logging.curator,OU=OpenShift,O=Logging"] },
  "sg_role_fluentd": { "users": ["CN=system.logging.fluentd,OU=OpenShift,O=Logging"] },
  "gen_user_601dbd8d36d56431e031af5f8aab5c45bc8e20f6": {
    "expires": "1535679722066",
    "users": ["test-user2"]
  },
  "sg_role_kibana": { "users": ["CN=system.logging.kibana,OU=OpenShift,O=Logging"] },
  "gen_kibana_601dbd8d36d56431e031af5f8aab5c45bc8e20f6": {
    "expires": "1535679722066",
    "users": ["test-user2"]
  },
  "sg_role_prometheus": { "users": ["system:serviceaccount:openshift-metrics:prometheus"] }
}

# oc exec -c elasticsearch logging-es-data-master-2ks5bbzq-1-vxfrc -- indices
Fri Aug 31 01:43:13 UTC 2018
health status index                                                                              uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .searchguard                                                                       bHk2BJdZQDGhaIwliR5XQQ 1   0   5          0            0          0
yellow open   project.test-project.ff0746ac-acbb-11e8-8a85-00163e00601d.2018.08.31               5FnUY92SQ5yk328tP9cfgA 5   1   860        0            1          1
green  open   .kibana.601dbd8d36d56431e031af5f8aab5c45bc8e20f6                                   odCIoaENSYuItx8n6RkUMA 1   0   2          0            0          0
yellow open   .kibana                                                                            Ym1PVMFKRimgf7Td0iWrIg 1   1   1          0            0          0
yellow open   project.operator-lifecycle-manager.027f8dac-acba-11e8-8a85-00163e00601d.2018.08.31 nVFP525tQBm4T_7dVhrbCA 5   1   623        0            1          1
green  open   .kibana.6bd50a57d607a756175855abb9675564260c1af1                                   KwVdYrbfQKOLZFVCA9IXrg 1   0   2          0            0          0
yellow open   .operations.2018.08.31                                                             aDLwJBXsTNaSa0LFhSoBsw 5   1   556061     0            797        797
yellow open   project.install-test.3b744b8c-acba-11e8-8a85-00163e00601d.2018.08.31               hdRj5aGsRRWSKH2rvi1ibQ 5   1   773        0            1          1
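The leak is visible in the gen_user role granted to test-user2 in the step-6 dump: its index patterns are derived from the project *name* only. In these ACLs, '?' matches any single character (standing in for '.', which is not allowed in the keys), so "project?test-project?*" matches every project.test-project.&lt;uuid&gt;.&lt;date&gt; index, including the one left over from user1's deleted project. A minimal sketch of the match, using Python's fnmatch, whose '?'/'*' semantics are the same:

```python
from fnmatch import fnmatch

# Index left over from user1's deleted project (from the `indices` output above).
old_index = "project.test-project.ff0746ac-acbb-11e8-8a85-00163e00601d.2018.08.31"

# Name-based pattern from the gen_user role granted to test-user2,
# with '?' (any single character) standing in for '.'.
name_pattern = "project?test-project?*"

# The stale index matches user2's role, so user2 can read user1's logs.
assert fnmatch(old_index, name_pattern)
```

So the expiry is working as designed; the problem is that the regenerated permission for the recreated project matches the old project's index because the pattern ignores the UUID segment.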
https://github.com/fabric8io/openshift-elasticsearch-plugin/pull/157
Logging PR with the fix script: https://github.com/openshift/origin-aggregated-logging/pull/1335
Per conversation with PM, this cannot be completed by EOD 9/6, so moving it to the z-stream.
Commit pushed to master at https://github.com/openshift/origin-aggregated-logging

https://github.com/openshift/origin-aggregated-logging/commit/de92cf7393d4ccc8d3146423f85a3b448c028fc0
bug 1622822. Restrict logs to their own namespace by uid
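Assuming the fix scopes the generated index patterns by project UID, as the commit title suggests (I have not reproduced the exact pattern the plugin emits), a recreated project's role no longer matches the previous owner's index. A sketch with a made-up UID for the recreated project:

```python
from fnmatch import fnmatch

# Index left over from user1's deleted project, per the earlier `indices` dump.
old_index = "project.test-project.ff0746ac-acbb-11e8-8a85-00163e00601d.2018.08.31"

# Hypothetical UID for the recreated test-project now owned by user2.
new_uid = "0a1b2c3d-0000-1111-2222-333344445555"
new_index = f"project.test-project.{new_uid}.2018.08.31"

# UID-scoped pattern for user2's role ('?' again standing in for '.').
uid_pattern = f"project?test-project?{new_uid}?*"

assert fnmatch(new_index, uid_pattern)      # user2 sees the new project's logs
assert not fnmatch(old_index, uid_pattern)  # but not user1's stale index
```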
Verified in logging-elasticsearch5-v3.11.36-1.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:3537
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days