[Description of problem]
The same issue that was reported in BZ#1710868 for OCP 3.11 and in BZ#1722959 for OCP 4.2 is present in OCP 4.5.

[Version-Release number of selected component (if applicable):]
OCP 4.5.x

[How reproducible:]
Always

[Steps to Reproduce:]

### Log in as a normal user without admin rights
$ oc new-project test

### Create the service account "test"
$ oc create sa test

### Create a rolebinding granting the ClusterRole "view" to the service account "test"
$ cat rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-view
  namespace: test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: test
  namespace: test
$ oc create -f rolebinding.yaml

### Get the token
$ token=$(oc whoami -t)

### As admin user, follow the documentation to expose the log store service as a route [1]

### As service account "test", try to list / on the Elasticsearch route; the request is rejected with HTTP response code 403
$ curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}/"
{"error":{"root_cause":[{"type":"security_exception","reason":"no permissions for [cluster:monitor/main] and User [name=quicklab, roles=[project_user], requestedTenant=null]"}],"type":"security_exception","reason":"no permissions for [cluster:monitor/main] and User [name=quicklab, roles=[project_user], requestedTenant=null]"},"status":403}

[Actual results]
The request fails with HTTP response code 403.

[Expected results]
It returns:
~~~
$ curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}/"
{
  "name" : "elasticsearch-cdm-qelvol0j-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "v2pP4XGoSUeqmo3r9-_1yQ",
  "version" : {
    "number" : "5.6.16",
    "build_hash" : "8dc130e",
    "build_date" : "2019-09-10T20:07:09.564Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}
~~~
(In reply to Oscar Casal Sanchez from comment #0)
> ### As Service Account test, try to list the / from the Elasticsearch
> receiving the HTTP response code 403
> $ curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}/"
> {"error":{"root_cause":[{"type":"security_exception","reason":"no permissions for [cluster:monitor/main] and User [name=quicklab, roles=[project_user], requestedTenant=null]"}],"type":"security_exception","reason":"no permissions for [cluster:monitor/main] and User [name=quicklab, roles=[project_user], requestedTenant=null]"},"status":403}

This SA does not have the proper permissions and was evaluated to be a "project_user". Why do you believe normal users should have these permissions? They are only granted to admin users:

[1] https://github.com/openshift/origin-aggregated-logging/blob/master/elasticsearch/sgconfig/roles.yml#L150
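For context, role files in this style gate cluster-level actions per role. The following is an illustrative sketch only, not the verbatim contents of the linked roles.yml (the role names and permission lists here are assumptions for illustration):

```yaml
# Illustrative sketch -- not the actual roles.yml from the repo.
# The root "/" endpoint maps to the cluster:monitor/main action, so a
# role with no cluster-level monitor permission produces the 403 above.
sg_role_admin:
  cluster:
    - MONITORING          # admin roles can query "/" and cluster state
sg_project_user:
  cluster: []             # no cluster:monitor/* permission -> 403 on "/"
```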
Hello Jeff,

This exact configuration was used in OCP 4.4 and it worked, so the customer's statement that they used it in the previous version is accurate; after upgrading to OCP 4.5 it no longer works. Something must have changed in OCP 4.5 with respect to the roles compared to the previously delivered versions.

Regards,
Oscar
Hello Jeff,

To show you this working in OCP 4.4:

- Normal user without privileges, following the same steps, in OCP 4.4:
~~~
### The user only has access to their own project "test", where the SA "test" was created
$ oc get projects
NAME   DISPLAY NAME   STATUS
test                  Active

$ oc whoami
quicklab

$ token=$(oc whoami -t)

$ curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}/"
{
  "name" : "elasticsearch-cdm-qelvol0j-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "v2pP4XGoSUeqmo3r9-_1yQ",
  "version" : {
    "number" : "5.6.16",
    "build_hash" : "8dc130e",
    "build_date" : "2019-09-10T20:07:09.564Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}

$ oc login -u system:admin
$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.4.30    True        False         5h32m   Cluster version is 4.4.30
~~~

I checked the sg_roles you mentioned for OCP 4.5 and for OCP 4.4 before opening the bug, and at the same time I tried to reproduce the error on OCP 4.4 and could not, as you can see above. The behaviour changed in 4.5: it is no longer possible to do what worked before, which impacts the customer. You can see the previous bugs opened for the same issue: BZ#1710868 for OCP 3.11 and BZ#1722959 for OCP 4.2.

I am aware that OCP 4.5 behaves this way because it was written that way, but it does not follow the behaviour prior to OCP 4.5, where it was possible to access the Elasticsearch /.

Regards,
Oscar
Confirmed the behavior in 4.6 by adding the "MONITORING" permission to the "project_user" role. This allows ordinary users access to the root URL. Added a PR and marked it to be backported to 4.6.

Users could work around this issue by granting the specific SA permission to view pods/log in the "default" namespace, though this would also give them access to view logs across the cluster.
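The per-SA workaround described above could look roughly like the following. This is an unverified sketch; the Role and RoleBinding names are made up for illustration, and it assumes the SA "test" in namespace "test" from the reproducer:

```yaml
# Hypothetical Role granting read access to pods and pod logs in the
# "default" namespace, bound to the "test" service account.
# As noted above, this effectively lets the SA read logs across the
# cluster through the log store, so use with care.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-log-viewer        # illustrative name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-pod-log-viewer   # illustrative name
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-log-viewer
subjects:
- kind: ServiceAccount
  name: test
  namespace: test
```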
A workaround is to include the permission changes associated with the linked pull request. Note these changes require setting the ClusterLogging instance to Unmanaged, which has the following implications:

* The logging stack will no longer reconcile changes, including image updates
* Returning to "Managed" will revert all changes, which will need to be reapplied if the update does not include the fix

The steps (unverified) are as follows:

* Download the permission files from the pull request [1]
* Edit the ClusterLogging instance and set it to "Unmanaged"
* Create a configmap from the permission files, e.g.: oc create configmap sgconfig --from-file=<download_dir>
* Mount the configmap into each Elasticsearch deployment (e.g. oc get deployments -l component=elasticsearch):
** set "paused" to false
** add a volume to the pod's spec under "volumes":
volumes:
- name: sgconfig
  configMap:
    defaultMode: 420
    name: sgconfig
** add a volumeMount to the "elasticsearch" container under "volumeMounts":
volumeMounts:
- mountPath: /opt/app-root/src/sgconfig
  name: sgconfig
  readOnly: false

Editing the deployment in this way should redeploy each ES pod, which will trigger loading of the new permission files.

[1] https://github.com/openshift/origin-aggregated-logging/tree/84a0f63f29c7a18dc0a473b9a6b2f78bbdcc851f/elasticsearch/sgconfig
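Put together, the relevant portion of each edited Elasticsearch deployment would look roughly like this (a sketch of the steps above, not a complete deployment spec; all fields other than those mentioned in the steps are standard boilerplate):

```yaml
# Fragment of an edited Elasticsearch deployment showing where the
# sgconfig configmap volume and mount go.
apiVersion: apps/v1
kind: Deployment
spec:
  paused: false                  # allow the rollout to proceed
  template:
    spec:
      containers:
      - name: elasticsearch
        volumeMounts:
        - mountPath: /opt/app-root/src/sgconfig
          name: sgconfig
          readOnly: false
      volumes:
      - name: sgconfig
        configMap:
          defaultMode: 420
          name: sgconfig         # the configmap created above
```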
Verified on elasticsearch-operator.4.7.0-202101090911.p0
Hello Roberto, - Bug for OCP 4.6 is Bug#1913483 - Bug for OCP 4.5 is Bug#1913366 Both are in POST status. Regards, Oscar
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Errata Advisory for Openshift Logging 5.0.0), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2021:0652