Bug 1906765 - Access to the ES root url / from a project's pod on Openshift
Summary: Access to the ES root url / from a project's pod on Openshift
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.7.0
Assignee: Jeff Cantrill
QA Contact: Anping Li
Docs Contact: Rolfe Dlugy-Hegwer
URL:
Whiteboard: logging-exploration
Depends On:
Blocks: 1913483
 
Reported: 2020-12-11 11:44 UTC by Oscar Casal Sanchez
Modified: 2024-03-25 17:30 UTC
CC: 10 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
* Previously, queries to the root endpoint to retrieve the Elasticsearch version received a 403 response. The 403 response broke any services that used this endpoint in prior releases. This error happened because non-administrative users did not have the MONITOR permission needed to query the root endpoint and retrieve the Elasticsearch version. The current release fixes this issue: It updates the permission set to allow querying the root endpoint for the deployed version of Elasticsearch. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1906765[*BZ#1906765*])
Clone Of:
Environment:
Last Closed: 2021-02-24 11:22:30 UTC
Target Upstream Version:
Embargoed:




Links
* GitHub: openshift origin-aggregated-logging pull 2030 (closed), "Bug 1906765: Allow project users to view root endpoint" (last updated 2021-02-08 15:50:25 UTC)
* Red Hat Knowledge Base (Solution) 5643291 (last updated 2020-12-11 12:10:40 UTC)
* Red Hat Product Errata RHBA-2021:0652 (last updated 2021-02-24 11:23:16 UTC)

Description Oscar Casal Sanchez 2020-12-11 11:44:27 UTC
[Description of problem]
The same issue that was reported in BZ#1710868 for OCP 3.11 and BZ#1722959 for OCP 4.2 is seen in OCP 4.5.


[Version-Release number of selected component (if applicable):]
OCP 4.5.x


[How reproducible]
Always


[Steps to Reproduce:]

### Log in as a normal user without admin rights
$ oc new-project test
### Create the service account test
$ oc create sa test
### Create a RoleBinding granting the ClusterRole view to the service account test
$ cat rolebinding.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: test-view
  namespace: test 
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: test
  namespace: test 
$ oc create -f rolebinding.yaml
### Get the token 
$ token=$(oc whoami -t)

### As admin user, follow the documentation to expose the log store service as a route [1]

### As the service account test, query the Elasticsearch root endpoint /; the request is rejected with HTTP response code 403
$ curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}/" 
{"error":{"root_cause":[{"type":"security_exception","reason":"no permissions for [cluster:monitor/main] and User [name=quicklab, roles=[project_user], requestedTenant=null]"}],"type":"security_exception","reason":"no permissions for [cluster:monitor/main] and User [name=quicklab, roles=[project_user], requestedTenant=null]"},"status":403}


[Actual results]
It fails with HTTP response code 403


[Expected results]
It returns:

~~~
$ curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}/" 
{
  "name" : "elasticsearch-cdm-qelvol0j-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "v2pP4XGoSUeqmo3r9-_1yQ",
  "version" : {
    "number" : "5.6.16",
    "build_hash" : "8dc130e",
    "build_date" : "2019-09-10T20:07:09.564Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
~~~
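
Since the Doc Text above notes that other services rely on this endpoint to retrieve the Elasticsearch version, here is a minimal sketch of such a query (assuming the same ${routeES} and ${token} variables, and that jq is available on the client; jq is not part of the product and is used here only to pull out the version field):

~~~
# Hedged example: extract only the deployed version number from the root endpoint.
curl -s --tlsv1.2 --insecure -H "Authorization: Bearer ${token}" \
  "https://${routeES}/" | jq -r '.version.number'
~~~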

Comment 1 Jeff Cantrill 2020-12-11 15:13:22 UTC
(In reply to Oscar Casal Sanchez from comment #0)
> [Steps to Reproduce:]
> [...]
> ### As the service account test, query the Elasticsearch root endpoint /; the request is rejected with HTTP response code 403
> $ curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}/"
> {"error":{"root_cause":[{"type":"security_exception","reason":"no permissions for [cluster:monitor/main] and User [name=quicklab, roles=[project_user], requestedTenant=null]"}],"type":"security_exception","reason":"no permissions for [cluster:monitor/main] and User [name=quicklab, roles=[project_user], requestedTenant=null]"},"status":403}

This SA does not have the proper permissions and was evaluated to be a "project_user". Why do you believe normal users should have these permissions? They are only granted to admin users:

[1] https://github.com/openshift/origin-aggregated-logging/blob/master/elasticsearch/sgconfig/roles.yml#L150
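
For anyone triaging a similar report, a hedged way to inspect the role definitions actually deployed in the cluster (the container name, label, and mount path are taken from comment 11 below; the openshift-logging namespace and pod label are assumptions):

~~~
# Sketch only: print the project_user role definition from the Search Guard
# config inside a running ES pod to see which cluster permissions it grants.
espod=$(oc -n openshift-logging get pods -l component=elasticsearch -o name | head -1)
oc -n openshift-logging exec "${espod}" -c elasticsearch -- \
  cat /opt/app-root/src/sgconfig/roles.yml | grep -B 2 -A 6 project_user
~~~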

Comment 2 Oscar Casal Sanchez 2020-12-11 15:27:26 UTC
Hello Jeff,

This exact configuration worked in OCP 4.4, so the customer's assertion is fair: they were using this in the previous version, and it stopped working after upgrading to OCP 4.5.

Therefore, something changed in OCP 4.5 with respect to the roles compared to the previously delivered versions.

Regards,
Oscar

Comment 3 Oscar Casal Sanchez 2020-12-11 15:44:56 UTC
Hello Jeff,

To show you the behaviour in OCP 4.4:

- A normal user without privileges following the same steps on OCP 4.4:
~~~
### The user only has access to their own project, test, where the service account test was created
$ oc get projects
NAME   DISPLAY NAME   STATUS
test                  Active

$ oc whoami
quicklab

$ token=$(oc whoami -t)

$ curl -tlsv1.2 --insecure -H "Authorization: Bearer ${token}" "https://${routeES}/" 
{
  "name" : "elasticsearch-cdm-qelvol0j-1",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "v2pP4XGoSUeqmo3r9-_1yQ",
  "version" : {
    "number" : "5.6.16",
    "build_hash" : "8dc130e",
    "build_date" : "2019-09-10T20:07:09.564Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}


$ oc login -u system:admin
$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.4.30    True        False         5h32m   Cluster version is 4.4.30
~~~

Before opening this bug, I checked the sg_roles you mentioned for both OCP 4.5 and OCP 4.4, and at the same time I tried to reproduce the error on OCP 4.4; I was not able to, as you can see above.

The behaviour has changed in 4.5: it is no longer possible to do what could be done before, which impacts the customer. You can see the previous bugs opened for the same issue, BZ#1710868 for OCP 3.11 and BZ#1722959 for OCP 4.2.

I'm aware that it behaves this way because OCP 4.5 was written that way, but it does not match the behaviour prior to OCP 4.5, where it was possible to access the Elasticsearch root endpoint /.


Regards,
Oscar

Comment 6 Jeff Cantrill 2020-12-14 21:43:07 UTC
Confirmed the behavior in 4.6 by adding the "MONITORING" permission to the "project_user" role. This allows ordinary users access to the root URL. Added a PR and marked it to be backported to 4.6. Users could work around this issue by granting the specific SA permission to 'view pods/logs' in the 'default' namespace, though this would also give them access to view logs across the cluster.
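
A rough sketch of the SA-permission workaround mentioned above (the role and binding names are made up for illustration; note the caveat that this also exposes logs across the cluster):

~~~
# Hypothetical illustration of the workaround in this comment: allow the
# service account 'test' in namespace 'test' to view pod logs in 'default'.
# The names 'pod-log-view' and 'test-pod-log-view' are examples only.
oc create role pod-log-view --verb=get,list --resource=pods/log -n default
oc create rolebinding test-pod-log-view --role=pod-log-view \
  --serviceaccount=test:test -n default
~~~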

Comment 11 Jeff Cantrill 2020-12-18 13:29:38 UTC
A workaround is to include the permission changes associated with the linked pull request. Note that these changes require setting the ClusterLogging instance to unmanaged, which has the following implications:

* The logging stack will no longer reconcile changes, including image updates
* Returning to "Managed" will revert all changes, which will need to be reapplied if the update does not include the fix


The steps (unverified) are as follows; a consolidated command sketch follows after the footnote:

* Download the permission files from the pull request [1]
* Edit the clusterlogging instance and set it to "Unmanaged"
* Create a configmap from the permission files, for example: oc create configmap sgconfig --from-file=<download_dir>
* Mount the configmap into each Elasticsearch deployment (e.g. oc get deployments -l component=elasticsearch)
 ** set "paused" to false
 ** add a volume to the pod's spec under "volumes":
    volumes:
    - name: sgconfig
      configMap:
        defaultMode: 420
        name: sgconfig
 ** add a volume mount to the "elasticsearch" container under "volumeMounts":
    volumeMounts:
    - mountPath: /opt/app-root/src/sgconfig
      name: sgconfig
      readOnly: false

Editing the deployments in this way should redeploy each ES pod, which will trigger loading of the new permission files.


[1] https://github.com/openshift/origin-aggregated-logging/tree/84a0f63f29c7a18dc0a473b9a6b2f78bbdcc851f/elasticsearch/sgconfig
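
An unverified consolidation of the steps above as commands; the ClusterLogging resource name "instance", the "openshift-logging" namespace, and the container index are the usual defaults and assumed here, and the patches assume the volumes/volumeMounts arrays already exist in the deployments:

~~~
# Sketch only: adjust names, namespace, and container index to your cluster before use.
oc -n openshift-logging patch clusterlogging instance --type merge \
  -p '{"spec":{"managementState":"Unmanaged"}}'

oc -n openshift-logging create configmap sgconfig --from-file=<download_dir>

for d in $(oc -n openshift-logging get deployments -l component=elasticsearch -o name); do
  oc -n openshift-logging patch "$d" --type json -p '[
    {"op":"add","path":"/spec/paused","value":false},
    {"op":"add","path":"/spec/template/spec/volumes/-",
     "value":{"name":"sgconfig","configMap":{"name":"sgconfig","defaultMode":420}}},
    {"op":"add","path":"/spec/template/spec/containers/0/volumeMounts/-",
     "value":{"name":"sgconfig","mountPath":"/opt/app-root/src/sgconfig","readOnly":false}}
  ]'
done
~~~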

Comment 14 Anping Li 2021-01-11 03:50:59 UTC
Verified on elasticsearch-operator.4.7.0-202101090911.p0

Comment 16 Oscar Casal Sanchez 2021-01-14 07:37:45 UTC
Hello Roberto,

- Bug for OCP 4.6 is Bug#1913483
- Bug for OCP 4.5 is Bug#1913366

Both are in POST status. 

Regards,
Oscar

Comment 21 errata-xmlrpc 2021-02-24 11:22:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Errata Advisory for Openshift Logging 5.0.0), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0652

