Bug 1679864 - Cluster-admin user can't get index list
Summary: Cluster-admin user can't get index list
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: low
Severity: low
Target Milestone: ---
Target Release: 4.1.0
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks: 1685792
 
Reported: 2019-02-22 05:13 UTC by Qiaoling Tang
Modified: 2019-06-04 10:44 UTC
CC List: 4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
: 1685792
Environment:
Last Closed: 2019-06-04 10:44:19 UTC
Target Upstream Version:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Github openshift origin-aggregated-logging pull 1533 0 None closed bug 1679864. Allow cluster admins to retrieve indice list 2020-12-01 08:52:03 UTC
Red Hat Product Errata RHBA-2019:0758 0 None None None 2019-06-04 10:44:27 UTC

Comment 1 Jeff Cantrill 2019-02-22 15:28:33 UTC
Please confirm you are a cluster admin:

oc auth can-i view pods/log -n default -t $token

This is how the cluster-admin role is determined on the Elasticsearch side.
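
Note: -t is not a recognized shorthand for oc auth can-i (see the error in comment 3 below); passing the token explicitly should work via the global --token option, for example:

oc auth can-i view pods/log -n default --token="$token"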

Comment 2 Jeff Cantrill 2019-02-22 15:31:56 UTC
Please also confirm you can do this by hitting the service endpoint directly.  It should be unrelated to the fact that Elasticsearch is exposed via a route:

curl --silent --insecure -H "Authorization: Bearer $token" "https://logging-es:9200/_cat/indices?v" | jq

This may require you to hit the service endpoint by IP directly from a node, unless you can resolve the service name.
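
If the service name cannot be resolved, the cluster IP can be looked up first. A minimal sketch, assuming the logging service is named elasticsearch in the openshift-logging namespace (the namespace is an assumption; the service name matches the oc get svc output in comment 3):

es_ip=$( oc -n openshift-logging get svc elasticsearch -o jsonpath='{.spec.clusterIP}' )
curl --silent --insecure -H "Authorization: Bearer $token" "https://${es_ip}:9200/_cat/indices?v"

The _cat/indices output is plain text rather than JSON, so it is not piped to jq here.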

Comment 3 Qiaoling Tang 2019-02-25 05:32:38 UTC
Using the kubeadmin user:

$ oc whoami -t
whTuHxHVAOnx4wpZUt2f_RGfUDpDZlaRTkpd5tTPvZ0

$ oc auth can-i view pods/log -n default -t whTuHxHVAOnx4wpZUt2f_RGfUDpDZlaRTkpd5tTPvZ0
Error: unknown shorthand flag: 't' in -t

$ oc auth can-i view pods/log -n default 
yes

$ oc get svc
NAME                    TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
elasticsearch           ClusterIP   172.30.31.41     <none>        9200/TCP   112m

$ curl --silent --insecure -H "Authorization: Bearer whTuHxHVAOnx4wpZUt2f_RGfUDpDZlaRTkpd5tTPvZ0" https://172.30.31.41:9200/_cat/indices?v |python -mjson.tool
{
    "error": {
        "reason": "no permissions for [indices:monitor/stats] and User [name=kube:admin, roles=[gen_project_operations, gen_kibana_81378af5ec74f2e854679768a37b6d13cf25cbfa, gen_user_81378af5ec74f2e854679768a37b6d13cf25cbfa, prometheus, jaeger]]",
        "root_cause": [
            {
                "reason": "no permissions for [indices:monitor/stats] and User [name=kube:admin, roles=[gen_project_operations, gen_kibana_81378af5ec74f2e854679768a37b6d13cf25cbfa, gen_user_81378af5ec74f2e854679768a37b6d13cf25cbfa, prometheus, jaeger]]",
                "type": "security_exception"
            }
        ],
        "type": "security_exception"
    },
    "status": 403
}

Comment 4 Noriko Hosoi 2019-02-26 19:23:21 UTC
I could reproduce the same issue [1] in my test environment, and I verified that I can see the indices with "oc exec" [2].

[2] oc exec $ESPOD -- curl -s -k --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key "https://localhost:9200/_cat/indices?v"
health status index                  uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .operations.2019.02.25 m1gOWTzgROSZswz6myDSfw   1   1    2619338            0      2.2gb          2.2gb
yellow open   .kibana                _A_z3C7mSzKOS1dUTsD4Kw   1   1          5            0     56.8kb         56.8kb
yellow open   .operations.2019.02.26 i7gyDz4MTNeEEDZvJwxF0g   1   1    2760813            0      3.4gb          3.4gb
yellow open   .searchguard           40q2ua1HQ62EeCF3mqsjwA   1   1          5            0     35.7kb         35.7kb

Just in case, I added the admin-key and admin-cert to the reencrypt route and reran the test, but it did not change the result...  Please note that the admin-key and admin-cert are identical to the ones used in the command line in [2].

Another note: _nodes/stats returns the same error "type":"index_not_found_exception","reason":"no such index",...

If this is a permission problem, where could I look to debug it?

Also, this should be a regression, shouldn't it?  That is, the same operation should have worked on 3.11 (and older)?

[1] curl --silent --insecure -H "Authorization: Bearer $( oc whoami -t )" "https://$( oc get route ... -o jsonpath='{.spec.host}' )/.operations.*/_cat/indices" | jq
{
  "error": {
    "root_cause": [
      {
        "type": "index_not_found_exception",
        "reason": "no such index",
        "resource.type": "index_expression",
        "resource.id": ".operations.*",
        "index_uuid": "_na_",
        "index": ".operations.*"
      }
    ],
    "type": "index_not_found_exception",
    "reason": "no such index",
    "resource.type": "index_expression",
    "resource.id": ".operations.*",
    "index_uuid": "_na_",
    "index": ".operations.*"
  },
  "status": 404
}
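
One place to check which Search Guard user and roles a token is mapped to is the authinfo endpoint. A hedged sketch, assuming that endpoint is enabled in this image and reusing the service address form from comment 3 (replace <es-service-ip> with the actual cluster IP):

curl --silent --insecure -H "Authorization: Bearer $( oc whoami -t )" "https://<es-service-ip>:9200/_searchguard/authinfo" | jq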

Comment 5 Jeff Cantrill 2019-02-27 14:45:22 UTC
Modified the title, as this is unrelated to the route. A forthcoming PR against sg_action_group.yml will include INDICES_ALL in INDEX_ANY_OPERATIONS to allow listing of indices.
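
A rough sketch of the kind of action-group change described above, assuming the Search Guard 5 list-style format of sg_action_group.yml; the pre-existing entry shown is an illustrative placeholder, not the actual file contents:

INDEX_ANY_OPERATIONS:
  - INDICES_ALL            # added so index listing (e.g. _cat/indices) is permitted
  - indices:data/read*     # illustrative placeholder for the existing entries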

Comment 6 Qiaoling Tang 2019-03-04 02:19:33 UTC
Verified in quay.io/openshift/origin-logging-elasticsearch5@sha256:957f768629f77220210027cffd252d657830bcf327ed14880d3f596881c267c4
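
For reference, a sketch of the check that should now succeed, reusing the kubeadmin token and service IP pattern from comment 3 (replace <es-service-ip> with the actual cluster IP); with the fix, the _cat/indices table should come back instead of the 403 security_exception:

curl --silent --insecure -H "Authorization: Bearer $( oc whoami -t )" "https://<es-service-ip>:9200/_cat/indices?v"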

Comment 9 errata-xmlrpc 2019-06-04 10:44:19 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

