Bug 1666674

Summary: Calling <kibana_url>/api/status results in being redirected to an OAuth login page that does not support challenge authentication in 3.11.
Product: OpenShift Container Platform
Component: Logging
Version: 3.11.0
Target Release: 3.11.z
Hardware: x86_64
OS: Linux
Status: CLOSED WONTFIX
Severity: high
Priority: unspecified
Reporter: Vedanti Jaypurkar <vjaypurk>
Assignee: Jeff Cantrill <jcantril>
QA Contact: Anping Li <anli>
CC: aos-bugs, jcantril, mkhan, rmeggins, suchaudh, travi
Type: Bug
Doc Type: No Doc Update
Last Closed: 2019-04-02 21:06:52 UTC

Description Vedanti Jaypurkar 2019-01-16 10:52:24 UTC
Description of problem:

As part of our health checks and cluster-build CI/CD process, we check the <kibana_url>/api/status endpoint for Kibana to ensure that it is up and healthy.

We could previously do this by authorizing against the cluster's /oauth/authorize endpoint with the following parameters:

{"client_id": "openshift-challenging-client",
  "response_type": "token"}

Using the resulting token in the Authorization header, we could successfully call <kibana_url>/api/status.
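
For reference, a minimal sketch of that pre-3.11 flow (credentials, master URL, and port here are placeholders, not values from this report):

# Get a token from the challenging client; it comes back in the
# fragment of the 302 Location header, not in the response body.
curl -sku "$USER:$PASSWORD" -H "X-CSRF-Token: 1" -o /dev/null -D - \
  "https://<master_url>:<port>/oauth/authorize?client_id=openshift-challenging-client&response_type=token" \
  | grep -o 'access_token=[^&]*'

# Use the token to call the Kibana status endpoint.
curl -k -H "Authorization: Bearer <token>" https://<kibana_url>/api/status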

However, with 3.11, calling <kibana_url>/api/status results in being redirected to an OAuth login page that does not support challenge authentication.

How can we programmatically call the <kibana_url>/api/status endpoint with 3.11?

Version-Release number of selected component (if applicable):
3.11


Actual results:
Output of: # curl -k -H "Authorization: Bearer $(oc whoami -t)" -H "X-Proxy-Remote-User: $(oc whoami)" -H "X-Forwarded-For: 127.0.0.1" https://logging-kibana.openshift-logging.svc/api/status -v

* About to connect() to logging-kibana.openshift-logging.svc port <> (#0)
*   Trying <ip>...
* Connected to logging-kibana.openshift-logging.svc (<ip>) port <> (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate:
*       subject: CN=" <>"
*       start date: Dec 04 03:22:05 2018 GMT
*       expire date: Dec 03 03:22:06 2020 GMT
*       common name:  <>
*       issuer: CN=<>
> GET /api/status HTTP/1.1
> User-Agent: curl/7.29.0
> Host: logging-kibana.openshift-logging.svc
> Accept: */*
> Authorization: Bearer <>
> X-Proxy-Remote-User: <>
> X-Forwarded-For: 127.0.0.1
>
< HTTP/1.1 302 Found
< Content-Type: text/html; charset=utf-8
< Location: https://<example.com>:<port>/oauth/authorize?approval_prompt=force&client_id=kibana-proxy&redirect_uri=https%3A%2F%2Flogging-kibana.openshift-logging.svc%2Foauth%2Fcallback&response_type=code&scope=user%3Ainfo+user%3Acheck-access+user%3Alist-projects&state=757a6a76a8fd047ce76c9552861a8504%3A%2Fapi%2Fstatus
< Set-Cookie: _oauth_proxy_csrf=757a6a76a8fd047ce76c9552861a8504; Path=/; Domain=logging-kibana.openshift-logging.svc; Expires=Wed, 16 Jan 2019 23:54:45 GMT; HttpOnly; Secure
< Date: Wed, 09 Jan 2019 23:54:45 GMT
< Content-Length: 371
<
<a href="https://<example.com>:<port>/oauth/authorize?approval_prompt=force&amp;client_id=kibana-proxy&amp;redirect_uri=https%3A%2F%2Flogging-kibana.openshift-logging.svc%2Foauth%2Fcallback&amp;response_type=code&amp;scope=user%3Ainfo+user%3Acheck-access+user%3Alist-projects&amp;state=757a6a76a8fd047ce76c9552861a8504%3A%2Fapi%2Fstatus">Found</a>.

* Connection #0 to host logging-kibana.openshift-logging.svc left intact

Output of: # curl -kv -H "Authorization: Bearer $(oc whoami -t)" -H "X-Proxy-Remote-User: $(oc whoami)" -H "X-Forwarded-For: 127.0.0.1" "https://<example.com>:<port>/oauth/authorize?approval_prompt=force&client_id=kibana-proxy&redirect_uri=https%3A%2F%2Flogging-kibana.openshift-logging.svc%2Foauth%2Fcallback&response_type=code&scope=user%3Ainfo+user%3Acheck-access+user%3Alist-projects&state=757a6a76a8fd047ce76c9552861a8504%3A%2Fapi%2Fstatus"

* About to connect() to <example.com> port <> (#0)
*   Trying <ip>...
* Connected to <example.com> (ip) port <> (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* NSS: client certificate not found (nickname not specified)
* SSL connection using TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256
* Server certificate:
*       subject: CN=<example.com>,OU=IT,O=<>,L=<>,ST=<>,C=<>
*       start date: Dec 04 02:16:55 2018 GMT
*       expire date: Dec 03 02:16:55 2020 GMT
*       common name: example.com
*       issuer: CN=<> SHA2 Issuing CA-11,DC=<>,DC=<>,DC=<>
> GET /oauth/authorize?approval_prompt=force&client_id=kibana-proxy&redirect_uri=https%3A%2F%2Flogging-kibana.openshift-logging.svc%2Foauth%2Fcallback&response_type=code&scope=user%3Ainfo+user%3Acheck-access+user%3Alist-projects&state=757a6a76a8fd047ce76c9552861a8504%3A%2Fapi%2Fstatus HTTP/1.1
> User-Agent: curl/7.29.0
> Host: <example.com>
> Accept: */*
> Authorization: Bearer <>
> X-Proxy-Remote-User: A334659
> X-Forwarded-For: 127.0.0.1
>
< HTTP/1.1 400 Bad Request
< Audit-Id: ffa7724a-90c0-4c8b-adbb-638bdb24959c
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Content-Type: application/json
< Expires: Fri, 01 Jan 1990 00:00:00 GMT
< Pragma: no-cache
< Date: Wed, 09 Jan 2019 23:55:34 GMT
< Content-Length: 251
<
{"error":"invalid_request","error_description":"The request is missing a required parameter, includes an invalid parameter value, includes a parameter more than once, or is otherwise malformed.","state":"757a6a76a8fd047ce76c9552861a8504:/api/status"}
* Connection #0 to host <example.com> left intact

Expected results:
The request should succeed and return the status of the Elasticsearch cluster without any error.

Additional info:

Comment 1 Jeff Cantrill 2019-01-16 17:08:54 UTC
Mo,

The change here is the move in 3.11 to front Kibana with the OAuth proxy instead of our own proxy from 3.10 and prior. Should the bearer token be enough? What more might we be missing to allow access via the CLI?

Comment 2 Mo 2019-02-04 04:26:50 UTC
At the very least, you are missing the redirect URI for openshift-challenging-client. At log level 2+, you should be able to see what is upsetting the OAuth server.

As an aside, you should not be using openshift-challenging-client or a "real" OAuth client like kibana-proxy. Based on the request scopes, a service-account-based OAuth client will work without issue [1].

[1] https://docs.okd.io/latest/architecture/additional_concepts/authentication.html#service-accounts-as-oauth-clients
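
For reference, a rough sketch of that service-account approach (the service account name and namespace are illustrative, not from this bug):

# A service account becomes an OAuth client once it carries a redirect
# URI annotation; client_id is then system:serviceaccount:<ns>:<name>
# and the client secret is the service account's API token.
# (name and namespace are assumptions for illustration)
oc create sa kibana-healthcheck -n openshift-logging
oc annotate sa kibana-healthcheck -n openshift-logging \
  serviceaccounts.openshift.io/oauth-redirecturi.first=https://logging-kibana.openshift-logging.svc/oauth/callback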

Comment 4 Mo 2019-03-14 21:12:03 UTC
Trying to authenticate via OAuth from the CLI seems incorrect.  The openshift-delegate-urls [1] parameter for the OAuth proxy is used to configure bearer-token-based auth, which is what I would expect to be used here.

Jeff, how is OAuth proxy configured for kibana?

[1] https://github.com/openshift/origin/blob/master/examples/prometheus/prometheus.yaml#L271
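
For context, the flag maps URL prefixes to SubjectAccessReview attributes; its general shape (illustrative, not a quote of the linked file) is:

-openshift-delegate-urls={"/": {"resource": "namespaces", "verb": "get"}}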

Comment 5 Jeff Cantrill 2019-03-15 15:11:11 UTC
(In reply to Mo from comment #4)
> Trying to authenticate via OAuth from the CLI seems incorrect.  The
> openshift-delegate-urls [1] parameter for the OAuth proxy is used to
> configure bearer-token-based auth, which is what I would expect to be
> used here.
> 
> Jeff, how is OAuth proxy configured for kibana?

https://github.com/openshift/openshift-ansible/blob/release-3.11/roles/openshift_logging_kibana/templates/kibana.j2#L98-L110

Comment 6 Mo 2019-03-15 16:53:43 UTC
The Kibana OAuth proxy configuration needs to be updated to include something like:

- '-openshift-sar={"resource": "selfsubjectaccessreviews", "verb": "create", "group": "authorization.k8s.io"}'
- '-openshift-delegate-urls={"/": {"resource": "selfsubjectaccessreviews", "verb": "create", "group": "authorization.k8s.io"}}'

This will allow it to honor bearer tokens directly, as long as the OAuth proxy service account has a cluster role binding to system:auth-delegator.

The noted SAR checks are permitted for all users (this is fine because Kibana handles the authz checks itself).
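
For reference, granting that binding might look like this (the service account name is an assumption; 3.11 logging deployments typically run Kibana under aggregated-logging-kibana):

# Allow the proxy's service account to delegate authn/authz decisions
# (TokenReview/SubjectAccessReview) to the API server.
# (service account name assumed)
oc adm policy add-cluster-role-to-user system:auth-delegator \
  -z aggregated-logging-kibana -n openshift-logging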

Comment 8 Jeff Cantrill 2019-04-01 18:18:43 UTC
Reverting the PR because it introduced a regression.

Comment 9 Jeff Cantrill 2019-04-01 18:19:22 UTC
A possible fix is:

Jeff Cantrill [2:13 PM]
create a custom role, bind to system:authenticated

something like view kibana/status
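
A hypothetical sketch of that idea (the role name, API group, and resource are invented for illustration; this is not a merged fix):

# A cluster role whose only purpose is to gate the proxy's SAR check,
# bound to system:authenticated so any logged-in user passes it.
# (role name, apiGroup, and resource are illustrative)
oc create -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: view-kibana-status
rules:
- apiGroups: ["logging.openshift.io"]
  resources: ["kibana/status"]
  verbs: ["get"]
EOF
oc adm policy add-cluster-role-to-group view-kibana-status system:authenticated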

Comment 11 Red Hat Bugzilla 2023-09-14 04:45:11 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.