Bug 1766187 - Authentication "500 Internal Error"
Summary: Authentication "500 Internal Error"
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.2.0
Hardware: x86_64
OS: Linux
Target Milestone: ---
Target Release: 4.3.0
Assignee: ewolinet
QA Contact: Anping Li
Depends On:
Reported: 2019-10-28 14:23 UTC by Gabriel Virga
Modified: 2021-03-01 08:42 UTC
CC: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2020-01-23 11:09:38 UTC
Target Upstream Version:

Attachments
the clo, fluentd, kibana resources and clo logs (210.00 KB, application/x-tar)
2019-11-25 12:59 UTC, Anping Li

External trackers:
- GitHub openshift/cluster-logging-operator issue 261 (closed): 500 Internal Error Additional Trusted CA Bundle missing (last updated 2021-02-17 12:49:53 UTC)
- GitHub openshift/cluster-logging-operator pull 255 (closed): Bug 1752725: Log into kibana console get `504 Gateway Time-out The server didn't respond in time. ` when http_proxy enab... (last updated 2021-02-17 12:49:54 UTC)
- Red Hat Product Errata RHBA-2020:0062 (2020-01-23 11:09:58 UTC)

Description Gabriel Virga 2019-10-28 14:23:32 UTC
Description of problem:
I installed the latest OpenShift 4.2 version and used the "additionalTrustBundle:" field to add our internal intermediate and root CA chains.
The kibana proxy sidecar does not receive the additionalTrustBundle.

How reproducible:
Every install using additionalTrustBundle

Steps to Reproduce:
1. Install OpenShift 4.2 with an additionalTrustBundle for a self-signed certificate
2. Deploy the logging operator following the documented procedure
3. Try to authenticate to
- https://kibana-openshift-logging.apps.ose.company.com/
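
For reference, the trust bundle in step 1 is supplied at install time. A minimal sketch of the relevant install-config.yaml stanza (the PEM content below is a placeholder, not taken from this report):

```yaml
# install-config.yaml (excerpt) -- additionalTrustBundle carries the
# internal intermediate/root CA chain as a literal PEM block
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ...internal intermediate and root CA certificates...
  -----END CERTIFICATE-----
```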

Actual results:
Browser error "500 Internal Error"

# Kibana-proxy container
$ oc logs -c kibana-proxy kibana-5f6cb5bf7f-zrhvm | grep TLS
I1017 00:11:10.507334 1 log.go:172] http: TLS handshake error from EOF
I1017 00:18:13.698951 1 log.go:172] http: TLS handshake error from remote error: tls: bad certificate
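
The `tls: bad certificate` error above is the generic symptom of a client that does not trust the serving certificate's issuer. A minimal sketch of the trust relationship, using a throwaway self-signed certificate as a stand-in for the internal root CA (file names here are illustrative, not from the cluster):

```shell
# Generate a stand-in for the internal root CA (illustrative only)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca-bundle.crt \
  -days 1 -subj "/CN=internal-root-ca" 2>/dev/null

# Verification succeeds only when the issuing CA is present in the bundle;
# without it, clients fail with errors like the "bad certificate" above
openssl verify -CAfile ca-bundle.crt ca-bundle.crt
# -> ca-bundle.crt: OK
```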

Expected results:
Authentication succeeds and the Kibana console loads.
Additional info:
CASE 02497459

# To fix Kibana I set the operator to Unmanaged, then
I manually created the configMap "trusted-ca-bundle".
Under the kibana-proxy container's volumeMounts I added:
            - name: trusted-ca-bundle
              readOnly: true
              mountPath: /etc/pki/ca-trust/extracted/pem

Under volumes I added:
        - name: trusted-ca-bundle
          configMap:
            name: trusted-ca-bundle
            items:
              - key: ca-bundle.crt
                path: tls-ca-bundle.pem
            defaultMode: 420
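
Assembled, the workaround above amounts to something like the following; this is a sketch reconstructed from the fragments in this report, and the ClusterLogging resource name/namespace are the usual defaults, not quoted here verbatim:

```yaml
# Set the operator to Unmanaged so it stops reconciling the kibana deployment
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Unmanaged
---
# Fragment of the kibana Deployment pod spec after the manual edit
spec:
  template:
    spec:
      containers:
        - name: kibana-proxy
          volumeMounts:
            - name: trusted-ca-bundle
              readOnly: true
              mountPath: /etc/pki/ca-trust/extracted/pem
      volumes:
        - name: trusted-ca-bundle
          configMap:
            name: trusted-ca-bundle
            items:
              - key: ca-bundle.crt
                path: tls-ca-bundle.pem
            defaultMode: 420
```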

Comment 3 Anping Li 2019-11-25 12:59:01 UTC
Created attachment 1639474 [details]
the clo, fluentd,kibana resource and clo logs

Kibana couldn't be accessed. There was no HTTP_PROXY env var in the kibana pod, and the configmap kibana-trusted-ca-bundle wasn't mounted into the kibana pod.

Comment 5 Anping Li 2019-11-25 13:35:54 UTC
Note: Kibana works well even though HTTP_PROXY is not set and kibana-trusted-ca-bundle wasn't mounted.

Comment 7 Jeff Cantrill 2019-12-02 22:03:04 UTC
4.4 additional fixes: https://github.com/openshift/cluster-logging-operator/pull/305

Comment 9 Anping Li 2019-12-12 11:08:41 UTC
Kibana can be displayed when the proxy is enabled. Will launch a new cluster to verify that the additional CA works.

Comment 10 Anping Li 2019-12-12 11:21:31 UTC
The trusted CA file is mounted at /etc/pki/ca-trust/extracted/pem/ in both the fluentd and kibana pods.

Comment 12 errata-xmlrpc 2020-01-23 11:09:38 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

