Created attachment 1677231 [details]
default-kibana-route.yaml

Description of problem:

After deploying cluster logging on OCP 4.3.x, the default kibana route that gets created is inaccessible. The kibana pod is up and running, but accessing the kibana route returns "Application not available", and the kibana-proxy container shows the following logs:

$ oc logs -c kibana-proxy <kibana-pod-name>
2020/04/07 13:22:24 oauthproxy.go:200: mapping path "/" => upstream "http://localhost:5601/"
2020/04/07 13:22:24 oauthproxy.go:227: OAuthProxy configured for Client ID: system:serviceaccount:openshift-logging:kibana
2020/04/07 13:22:24 oauthproxy.go:237: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> refresh:disabled
2020/04/07 13:22:24 http.go:106: HTTPS: listening on [::]:3000
2020/04/07 13:22:24 http.go:60: HTTP: listening on 127.0.0.1:4180
2020/04/07 13:22:30 server.go:3012: http: TLS handshake error from 10.130.0.1:34356: remote error: tls: unknown certificate authority
2020/04/07 13:22:30 server.go:3012: http: TLS handshake error from 10.131.0.1:41272: remote error: tls: unknown certificate authority
2020/04/07 14:24:52 server.go:3012: http: TLS handshake error from 10.131.0.1:53350: remote error: tls: unknown certificate authority
2020/04/07 14:24:53 server.go:3012: http: TLS handshake error from 10.131.0.1:53364: remote error: tls: unknown certificate authority
2020/04/07 14:24:54 server.go:3012: http: TLS handshake error from 10.131.0.1:53388: remote error: tls: unknown certificate authority
2020/04/07 14:24:55 server.go:3012: http: TLS handshake error from 10.131.0.1:53412: remote error: tls: unknown certificate authority
2020/04/07 14:25:09 server.go:3012: http: TLS handshake error from 10.131.0.1:53670: remote error: tls: unknown certificate authority
2020/04/07 14:25:11 server.go:3012: http: TLS handshake error from 10.131.0.1:53690: remote error: tls: unknown certificate authority
2020/04/07 14:25:12 server.go:3012: http: TLS handshake error from 10.131.0.1:53714: remote error: tls: unknown certificate authority
2020/04/07 14:25:13 server.go:3012: http: TLS handshake error from 10.131.0.1:53740: remote error: tls: unknown certificate authority
2020/04/07 14:41:07 server.go:3012: http: TLS handshake error from 10.131.0.1:44280: remote error: tls: unknown certificate authority
2020/04/07 14:41:08 server.go:3012: http: TLS handshake error from 10.131.0.1:44300: remote error: tls: unknown certificate authority
2020/04/07 14:41:09 server.go:3012: http: TLS handshake error from 10.131.0.1:44324: remote error: tls: unknown certificate authority

To resolve the issue, we need to delete the default route and let the operator create a new one; once the new route is created, it is accessible (see the sketch under "Additional info" below).

1] A curl request to the default route returns 503 Service Unavailable.
2] Comparing the default route with the recreated route shows that the old kibana route was missing the caCertificate and destinationCACertificate fields; after recreation these two fields are present.

Version-Release number of selected component (if applicable):
KIBANA_VER=5.6.16
BUILD_VERSION=v4.3.10

How reproducible:
1. Deploy cluster logging.
2. Try to access the kibana route.

Expected results:
1. The kibana route should be accessible.
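Additional info:

A minimal sketch of the workaround described above. It assumes the default openshift-logging namespace and that the route is named "kibana"; adjust if your deployment differs.

Check the current route (on the broken route caCertificate comes back empty):
$ oc -n openshift-logging get route kibana -o jsonpath='{.spec.tls.caCertificate}'

Delete the route and let the cluster-logging-operator recreate it:
$ oc -n openshift-logging delete route kibana

Once the route is recreated, both CA fields should be populated:
$ oc -n openshift-logging get route kibana -o jsonpath='{.spec.tls.caCertificate}'
$ oc -n openshift-logging get route kibana -o jsonpath='{.spec.tls.destinationCACertificate}'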
Created attachment 1677232 [details]
recreated-kibana-route.yaml
This looks to have the same route issue as https://bugzilla.redhat.com/show_bug.cgi?id=1781492. I'll cherry-pick that fix for 4.3.
Verified in clusterlogging.4.3.14-202004231410

1. Check the kibana route status and access Kibana:

# oc get route kibana -o json | jq '.spec.tls.caCertificate'

# oc get pods
NAME                                            READY   STATUS      RESTARTS   AGE
cluster-logging-operator-74b5785596-5p8rx       1/1     Running     0          10m
curator-1587714600-qq77f                        0/1     Completed   0          4m4s
elasticsearch-cdm-cs4flf8z-1-59dbfdc784-n4hl6   2/2     Running     0          8m31s
elasticsearch-cdm-cs4flf8z-2-7d7b4b84b-zs65m    2/2     Running     0          8m31s
elasticsearch-cdm-cs4flf8z-3-67c8785f5c-pd8wn   2/2     Running     0          8m31s
fluentd-4pp9z                                   1/1     Running     1          5h17m
fluentd-bkfzf                                   1/1     Running     0          5h17m
fluentd-hj9jb                                   1/1     Running     0          5h17m
fluentd-nbtk9                                   1/1     Running     1          5h17m
fluentd-sm2nz                                   1/1     Running     0          5h17m
fluentd-strtp                                   1/1     Running     0          5h17m
kibana-67f5f6774b-bqt22                         2/2     Running     0          9m18s

2. Refresh the CA cached by the cluster-logging-operator:

# oc exec $clo_pod -- rm -rf /tmp/ocp-clo
# oc delete deployment cluster-logging-operator
# oc exec $clo_pod -- rm -rf /tmp/ocp-clo

3. Verify that the kibana pod was restarted, the kibana route was updated, and the kibana route is accessible.
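A rough sketch of the route checks behind steps 1 and 3, for anyone re-running the verification. The namespace and the interpretation of the curl status code are my assumptions, not part of the verification record above.

Confirm the recreated route carries both CA fields:
$ oc -n openshift-logging get route kibana -o json | jq '.spec.tls.caCertificate, .spec.tls.destinationCACertificate'

Confirm the route now reaches kibana-proxy instead of returning the router's 503:
$ kibana_host=$(oc -n openshift-logging get route kibana -o jsonpath='{.spec.host}')
$ curl -k -s -o /dev/null -w '%{http_code}\n' https://$kibana_host
(anything other than 503, typically a 302 redirect to the OAuth login, indicates the route is reaching the kibana-proxy container)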
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:1529