Description of problem:
After upgrading Logging to 4.5, Kibana cannot be logged in to. It pops up the message below.
{"statusCode":401,"error":"Unauthorized","message":"Authentication Exception"}

Version-Release number of selected component (if applicable):
4.5.0

How reproducible:
Twice

Steps to Reproduce:
1. Deploy clusterlogging 4.4 on OCP 4.4
2. Upgrade OCP to 4.5
3. Upgrade clusterlogging to 4.5, working around bug 1841832 by deleting the 'networkpolicy' objects and the '.kibana' indices (see the sketch after this report)
4. Log in to Kibana

Actual results:
The user cannot log in to Kibana. Kibana pops up a login window (snapshot attached).

Expected results:
The user can log in to Kibana.

Additional info:
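For reference, step 3's workaround was applied with commands along these lines. This is a sketch only: the label selector and the es_util helper (the authenticated curl wrapper shipped in the elasticsearch image) are assumptions, not the exact commands used during the upgrade.

$ # Remove the NetworkPolicy objects in the logging namespace (bug 1841832 workaround)
$ oc delete networkpolicy --all -n openshift-logging
$ # Delete the .kibana indices from Elasticsearch via the in-container curl wrapper
$ ES_POD=$(oc get pods -n openshift-logging -l component=elasticsearch -o name | head -n 1)
$ oc exec -n openshift-logging -c elasticsearch "${ES_POD}" -- es_util --query=".kibana*" -X DELETE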
Created attachment 1694060 [details] Kibana Popup Windows
All components appear to be working well, but there are TLS errors in the oauth pod and the elasticsearch pod.

$ oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
cloud-credential                           4.5.0-0.nightly-2020-05-30-025738   True        False         False      22h
cluster-autoscaler                         4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
config-operator                            4.5.0-0.nightly-2020-05-30-025738   True        False         False      7h23m
console                                    4.5.0-0.nightly-2020-05-30-025738   True        False         False      6h52m
csi-snapshot-controller                    4.5.0-0.nightly-2020-05-30-025738   True        False         False      6h56m
dns                                        4.5.0-0.nightly-2020-05-30-025738   True        False         False      22h
etcd                                       4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
image-registry                             4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
ingress                                    4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
insights                                   4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
kube-apiserver                             4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
kube-controller-manager                    4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
kube-scheduler                             4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
kube-storage-version-migrator              4.5.0-0.nightly-2020-05-30-025738   True        False         False      6h59m
machine-api                                4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
machine-approver                           4.5.0-0.nightly-2020-05-30-025738   True        False         False      7h19m
machine-config                             4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
marketplace                                4.5.0-0.nightly-2020-05-30-025738   True        False         False      6h55m
monitoring                                 4.5.0-0.nightly-2020-05-30-025738   True        False         False      7h16m
network                                    4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
node-tuning                                4.5.0-0.nightly-2020-05-30-025738   True        False         False      7h19m
openshift-apiserver                        4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
openshift-controller-manager               4.5.0-0.nightly-2020-05-30-025738   True        False         False      7h16m
openshift-samples                          4.5.0-0.nightly-2020-05-30-025738   True        False         False      7h18m
operator-lifecycle-manager                 4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
operator-lifecycle-manager-catalog         4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
operator-lifecycle-manager-packageserver   4.5.0-0.nightly-2020-05-30-025738   True        False         False      6h51m
service-ca                                 4.5.0-0.nightly-2020-05-30-025738   True        False         False      22h
storage                                    4.5.0-0.nightly-2020-05-30-025738   True        False         False      7h19

# oc logs oauth-openshift-9c8c46d64-xscv5
I0601 11:49:17.210422       1 log.go:172] http: TLS handshake error from 10.128.2.12:33798: remote error: tls: unknown certificate
I0601 11:49:17.339308       1 log.go:172] http: TLS handshake error from 10.128.2.12:33806: remote error: tls: unknown certificate
I0601 11:49:20.743246       1 log.go:172] http: TLS handshake error from 10.128.2.12:33900: remote error: tls: unknown certificate
I0601 11:49:20.745711       1 log.go:172] http: TLS handshake error from 10.128.2.12:33902: remote error: tls: unknown certificate
I0601 11:49:21.311074       1 log.go:172] http: TLS handshake error from 10.128.2.12:33910: EOF
I0601 11:49:21.396607       1 log.go:172] http: TLS handshake error from 10.128.2.12:33916: EOF
I0601 11:49:24.498083       1 log.go:172] http: TLS handshake error from 10.128.2.12:34016: remote error: tls: unknown certificate
I0601 11:50:44.226011       1 log.go:172] http: TLS handshake error from 10.129.2.10:49550: remote error: tls: unknown certificate
I0601 11:50:44.228545       1 log.go:172] http: TLS handshake error from 10.129.2.10:49548: remote error: tls: unknown certificate
I0601 11:50:44.264979       1 log.go:172] http: TLS handshake error from 10.129.2.10:49552: remote error: tls: unknown certificate

$ oc logs -c proxy elasticsearch-cdm-4ojr1ygf-1-5b894f774f-n4p4s
2020/06/01 12:22:00 http: TLS handshake error from 10.128.2.18:56356: remote error: tls: bad certificate
2020/06/01 12:22:11 http: TLS handshake error from 10.129.2.23:33114: remote error: tls: bad certificate
2020/06/01 12:22:30 http: TLS handshake error from 10.128.2.18:57126: remote error: tls: bad certificate
2020/06/01 12:22:41 http: TLS handshake error from 10.129.2.23:33942: remote error: tls: bad certificate
2020/06/01 12:23:00 http: TLS handshake error from 10.128.2.18:57918: remote error: tls: bad certificate
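The handshake failures above usually mean a proxy is presenting a certificate its peer no longer trusts. One way to see what is actually being served is to dump the certificate from the kibana-proxy secret; the secret and key names here are assumptions, so list the secret's data keys first if they differ in your cluster:

$ # Show the served certificate's issuer, subject, and validity window
$ oc get secret kibana-proxy -n openshift-logging -o jsonpath='{.data.server-cert}' | base64 -d | openssl x509 -noout -issuer -subject -dates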
Created attachment 1694222 [details] kibana pop up
(In reply to Anping Li from comment #0)
> Steps to Reproduce:
> 1. Deploy clusterlogging 4.4 on OCP 4.4
> 2. Upgrade OCP to 4.5
> 3. Upgrade clusterlogging to 4.5, working around bug 1841832 by deleting
> the 'networkpolicy' objects and the '.kibana' indices

Why are you deleting the .kibana index? The expectation is that the operator should be upgrading Kibana, including its backing index, as addressed by [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1835903
Yes, we don't need to delete the .kibana indices. But even when we keep the .kibana indices, we still hit this issue. I now believe this issue is 100% reproducible (a check like the one below can confirm the indices were kept).
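To confirm the .kibana indices really survived the upgrade before reproducing, something like this can be used (same es_util assumption as in the sketch above):

$ ES_POD=$(oc get pods -n openshift-logging -l component=elasticsearch -o name | head -n 1)
$ oc exec -n openshift-logging -c elasticsearch "${ES_POD}" -- es_util --query="_cat/indices/.kibana*?v"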
I believe this is a symptom of our upgrade from 5.x to 6.x and the SearchGuard -> Open Distro migration. It should be addressed by https://github.com/openshift/elasticsearch-operator/pull/367
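If that is the cause, the migration state is checkable after the upgrade. From Kibana 6.5 on, the saved-objects migration leaves a versioned index (e.g. .kibana_1) behind a .kibana alias, and Open Distro keeps its security config in .opendistro_security rather than SearchGuard's index; treating both as assumptions about this deployment, a quick check would be:

$ ES_POD=$(oc get pods -n openshift-logging -l component=elasticsearch -o name | head -n 1)
$ # Expect a versioned .kibana_N index behind the .kibana alias after migration
$ oc exec -n openshift-logging -c elasticsearch "${ES_POD}" -- es_util --query="_cat/aliases/.kibana?v"
$ # Expect the Open Distro security index to exist after the plugin swap
$ oc exec -n openshift-logging -c elasticsearch "${ES_POD}" -- es_util --query="_cat/indices/.opendistro_security?v"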
It is not fixed by https://github.com/openshift/elasticsearch-operator/pull/367. Reproduced after upgrading from 4.4 to 4.6.
After merging https://github.com/openshift/elasticsearch-operator/pull/384, I have verified that I am able to log in to Kibana after upgrading from 4.4 to 4.5.
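As a quick smoke test, an unauthenticated request to the Kibana route should now be redirected to the OAuth login flow instead of returning the 401 above ('kibana' is the usual route name in openshift-logging; verify it in your cluster):

$ KIBANA_HOST=$(oc get route kibana -n openshift-logging -o jsonpath='{.spec.host}')
$ # Expect an HTTP redirect to the login page, not 401
$ curl -kIs "https://${KIBANA_HOST}" | head -n 1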
Verified.

# oc get csv -o name
clusterserviceversion.operators.coreos.com/clusterlogging.4.5.0-202006090812
clusterserviceversion.operators.coreos.com/elasticsearch-operator.4.5.0-202006091957
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409