Bug 1842452 - Kibana returns 401 after upgrading from 4.4 to 4.5
Summary: Kibana returns 401 after upgrading from 4.4 to 4.5
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: 4.5.0
Assignee: Vimal Kumar
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-06-01 09:50 UTC by Anping Li
Modified: 2020-07-13 17:43 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-07-13 17:42:47 UTC
Target Upstream Version:
Embargoed:


Attachments
Kibana Popup Windows (32.62 KB, image/png)
2020-06-01 09:51 UTC, Anping Li
kibana pop up (42.99 KB, image/png)
2020-06-02 01:18 UTC, Anping Li


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2020:2409 0 None None None 2020-07-13 17:43:07 UTC

Description Anping Li 2020-06-01 09:50:52 UTC
Description of problem:
After upgrading Logging to 4.5, users can no longer log in to Kibana. It pops up the message below.
{"statusCode":401,"error":"Unauthorized","message":"Authentication Exception"}



Version-Release number of selected component (if applicable):
4.5.0

How reproducible:
twice

Steps to Reproduce:
1. Deploy clusterlogging 4.4 on OCP 4.4
2. Upgrade OCP to 4.5
3. Upgrade Clusterlogging to 4.5, working around bug 1841832 by deleting the 'networkpolicy' objects and the '.kibana' indices (see the sketch after this list)
4. Log in to Kibana
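
A sketch of the workaround in step 3 (the namespace and exact object names are assumptions; the precise objects to delete come from bug 1841832):

# Delete the networkpolicy objects left behind by the old release (names assumed)
$ oc delete networkpolicy --all -n openshift-logging
# Delete the .kibana indices from inside an Elasticsearch pod; es_util is the
# query helper shipped in the Elasticsearch image
$ oc exec -n openshift-logging -c elasticsearch <es-pod-name> -- es_util --query='.kibana*' -XDELETE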

Actual results:
The user couldn't log in to Kibana. Kibana pops up a login window (snapshot attached).


Expected results:


Additional info:

Comment 1 Anping Li 2020-06-01 09:51:23 UTC
Created attachment 1694060 [details]
Kibana Popup Windows

Comment 2 Anping Li 2020-06-01 12:25:14 UTC
All components appear to work well, but there are TLS errors in the oauth pod and the elasticsearch pod.

$ oc get co
NAME                                       VERSION                             AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
cloud-credential                           4.5.0-0.nightly-2020-05-30-025738   True        False         False      22h
cluster-autoscaler                         4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
config-operator                            4.5.0-0.nightly-2020-05-30-025738   True        False         False      7h23m
console                                    4.5.0-0.nightly-2020-05-30-025738   True        False         False      6h52m
csi-snapshot-controller                    4.5.0-0.nightly-2020-05-30-025738   True        False         False      6h56m
dns                                        4.5.0-0.nightly-2020-05-30-025738   True        False         False      22h
etcd                                       4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
image-registry                             4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
ingress                                    4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
insights                                   4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
kube-apiserver                             4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
kube-controller-manager                    4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
kube-scheduler                             4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
kube-storage-version-migrator              4.5.0-0.nightly-2020-05-30-025738   True        False         False      6h59m
machine-api                                4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
machine-approver                           4.5.0-0.nightly-2020-05-30-025738   True        False         False      7h19m
machine-config                             4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
marketplace                                4.5.0-0.nightly-2020-05-30-025738   True        False         False      6h55m
monitoring                                 4.5.0-0.nightly-2020-05-30-025738   True        False         False      7h16m
network                                    4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
node-tuning                                4.5.0-0.nightly-2020-05-30-025738   True        False         False      7h19m
openshift-apiserver                        4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
openshift-controller-manager               4.5.0-0.nightly-2020-05-30-025738   True        False         False      7h16m
openshift-samples                          4.5.0-0.nightly-2020-05-30-025738   True        False         False      7h18m
operator-lifecycle-manager                 4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
operator-lifecycle-manager-catalog         4.5.0-0.nightly-2020-05-30-025738   True        False         False      21h
operator-lifecycle-manager-packageserver   4.5.0-0.nightly-2020-05-30-025738   True        False         False      6h51m
service-ca                                 4.5.0-0.nightly-2020-05-30-025738   True        False         False      22h
storage                                    4.5.0-0.nightly-2020-05-30-025738   True        False         False      7h19m

#oc logs oauth-openshift-9c8c46d64-xscv5
I0601 11:49:17.210422       1 log.go:172] http: TLS handshake error from 10.128.2.12:33798: remote error: tls: unknown certificate
I0601 11:49:17.339308       1 log.go:172] http: TLS handshake error from 10.128.2.12:33806: remote error: tls: unknown certificate
I0601 11:49:20.743246       1 log.go:172] http: TLS handshake error from 10.128.2.12:33900: remote error: tls: unknown certificate
I0601 11:49:20.745711       1 log.go:172] http: TLS handshake error from 10.128.2.12:33902: remote error: tls: unknown certificate
I0601 11:49:21.311074       1 log.go:172] http: TLS handshake error from 10.128.2.12:33910: EOF
I0601 11:49:21.396607       1 log.go:172] http: TLS handshake error from 10.128.2.12:33916: EOF
I0601 11:49:24.498083       1 log.go:172] http: TLS handshake error from 10.128.2.12:34016: remote error: tls: unknown certificate
I0601 11:50:44.226011       1 log.go:172] http: TLS handshake error from 10.129.2.10:49550: remote error: tls: unknown certificate
I0601 11:50:44.228545       1 log.go:172] http: TLS handshake error from 10.129.2.10:49548: remote error: tls: unknown certificate
I0601 11:50:44.264979       1 log.go:172] http: TLS handshake error from 10.129.2.10:49552: remote error: tls: unknown certificate

$ oc logs -c proxy elasticsearch-cdm-4ojr1ygf-1-5b894f774f-n4p4s
2020/06/01 12:22:00 http: TLS handshake error from 10.128.2.18:56356: remote error: tls: bad certificate
2020/06/01 12:22:11 http: TLS handshake error from 10.129.2.23:33114: remote error: tls: bad certificate
2020/06/01 12:22:30 http: TLS handshake error from 10.128.2.18:57126: remote error: tls: bad certificate
2020/06/01 12:22:41 http: TLS handshake error from 10.129.2.23:33942: remote error: tls: bad certificate
2020/06/01 12:23:00 http: TLS handshake error from 10.128.2.18:57918: remote error: tls: bad certificate
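
The "unknown certificate"/"bad certificate" errors suggest a component is still presenting pre-upgrade certificates. One way to check is to compare certificate dates against the upgrade time (a sketch; the secret name 'kibana-proxy' and its 'cert' key are assumptions based on a default openshift-logging deployment):

# List the logging secrets with their creation times (stale certs often predate the upgrade)
$ oc get secrets -n openshift-logging -o custom-columns=NAME:.metadata.name,CREATED:.metadata.creationTimestamp
# Decode the certificate carried by the Kibana proxy secret and inspect its validity window
$ oc get secret kibana-proxy -n openshift-logging -o jsonpath='{.data.cert}' | base64 -d | openssl x509 -noout -dates -subject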

Comment 4 Anping Li 2020-06-02 01:18:42 UTC
Created attachment 1694222 [details]
kibana pop up

Comment 5 Jeff Cantrill 2020-06-02 12:49:54 UTC
(In reply to Anping Li from comment #0)
> Steps to Reproduce:
> 1. Deploy clusterlogging 4.4 on OCP 4.4
> 2. Upgrade OCP to 4.5
> 3. Upgrade Clusterlogging to 4.5, working around bug 1841832 by deleting
> the 'networkpolicy' objects and the '.kibana' indices

Why are you deleting the .kibana index? The expectation is that the operator should be upgrading Kibana, including its backing index, as addressed by [1]

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1835903

Comment 6 Anping Li 2020-06-03 05:10:29 UTC
Yes, we needn't delete the .kibana indices. But even when we keep the .kibana indices, we still hit this issue. I now believe it is 100% reproducible.

Comment 7 ewolinet 2020-06-03 18:39:30 UTC
I believe this is a symptom of our upgrade from Elasticsearch 5.x to 6.x and the move from SearchGuard to Open Distro.
It should be addressed by https://github.com/openshift/elasticsearch-operator/pull/367
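
One way to confirm what the security migration left behind is to look at the installed plugins and the security indices from inside an Elasticsearch pod (a sketch; es_util is the query helper shipped in the Elasticsearch image, and the grep patterns are assumptions):

# Check whether opendistro replaced searchguard in the plugin list
$ oc exec -n openshift-logging -c elasticsearch <es-pod-name> -- es_util --query='_cat/plugins?v'
# Look for leftover searchguard vs. opendistro security indices
$ oc exec -n openshift-logging -c elasticsearch <es-pod-name> -- es_util --query='_cat/indices?v' | grep -Ei 'searchguard|security'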

Comment 8 Anping Li 2020-06-09 08:49:57 UTC
It is not fixed by https://github.com/openshift/elasticsearch-operator/pull/367. Reproduced after upgrading from 4.4 to 4.6.

Comment 9 Vimal Kumar 2020-06-09 22:09:19 UTC
After merging https://github.com/openshift/elasticsearch-operator/pull/384, I have verified I am able to log in to Kibana after upgrading from 4.4 to 4.5.

Comment 10 Anping Li 2020-06-10 06:40:04 UTC
Verified
#oc get csv -o name
clusterserviceversion.operators.coreos.com/clusterlogging.4.5.0-202006090812
clusterserviceversion.operators.coreos.com/elasticsearch-operator.4.5.0-202006091957

Comment 12 errata-xmlrpc 2020-07-13 17:42:47 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

