Bug 1781492 - Kibana is not updated after secrets are regenerated.
Summary: Kibana is not updated after secrets are regenerated.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.4.0
Assignee: ewolinet
QA Contact: Qiaoling Tang
URL:
Whiteboard:
Depends On:
Blocks: 1822083 1822152
 
Reported: 2019-12-10 07:02 UTC by Qiaoling Tang
Modified: 2020-07-31 21:12 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1852639
Environment:
Last Closed: 2020-05-04 11:19:30 UTC
Target Upstream Version:
Embargoed:


Attachments


Links:
  System ID: Github openshift cluster-logging-operator pull 351
  Status: closed
  Summary: Bug 1781492: Updating so Kibana properly handles cert redeploys
  Last Updated: 2021-01-18 06:09:10 UTC

  System ID: Red Hat Product Errata RHBA-2020:0581
  Last Updated: 2020-05-04 11:20:10 UTC

Description Qiaoling Tang 2019-12-10 07:02:55 UTC
Description of problem:

Deploy the logging stack via the operator and make sure it works well. Then scale the CLO down to 0, delete the master-certs secret, and scale the CLO back up to 1. After this, the master-certs secret is regenerated and the curator, fluentd, kibana, and elasticsearch secrets are updated, but the Kibana pod is not redeployed. Logging into the Kibana console shows the status is RED.

Logs in the kibana pod:
{"type":"log","@timestamp":"2019-12-10T06:48:00Z","tags":["warning","elasticsearch","admin"],"pid":227,"message":"Unable to revive connection: https://elasticsearch.openshift-logging.svc.cluster.local:9200/"}
{"type":"log","@timestamp":"2019-12-10T06:48:00Z","tags":["warning","elasticsearch","admin"],"pid":227,"message":"No living connections"}
{"type":"log","@timestamp":"2019-12-10T06:48:02Z","tags":["warning","elasticsearch","admin"],"pid":227,"message":"Unable to revive connection: https://elasticsearch.openshift-logging.svc.cluster.local:9200/"}
{"type":"log","@timestamp":"2019-12-10T06:48:02Z","tags":["warning","elasticsearch","admin"],"pid":227,"message":"No living connections"}

Then delete the Kibana pod and wait until the new Kibana pod starts; accessing the Kibana console via a browser returns the error:
Application is not available

The application is currently not serving requests at this endpoint. It may not have been started or is still starting.

Deleting route/kibana and waiting for the CLO to create a new Kibana route resolves this problem.
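For reference, the workaround maps to commands along these lines (a sketch assuming the default openshift-logging namespace):

$ oc -n openshift-logging delete route kibana
# The cluster-logging-operator recreates the route on its next reconcile; reload the console afterwards.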


Version-Release number of selected component (if applicable):
ose-cluster-logging-operator-v4.3.0-201912091517

How reproducible:
Always

Steps to Reproduce:
1. Deploy clusterlogging
2. Scale the CLO replicas down to 0
3. Delete secret/master-certs
4. Scale the CLO back up to 1
5. Wait until all pods are running, then log into the Kibana console (see the command sketch below)
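The same steps expressed as oc commands; a minimal sketch assuming the default openshift-logging namespace and the cluster-logging-operator deployment name used in 4.x:

$ oc -n openshift-logging scale deployment/cluster-logging-operator --replicas=0
$ oc -n openshift-logging delete secret master-certs
$ oc -n openshift-logging scale deployment/cluster-logging-operator --replicas=1
$ oc -n openshift-logging get pods -w    # wait until all pods are Running, then open the Kibana route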

Actual results:


Expected results:


Additional info:

Comment 2 Qiaoling Tang 2020-02-06 06:28:52 UTC
Verified with ose-cluster-logging-operator-v4.4.0-202002050701

Comment 4 errata-xmlrpc 2020-05-04 11:19:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581

