Bug 1781492

Summary: Kibana is not updated after secrets are regenerated.
Product: OpenShift Container Platform
Component: Logging
Version: 4.3.0
Target Release: 4.4.0
Reporter: Qiaoling Tang <qitang>
Assignee: ewolinet
QA Contact: Qiaoling Tang <qitang>
CC: aos-bugs, ewolinet
Status: CLOSED ERRATA
Severity: medium
Priority: unspecified
Hardware: Unspecified
OS: Unspecified
Clones: 1852639
Bug Blocks: 1822083, 1822152
Last Closed: 2020-05-04 11:19:30 UTC
Type: Bug

Description Qiaoling Tang 2019-12-10 07:02:55 UTC
Description of problem:

Deploy the logging stack via the operator and confirm that it works. Then scale the CLO (cluster-logging-operator) down to 0, delete the master-certs secret, and scale the CLO back up to 1. After doing this, the master-certs secret is regenerated and the curator, fluentd, kibana, and elasticsearch secrets are updated, but the kibana pod is not redeployed. After logging into the Kibana console, the status is RED.

Logs in the kibana pod:
{"type":"log","@timestamp":"2019-12-10T06:48:00Z","tags":["warning","elasticsearch","admin"],"pid":227,"message":"Unable to revive connection: https://elasticsearch.openshift-logging.svc.cluster.local:9200/"}
{"type":"log","@timestamp":"2019-12-10T06:48:00Z","tags":["warning","elasticsearch","admin"],"pid":227,"message":"No living connections"}
{"type":"log","@timestamp":"2019-12-10T06:48:02Z","tags":["warning","elasticsearch","admin"],"pid":227,"message":"Unable to revive connection: https://elasticsearch.openshift-logging.svc.cluster.local:9200/"}
{"type":"log","@timestamp":"2019-12-10T06:48:02Z","tags":["warning","elasticsearch","admin"],"pid":227,"message":"No living connections"}

Then delete the kibana pod, wait until the new kibana pod starts, and try to access the Kibana console via a browser. This returns an error:
Application is not available

The application is currently not serving requests at this endpoint. It may not have been started or is still starting.

Deleting route/kibana and then waiting for the CLO to create a new Kibana route resolves this problem.
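The workaround above can be sketched as the following shell commands; the `openshift-logging` namespace and the `kibana` route name are assumptions based on the default cluster-logging deployment, and the script skips itself when the `oc` CLI is not available:

```shell
#!/bin/sh
# Hypothetical workaround sketch: delete the Kibana route and wait for the
# cluster-logging-operator to recreate it. Assumes `oc` is logged in to the
# affected cluster and the default openshift-logging namespace is used.
NS=openshift-logging
if command -v oc >/dev/null 2>&1; then
  # Delete the stale route; the CLO is expected to recreate it.
  oc -n "$NS" delete route kibana
  # Poll until the operator has recreated the route.
  until oc -n "$NS" get route kibana >/dev/null 2>&1; do
    sleep 5
  done
  # Print the (new) route host for accessing the console.
  oc -n "$NS" get route kibana -o jsonpath='{.spec.host}'
else
  echo "oc CLI not available; skipping"
fi
```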


Version-Release number of selected component (if applicable):
ose-cluster-logging-operator-v4.3.0-201912091517

How reproducible:
Always

Steps to Reproduce:
1. Deploy clusterlogging
2. Scale down CLO replicas to 0
3. Delete secret/master-certs
4. Scale up CLO
5. Wait until all pods are running, then log into the Kibana console
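Steps 2-4 above can be sketched with the `oc` CLI; the `openshift-logging` namespace and the `cluster-logging-operator` deployment name are assumptions based on a default operator install, and the script skips itself when `oc` is not available:

```shell
#!/bin/sh
# Hypothetical reproduction sketch for steps 2-4. Assumes clusterlogging is
# already deployed (step 1) in the default openshift-logging namespace.
NS=openshift-logging
if command -v oc >/dev/null 2>&1; then
  # Step 2: scale the cluster-logging-operator down so it cannot reconcile.
  oc -n "$NS" scale deployment/cluster-logging-operator --replicas=0
  # Step 3: delete the master-certs secret.
  oc -n "$NS" delete secret master-certs
  # Step 4: scale the operator back up; it regenerates master-certs.
  oc -n "$NS" scale deployment/cluster-logging-operator --replicas=1
  # Step 5: check pod status, then log into the Kibana console.
  oc -n "$NS" get pods
else
  echo "oc CLI not available; skipping"
fi
```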

Actual results:
The kibana pod is not redeployed after the master-certs secret is regenerated; the Kibana console status is RED, and after deleting the pod the route returns "Application is not available".

Expected results:
The kibana pod is redeployed with the regenerated secrets and the Kibana console remains accessible.

Additional info:

Comment 2 Qiaoling Tang 2020-02-06 06:28:52 UTC
Verified with ose-cluster-logging-operator-v4.4.0-202002050701

Comment 4 errata-xmlrpc 2020-05-04 11:19:30 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0581