Bug 1665378

Summary: Cluster-logging-operator should regenerate secrets for containers when master-certs changed.
Product: OpenShift Container Platform
Reporter: Qiaoling Tang <qitang>
Component: Logging
Assignee: ewolinet
Status: CLOSED ERRATA
QA Contact: Anping Li <anli>
Severity: medium
Priority: unspecified
Version: 4.1.0
Target Release: 4.1.0
CC: aos-bugs, ewolinet, jcantril, rmeggins
Hardware: Unspecified
OS: Unspecified
Doc Type: No Doc Update
Type: Bug
Last Closed: 2019-06-04 10:41:49 UTC

Attachments: Secrets (no flags)

Description Qiaoling Tang 2019-01-11 08:13:39 UTC
Created attachment 1519983: Secrets

Description of problem:
The secrets for the logging containers aren't regenerated when master-certs changes.

Deploy logging and make sure it is deployed successfully; there are secrets for the CEFK (Curator, Elasticsearch, Fluentd, Kibana) containers:
$ oc get secrets |grep Opaque
curator                                    Opaque                                6         21m
elasticsearch                              Opaque                                7         21m
fluentd                                    Opaque                                6         21m
kibana                                     Opaque                                3         21m
kibana-proxy                               Opaque                                4         21m
master-certs                               Opaque                                2         21m
Save these secrets to file1.
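A minimal way to take that snapshot (run from the logging project; the secret names are the defaults shown above):
$ oc get secrets curator elasticsearch fluentd kibana kibana-proxy master-certs -o yaml > file1
The -o yaml output includes the base64-encoded data, so a later diff will show any content changes.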

Then scale the CLO deployment down to 0:
$ oc scale deployment cluster-logging-operator --replicas=0
deployment.extensions/cluster-logging-operator scaled

Wait until the CLO pod is deleted, then delete the master-certs secret:
$ oc delete secrets master-certs
secret "master-certs" deleted

Scale the CLO deployment back up to 1, wait for a while, then check the secrets:
$ oc scale deployment cluster-logging-operator --replicas=1
deployment.extensions/cluster-logging-operator scaled
$ oc get secrets |grep Opaque
curator                                    Opaque                                6         39m
elasticsearch                              Opaque                                7         39m
fluentd                                    Opaque                                6         39m
kibana                                     Opaque                                3         39m
kibana-proxy                               Opaque                                4         39m
master-certs                               Opaque                                2         10m

Save these secrets to file2, then compare file1 with file2: the masterkey in master-certs changed, but none of the other secrets changed.
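For reference, the comparison amounts to dumping the same secrets again and diffing the two files:
$ oc get secrets curator elasticsearch fluentd kibana kibana-proxy master-certs -o yaml > file2
$ diff file1 file2
Only the master-certs entry differs between the two dumps, matching the observation above.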


Version-Release number of selected component (if applicable):
$ oc get clusterversion
NAME      VERSION                           AVAILABLE   PROGRESSING   SINCE     STATUS
version   4.0.0-0.alpha-2019-01-10-192359   True        False         5h        Cluster version is 4.0.0-0.alpha-2019-01-10-192359

$ oc get pod cluster-logging-operator-65458bf7d7-cx28l -o yaml |grep image
    image: quay.io/openshift/origin-cluster-logging-operator:latest
    imagePullPolicy: IfNotPresent
  imagePullSecrets:
    image: 4569cf9fe87761107b0b607a15aff65eced0937d2a33c0f9a03f76ea97781575
    imageID: quay.io/openshift/origin-cluster-logging-operator@sha256:5817009a27eb35f836172b7354a95348f12eea0727120c0876a854854568bf40


How reproducible:
Always

Steps to Reproduce:
1. see "Description of problem:" part
2.
3.

Actual results:
After master-certs is deleted and regenerated, the other logging secrets (curator, elasticsearch, fluentd, kibana, kibana-proxy) are not regenerated.

Expected results:
The cluster-logging-operator regenerates the secrets for the containers whenever master-certs changes.


Additional info:
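A lighter-weight way to check whether the dependent secrets get regenerated (instead of diffing full YAML dumps) is to compare their resourceVersion values before and after rotating master-certs, for example:
$ oc get secrets curator elasticsearch fluentd kibana kibana-proxy master-certs \
    -o custom-columns=NAME:.metadata.name,RESOURCEVERSION:.metadata.resourceVersion
Once CLO regenerates a secret, its resourceVersion changes; with the behavior described above, only master-certs changes.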

Comment 3 Qiaoling Tang 2019-01-25 08:19:41 UTC
Verified in quay.io/openshift/origin-cluster-logging-operator@sha256:9be38553042fa32720c4b5237c7eadb974546c9e702a0aa44e3fe089dc21ac8b

Comment 6 errata-xmlrpc 2019-06-04 10:41:49 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758