Bug 1690255 - Kibana pod deployed twice when deploying logging via operators
Summary: Kibana pod deployed twice when deploying logging via operators
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.1.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: 4.1.0
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-03-19 07:19 UTC by Qiaoling Tang
Modified: 2019-06-04 10:46 UTC (History)
4 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:46:06 UTC
Target Upstream Version:




Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 None None None 2019-06-04 10:46:15 UTC

Description Qiaoling Tang 2019-03-19 07:19:57 UTC
Description of problem:

Deploy logging via community-operators, then check the pods in the "openshift-logging" namespace. The kibana pod is created and all of its containers become ready; after about 2 minutes, the first kibana pod is killed and another kibana pod is created. Checking the CLO logs, the kibana pod logs, and the events in the "openshift-logging" namespace shows no error messages.

I also checked different OCP environments with different payloads. This issue always happens, and every time it is reproduced, the first kibana RS has the same name, "kibana-7fb4fd4cc9".

$ oc get pod
NAME                                                  READY   STATUS        RESTARTS   AGE
cluster-logging-operator-56658767fc-sbkzz             1/1     Running       0          5m27s
elasticsearch-clientdatamaster-0-1-6bd45568f8-x5pgd   2/2     Running       0          2m21s
elasticsearch-clientdatamaster-0-2-6f4bcbc549-nvgq6   2/2     Running       0          2m21s
fluentd-h6rd8                                         1/1     Running       0          2m15s
fluentd-hdz96                                         1/1     Running       0          2m15s
fluentd-pgwg4                                         1/1     Running       0          2m15s
fluentd-qm8gv                                         1/1     Running       0          2m15s
fluentd-xjm7m                                         1/1     Running       0          2m15s
fluentd-xnztg                                         1/1     Running       0          2m15s
kibana-6f94486567-qsq48                               2/2     Running       0          20s
kibana-7fb4fd4cc9-4k9n5                               2/2     Terminating   0          2m20s
$ oc get rs
NAME                                            DESIRED   CURRENT   READY   AGE
cluster-logging-operator-56658767fc             1         1         1       5m34s
elasticsearch-clientdatamaster-0-1-6bd45568f8   1         1         1       2m28s
elasticsearch-clientdatamaster-0-2-6f4bcbc549   1         1         1       2m28s
kibana-6f94486567                               1         1         1       27s
kibana-7fb4fd4cc9                               0         0         0       2m27s

Comparing the two kibana RS, the differences are as follows:
$ diff kibana-7fb4fd4cc9 kibana-6f94486567 
7,9c7,9
<     deployment.kubernetes.io/revision: "1"
<   creationTimestamp: 2019-03-19T06:40:20Z
<   generation: 2
---
>     deployment.kubernetes.io/revision: "2"
>   creationTimestamp: 2019-03-19T06:40:48Z
>   generation: 1
13c13
<     pod-template-hash: 7fb4fd4cc9
---
>     pod-template-hash: 6f94486567
15c15
<   name: kibana-7fb4fd4cc9
---
>   name: kibana-6f94486567
24,26c24,26
<   resourceVersion: "419407"
<   selfLink: /apis/extensions/v1beta1/namespaces/openshift-logging/replicasets/kibana-7fb4fd4cc9
<   uid: dd9888e3-4a11-11e9-bc8d-0e1b3aeabb72
---
>   resourceVersion: "419398"
>   selfLink: /apis/extensions/v1beta1/namespaces/openshift-logging/replicasets/kibana-6f94486567
>   uid: ee30748f-4a11-11e9-bc8d-0e1b3aeabb72
28c28
<   replicas: 0
---
>   replicas: 1
33c33
<       pod-template-hash: 7fb4fd4cc9
---
>       pod-template-hash: 6f94486567
36a37,40
>       annotations:
>         olm.operatorGroup: openshift-logging-7jds7
>         olm.operatorNamespace: openshift-logging
>         olm.targetNamespaces: openshift-logging
41c45
<         pod-template-hash: 7fb4fd4cc9
---
>         pod-template-hash: 6f94486567
149,150c153,157
<   observedGeneration: 2
<   replicas: 0
---
>   availableReplicas: 1
>   fullyLabeledReplicas: 1
>   observedGeneration: 1
>   readyReplicas: 1
>   replicas: 1
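The only pod-template-level difference in the diff above is the OLM-injected annotations (olm.operatorGroup etc.). Since the pod-template-hash that names an RS is derived from the entire pod template, any such change produces a new hash and therefore a new ReplicaSet. A minimal sketch of that behavior (a simplified illustration only: the real kube-controller-manager uses an FNV-1a hash of the serialized PodTemplateSpec, and the template fields here are trimmed stand-ins):

```python
import hashlib
import json

def template_hash(pod_template: dict) -> str:
    """Illustrative stand-in for Kubernetes' pod-template-hash:
    any change to the pod template (labels, annotations, spec)
    yields a different hash, hence a differently named ReplicaSet.
    (The real controller uses FNV-1a, not SHA-1.)"""
    serialized = json.dumps(pod_template, sort_keys=True)
    return hashlib.sha1(serialized.encode()).hexdigest()[:10]

# Trimmed approximation of the original kibana pod template.
base = {
    "metadata": {"labels": {"component": "kibana"}},
    "spec": {"containers": [{"name": "kibana"}]},
}

# Same template plus an OLM-injected annotation, as seen in the diff.
annotated = {
    "metadata": {
        "labels": {"component": "kibana"},
        "annotations": {"olm.operatorNamespace": "openshift-logging"},
    },
    "spec": {"containers": [{"name": "kibana"}]},
}

# The annotated template hashes differently, so a second RS is created
# and the Deployment rolls the first one down to 0 replicas.
print(template_hash(base) != template_hash(annotated))  # -> True
```

This matches the events below: the Deployment scales up kibana-7fb4fd4cc9, then OLM's annotation injection changes the template, and the Deployment scales up a second RS and scales the first to 0.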

$ oc get events |grep kibana
2m14s       Normal    Scheduled                pod/kibana-6f94486567-qsq48                                              Successfully assigned openshift-logging/kibana-6f94486567-qsq48 to ip-10-0-151-238.ap-northeast-1.compute.internal
2m6s        Normal    Pulled                   pod/kibana-6f94486567-qsq48                                              Container image "quay.io/openshift/origin-logging-kibana5:latest" already present on machine
2m5s        Normal    Created                  pod/kibana-6f94486567-qsq48                                              Created container kibana
2m5s        Normal    Started                  pod/kibana-6f94486567-qsq48                                              Started container kibana
2m5s        Normal    Pulled                   pod/kibana-6f94486567-qsq48                                              Container image "quay.io/openshift/origin-oauth-proxy:latest" already present on machine
2m5s        Normal    Created                  pod/kibana-6f94486567-qsq48                                              Created container kibana-proxy
2m5s        Normal    Started                  pod/kibana-6f94486567-qsq48                                              Started container kibana-proxy
6m36s       Normal    Scheduled                pod/kibana-6f94486567-vr8vc                                              Successfully assigned openshift-logging/kibana-6f94486567-vr8vc to ip-10-0-161-160.ap-northeast-1.compute.internal
6m28s       Normal    Pulling                  pod/kibana-6f94486567-vr8vc                                              Pulling image "quay.io/openshift/origin-logging-kibana5:latest"
5m47s       Normal    Pulled                   pod/kibana-6f94486567-vr8vc                                              Successfully pulled image "quay.io/openshift/origin-logging-kibana5:latest"
5m47s       Normal    Created                  pod/kibana-6f94486567-vr8vc                                              Created container kibana
5m47s       Normal    Started                  pod/kibana-6f94486567-vr8vc                                              Started container kibana
5m47s       Normal    Pulling                  pod/kibana-6f94486567-vr8vc                                              Pulling image "quay.io/openshift/origin-oauth-proxy:latest"
5m24s       Normal    Pulled                   pod/kibana-6f94486567-vr8vc                                              Successfully pulled image "quay.io/openshift/origin-oauth-proxy:latest"
5m24s       Normal    Created                  pod/kibana-6f94486567-vr8vc                                              Created container kibana-proxy
5m24s       Normal    Started                  pod/kibana-6f94486567-vr8vc                                              Started container kibana-proxy
5m16s       Normal    Killing                  pod/kibana-6f94486567-vr8vc                                              Stopping container kibana
5m16s       Normal    Killing                  pod/kibana-6f94486567-vr8vc                                              Stopping container kibana-proxy
6m36s       Normal    SuccessfulCreate         replicaset/kibana-6f94486567                                             Created pod: kibana-6f94486567-vr8vc
2m14s       Normal    SuccessfulCreate         replicaset/kibana-6f94486567                                             Created pod: kibana-6f94486567-qsq48
4m14s       Normal    Scheduled                pod/kibana-7fb4fd4cc9-4k9n5                                              Successfully assigned openshift-logging/kibana-7fb4fd4cc9-4k9n5 to ip-10-0-161-160.ap-northeast-1.compute.internal
4m7s        Normal    Pulled                   pod/kibana-7fb4fd4cc9-4k9n5                                              Container image "quay.io/openshift/origin-logging-kibana5:latest" already present on machine
4m6s        Normal    Created                  pod/kibana-7fb4fd4cc9-4k9n5                                              Created container kibana
4m6s        Normal    Started                  pod/kibana-7fb4fd4cc9-4k9n5                                              Started container kibana
4m6s        Normal    Pulled                   pod/kibana-7fb4fd4cc9-4k9n5                                              Container image "quay.io/openshift/origin-oauth-proxy:latest" already present on machine
4m6s        Normal    Created                  pod/kibana-7fb4fd4cc9-4k9n5                                              Created container kibana-proxy
4m6s        Normal    Started                  pod/kibana-7fb4fd4cc9-4k9n5                                              Started container kibana-proxy
118s        Normal    Killing                  pod/kibana-7fb4fd4cc9-4k9n5                                              Stopping container kibana
118s        Normal    Killing                  pod/kibana-7fb4fd4cc9-4k9n5                                              Stopping container kibana-proxy
87s         Warning   Unhealthy                pod/kibana-7fb4fd4cc9-4k9n5                                              Readiness probe errored: rpc error: code = Unknown desc = container is not created or running
6m37s       Normal    Scheduled                pod/kibana-7fb4fd4cc9-72dsw                                              Successfully assigned openshift-logging/kibana-7fb4fd4cc9-72dsw to ip-10-0-151-238.ap-northeast-1.compute.internal
6m30s       Normal    Pulling                  pod/kibana-7fb4fd4cc9-72dsw                                              Pulling image "quay.io/openshift/origin-logging-kibana5:latest"
5m52s       Normal    Pulled                   pod/kibana-7fb4fd4cc9-72dsw                                              Successfully pulled image "quay.io/openshift/origin-logging-kibana5:latest"
5m52s       Normal    Created                  pod/kibana-7fb4fd4cc9-72dsw                                              Created container kibana
5m52s       Normal    Started                  pod/kibana-7fb4fd4cc9-72dsw                                              Started container kibana
5m52s       Normal    Pulling                  pod/kibana-7fb4fd4cc9-72dsw                                              Pulling image "quay.io/openshift/origin-oauth-proxy:latest"
5m24s       Normal    Pulled                   pod/kibana-7fb4fd4cc9-72dsw                                              Successfully pulled image "quay.io/openshift/origin-oauth-proxy:latest"
5m24s       Normal    Created                  pod/kibana-7fb4fd4cc9-72dsw                                              Created container kibana-proxy
5m24s       Normal    Started                  pod/kibana-7fb4fd4cc9-72dsw                                              Started container kibana-proxy
5m21s       Normal    Killing                  pod/kibana-7fb4fd4cc9-72dsw                                              Stopping container kibana
5m21s       Normal    Killing                  pod/kibana-7fb4fd4cc9-72dsw                                              Stopping container kibana-proxy
4m51s       Warning   Unhealthy                pod/kibana-7fb4fd4cc9-72dsw                                              Readiness probe errored: rpc error: code = Unknown desc = container is not created or running
6m37s       Normal    SuccessfulCreate         replicaset/kibana-7fb4fd4cc9                                             Created pod: kibana-7fb4fd4cc9-72dsw
5m21s       Normal    SuccessfulDelete         replicaset/kibana-7fb4fd4cc9                                             Deleted pod: kibana-7fb4fd4cc9-72dsw
4m14s       Normal    SuccessfulCreate         replicaset/kibana-7fb4fd4cc9                                             Created pod: kibana-7fb4fd4cc9-4k9n5
118s        Normal    SuccessfulDelete         replicaset/kibana-7fb4fd4cc9                                             Deleted pod: kibana-7fb4fd4cc9-4k9n5
6m37s       Normal    ScalingReplicaSet        deployment/kibana                                                        Scaled up replica set kibana-7fb4fd4cc9 to 1
6m36s       Normal    ScalingReplicaSet        deployment/kibana                                                        Scaled up replica set kibana-6f94486567 to 1
5m21s       Normal    ScalingReplicaSet        deployment/kibana                                                        Scaled down replica set kibana-7fb4fd4cc9 to 0
4m14s       Normal    ScalingReplicaSet        deployment/kibana                                                        Scaled up replica set kibana-7fb4fd4cc9 to 1
2m14s       Normal    ScalingReplicaSet        deployment/kibana                                                        Scaled up replica set kibana-6f94486567 to 1
118s        Normal    ScalingReplicaSet        deployment/kibana                                                        Scaled down replica set kibana-7fb4fd4cc9 to 0

Version-Release number of selected component (if applicable):
4.0.0-0.nightly-2019-03-18-200009

How reproducible:
Always

Steps to Reproduce:
1. Deploy logging via community-operators
2. check pod in "openshift-logging" namespace

Actual results:
Two kibana pods are deployed in succession; the first is terminated after about 2 minutes and replaced by a pod from a new RS.

Expected results:
Only one kibana pod is deployed; no second rollout occurs.

Additional info:

Comment 1 ewolinet 2019-03-19 18:25:26 UTC
I don't believe this is a bug. If you look, k8s is rolling out an updated RS (indicated by the generation).
If we saw this happen every X minutes, to the point where the generation keeps climbing, I would think there is an issue.
I am unable to recreate this by leaving the pods running or by annotating the deployment/kibana object.

Comment 2 Qiaoling Tang 2019-03-21 06:08:08 UTC
Got it, thanks.

Comment 3 Anping Li 2019-03-21 12:05:59 UTC
I hit a similar issue. The interesting part is that the first RS is always 7fb4fd4cc9. @yinzhou, could you help us confirm?
oc get pods
kibana-7fb4fd4cc9-7n8rp                              2/2     Terminating   0          64s
kibana-c9fc7b6-fgq6w                                 2/2     Running       0          40s

oc get rs
kibana-7fb4fd4cc9                              0         0         0       3m30s
kibana-c9fc7b6                                 1         1         1       3m6s

Comment 4 zhou ying 2019-03-22 01:02:49 UTC
An update to the pod template triggers a new rollout and a new RS.

Comment 6 errata-xmlrpc 2019-06-04 10:46:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

