Bug 1667430 - Deleting clusterlogging cr does not remove elasticsearch-clientdatamaster deployment
Summary: Deleting clusterlogging cr does not remove elasticsearch-clientdatamaster deployment
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.1.0
Hardware: x86_64
OS: Linux
Priority: low
Severity: medium
Target Milestone: ---
Target Release: 4.1.0
Assignee: ewolinet
QA Contact: Mike Fiedler
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-01-18 13:15 UTC by Mike Fiedler
Modified: 2019-06-04 10:42 UTC
CC: 2 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-06-04 10:42:06 UTC
Target Upstream Version:


Attachments (Terms of Use)
CLO and ES operator logs after CR deletion (1.39 KB, application/gzip)
2019-01-18 13:16 UTC, Mike Fiedler


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2019:0758 None None None 2019-06-04 10:42:13 UTC

Description Mike Fiedler 2019-01-18 13:15:30 UTC
Description of problem:

Deleting the clusterlogging CR removes the deployments for kibana and fluentd and terminates all pods. It also deletes the elasticsearch CR, but deleting the elasticsearch CR does not delete the elasticsearch-clientdatamaster deployment or its pod.

Version-Release number of selected component (if applicable):

Latest operators built by ART:

registry.reg-aws.openshift.com:443/openshift/ose-cluster-logging-operator   v4.0                d30f397c70ae        33 hours ago        273 MB
registry.reg-aws.openshift.com:443/openshift/ose-elasticsearch-operator     v4.0                b35c5b1efb46        33 hours ago        261 MB

How reproducible: Always


Steps to Reproduce:
1.   Create a clusterlogging CR from the hack/cr.yaml
2.   Verify ES, fluentd and kibana are running successfully
3.   oc delete clusterlogging example

Actual results:

The elasticsearch/elasticsearch CR is deleted, but the deployment and pod remain:

[fedora@ip-172-31-53-199 ~]$ oc delete elasticsearch/elasticsearch clusterlogging/example
elasticsearch.logging.openshift.io "elasticsearch" deleted
clusterlogging.logging.openshift.io "example" deleted
[fedora@ip-172-31-53-199 ~]$ oc get pods
NAME                                                 READY     STATUS        RESTARTS   AGE
cluster-logging-operator-959d488b8-vtlsq             1/1       Running       0          16h
elasticsearch-clientdatamaster-0-1-d4ddc458f-lr67g   1/1       Running       0          16h
elasticsearch-operator-566cf5bb5c-wnh75              1/1       Running       0          16h


[fedora@ip-172-31-53-199 ~]$ oc get deployment
NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
cluster-logging-operator             1         1         1            1           16h
elasticsearch-clientdatamaster-0-1   1         1         1            1           16h
elasticsearch-operator               1         1         1            1           16h



Expected results:

All resources related to the clusterlogging CR are cleaned up upon deletion
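For context on why the deployment can be left behind: Kubernetes garbage collection only cascades a delete to objects whose metadata.ownerReferences point at the deleted owner, so a deployment the operator created without an owner reference back to the elasticsearch CR will survive the CR's deletion. The sketch below simulates that selection logic in plain Python; the object names and UIDs are illustrative, and the real work is done by the cluster's garbage collector, not by operator code like this.

```python
# Minimal simulation of Kubernetes-style cascading deletion via ownerReferences.
# Names and UIDs are illustrative; blockOwnerDeletion and finalizer subtleties
# are ignored for clarity.

def cascade_delete(objects, deleted_owner_uid):
    """Return the objects that survive after their owner is deleted.

    An object is garbage-collected when any entry in its
    metadata.ownerReferences matches the deleted owner's UID.
    """
    survivors = []
    for obj in objects:
        owners = obj.get("metadata", {}).get("ownerReferences", [])
        if any(ref.get("uid") == deleted_owner_uid for ref in owners):
            continue  # collected along with its owner
        survivors.append(obj)
    return survivors

# Buggy state: no ownerReference to the elasticsearch CR (uid "es-cr-uid"),
# so the deployment survives the CR deletion.
deployment = {"metadata": {"name": "elasticsearch-clientdatamaster-0-1",
                           "ownerReferences": []}}
print(cascade_delete([deployment], "es-cr-uid"))
# → [{'metadata': {'name': 'elasticsearch-clientdatamaster-0-1', 'ownerReferences': []}}]

# With an ownerReference in place, the deployment is collected as expected.
deployment_owned = {"metadata": {"name": "elasticsearch-clientdatamaster-0-1",
                                 "ownerReferences": [{"uid": "es-cr-uid"}]}}
print(cascade_delete([deployment_owned], "es-cr-uid"))
# → []
```

This matches the observed behavior above: the CR disappears immediately, while any child object lacking the reference is simply invisible to the garbage collector.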


Additional info:

Operator pod logs will be attached

Comment 1 Mike Fiedler 2019-01-18 13:16:58 UTC
Created attachment 1521516 [details]
CLO and ES operator logs after CR deletion

Comment 3 Mike Fiedler 2019-01-22 19:47:40 UTC
Verified with upstream - waiting for official OCP images.

Comment 4 Mike Fiedler 2019-02-11 16:30:11 UTC
Verified on 4.0.0-0.nightly-2019-02-11-045151. Deleting the clusterlogging CR cascades to the elasticsearch CR.

Comment 7 errata-xmlrpc 2019-06-04 10:42:06 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758

