Bug 1910259
| Summary: | Missing Logging/Elasticsearch dashboard | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Qiaoling Tang <qitang> |
| Component: | Logging | Assignee: | Hui Kang <hkang> |
| Status: | CLOSED ERRATA | QA Contact: | Qiaoling Tang <qitang> |
| Severity: | medium | Docs Contact: | Rolfe Dlugy-Hegwer <rdlugyhe> |
| Priority: | unspecified | | |
| Version: | 4.7 | CC: | anli, aos-bugs, hkang, mmohan, periklis, rdlugyhe |
| Target Milestone: | --- | Keywords: | Regression |
| Target Release: | 4.7.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | logging-exploration | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | Previously, in some cases, the Red Hat OpenShift Logging/Elasticsearch dashboard was missing from the OpenShift Container Platform monitoring dashboard. This happened when the dashboard configuration resource referred to an owner in a different namespace, which caused OpenShift Container Platform to garbage-collect that resource. The current release fixes this issue: it removes the ownership reference and cleans up the configuration in the Elasticsearch Operator (EO) reconciler, so the logging dashboard appears in the console. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1910259[*BZ#1910259*]) | | |
| Story Points: | --- | | |
| Clone Of: | | | |
| : | 1914987 (view as bug list) | Environment: | |
| Last Closed: | 2021-02-24 11:22:33 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1914987 | | |
Description
Qiaoling Tang
2020-12-23 08:18:11 UTC
@qit, Hi, Qiaoling, did you install the EO operator in an OCP 4.7 cluster?

(In reply to Hui Kang from comment #1)
> @qit, Hi, Qiaoling, did you install the EO operator in an OCP 4.7 cluster?

Yes. I have a new finding: after I create the cl/instance, the cm/grafana-dashboard-elasticsearch is created, but a few seconds later it is removed:
$ oc get cm -n openshift-config-managed
NAME DATA AGE
bound-sa-token-signing-certs 1 47m
console-public 1 41m
csr-controller-ca 1 47m
default-ingress-cert 1 46m
grafana-dashboard-api-performance 1 50m
grafana-dashboard-cluster-logging 1 30s
grafana-dashboard-cluster-total 1 50m
grafana-dashboard-elasticsearch 1 0s
grafana-dashboard-etcd 1 50m
$ oc get cm -n openshift-config-managed
NAME DATA AGE
bound-sa-token-signing-certs 1 48m
console-public 1 42m
csr-controller-ca 1 48m
default-ingress-cert 1 46m
grafana-dashboard-api-performance 1 51m
grafana-dashboard-cluster-logging 1 57s
grafana-dashboard-cluster-total 1 51m
grafana-dashboard-etcd 1 51m
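The behavior above, where the `grafana-dashboard-elasticsearch` ConfigMap appears and then vanishes within a minute, matches how the Kubernetes garbage collector handles a namespaced dependent whose owner cannot be resolved. An ownerReference carries no namespace field, so the owner is always looked up in the dependent's own namespace; if it is not found there, the dependent is eventually deleted. The following is a minimal sketch of that check, not the actual kube-controller-manager code; the function names and data shapes are hypothetical:

```python
# Simplified sketch of the garbage collector's owner-resolution logic.
# Function names and data structures are hypothetical; the real logic
# lives in kube-controller-manager's garbage collector.

def owner_exists(owners, namespace, ref):
    """ownerReferences carry no namespace field, so the GC resolves
    the owner in the dependent's own namespace."""
    return (ref["kind"], ref["name"], ref["uid"]) in owners.get(namespace, set())

def should_delete(owners, dependent):
    """A dependent is collected when it has ownerReferences but none
    of the referenced owners exist in its namespace."""
    refs = dependent.get("ownerReferences", [])
    return bool(refs) and not any(
        owner_exists(owners, dependent["namespace"], r) for r in refs
    )

# The Elasticsearch CR lives in openshift-logging, but the dashboard
# ConfigMap lives in openshift-config-managed, so the lookup fails.
owners = {
    "openshift-logging": {
        ("Elasticsearch", "elasticsearch",
         "4c9a14a6-cf2d-45ab-b3f0-aab7bba78d30"),
    }
}
cm = {
    "namespace": "openshift-config-managed",
    "ownerReferences": [
        {"kind": "Elasticsearch", "name": "elasticsearch",
         "uid": "4c9a14a6-cf2d-45ab-b3f0-aab7bba78d30"},
    ],
}
print(should_delete(owners, cm))  # prints True
```

The same ConfigMap placed in `openshift-logging` (or stripped of its ownerReferences, as the eventual fix does) would not be collected.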
Below are the logs from the EO:
$ oc logs -n openshift-operators-redhat elasticsearch-operator-77c676d5dc-bc7zj
{"component":"elasticsearch-operator","go_arch":"amd64","go_os":"linux","go_version":"go1.15.5","level":"0","message":"starting up...","operator-sdk_version":"v0.19.4","operator_version":"4.7.0","ts":"2021-01-05T00:49:20.659040486Z"}
I0105 00:49:21.710153 1 request.go:621] Throttling request took 1.038302675s, request: GET:https://172.30.0.1:443/apis/migration.k8s.io/v1alpha1?timeout=32s
I0105 00:49:32.383228 1 request.go:621] Throttling request took 1.045862518s, request: GET:https://172.30.0.1:443/apis/imageregistry.operator.openshift.io/v1?timeout=32s
{"component":"elasticsearch-operator","go_arch":"amd64","go_os":"linux","go_version":"go1.15.5","level":"0","message":"This operator no longer honors the image specified by the custom resources so that it is able to properly coordinate the configuration with the image.","namespace":"","operator-sdk_version":"v0.19.4","operator_version":"4.7.0","ts":"2021-01-05T00:49:34.014883771Z"}
{"component":"elasticsearch-operator","go_arch":"amd64","go_os":"linux","go_version":"go1.15.5","level":"0","message":"Starting the manager.","namespace":"","operator-sdk_version":"v0.19.4","operator_version":"4.7.0","ts":"2021-01-05T00:49:34.014933332Z"}
{"component":"elasticsearch-operator","go_arch":"amd64","go_os":"linux","go_version":"go1.15.5","level":"0","message":"Registering future events","name":{"Namespace":"openshift-logging","Name":"kibana"},"operator-sdk_version":"v0.19.4","operator_version":"4.7.0","ts":"2021-01-05T00:51:39.385800234Z"}
{"cluster":"elasticsearch","component":"elasticsearch-operator","go_arch":"amd64","go_os":"linux","go_version":"go1.15.5","level":"0","message":"Updated Elasticsearch","name":{"Namespace":"openshift-logging","Name":"kibana"},"namespace":"openshift-logging","operator-sdk_version":"v0.19.4","operator_version":"4.7.0","retries":0,"ts":"2021-01-05T00:51:39.815645028Z"}
CSV: elasticsearch-operator.4.7.0-202101020306.p0
I checked the events in the openshift-config-managed namespace; the following warning keeps repeating:

$ oc get events -n openshift-config-managed | grep elastic
6m32s Warning OwnerRefInvalidNamespace configmap/grafana-dashboard-elasticsearch ownerRef [logging.openshift.io/v1/Elasticsearch, namespace: openshift-config-managed, name: elasticsearch, uid: 4c9a14a6-cf2d-45ab-b3f0-aab7bba78d30] does not exist in namespace "openshift-config-managed"
(the same warning repeats at 6m27s, 6m26s, ... down to 1s)

Thanks for this info. Somehow I could not reproduce the error in my local 4.6 cluster. Could you please test whether the issue exists in any 4.6 cluster? If not, I will try to spin up a 4.7 cluster. Thanks.

I'm not able to reproduce this issue in a 4.6 cluster. I deployed logging 4.6 and logging 4.7; neither of them has this issue.

*** Bug 1913688 has been marked as a duplicate of this bug. ***

Verified in elasticsearch-operator.4.7.0-202101080648.p0

Will we backport this fix to 4.6? We hit the same issue when deploying logging 4.6 on an OCP 4.7 cluster. After OCP 4.7 GA, if customers upgrade their OCP version to 4.7 but do not upgrade logging, they will hit this issue.
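The repeated OwnerRefInvalidNamespace events correspond to metadata along the lines of the following on the dashboard ConfigMap. This is a reconstructed sketch, with the field values taken from the event message; the exact manifest is an assumption. Because an ownerReference has no namespace field, the garbage collector resolves the owner in the ConfigMap's own namespace, openshift-config-managed, where no Elasticsearch object exists, and therefore deletes the ConfigMap:

```yaml
# Reconstructed sketch; values taken from the OwnerRefInvalidNamespace event.
apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboard-elasticsearch
  namespace: openshift-config-managed
  ownerReferences:                 # the owner actually lives in openshift-logging
  - apiVersion: logging.openshift.io/v1
    kind: Elasticsearch
    name: elasticsearch
    uid: 4c9a14a6-cf2d-45ab-b3f0-aab7bba78d30
```

The fix described in the Doc Text drops this ownerReferences block and instead has the EO reconciler clean up the ConfigMap itself.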
Yes, I will backport this to 4.6. Thanks for reminding me.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Errata Advisory for Openshift Logging 5.0.0), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2021:0652