Bug 1834576
| Summary: | The ES and Kibana don't mount new secrets after secret/master-certs updated. | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Qiaoling Tang <qitang> |
| Component: | Logging | Assignee: | ewolinet |
| Status: | CLOSED ERRATA | QA Contact: | Anping Li <anli> |
| Severity: | high | Docs Contact: | |
| Priority: | urgent | | |
| Version: | 4.5 | CC: | anli, aos-bugs, ewolinet |
| Target Milestone: | --- | | |
| Target Release: | 4.6.0 | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Story Points: | --- | | |
| Clone Of: | | Environment: | |
| Last Closed: | 2020-10-27 15:58:59 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | | | |
| Bug Blocks: | 1845947 | | |

Doc Text:
Cause: The elasticsearch operator was not updating the secret hash for the kibana deployment.
Consequence: Kibana pods would not be restarted in the event of a secret update.
Fix: Ensured we are correctly updating the hash for the deployment to trigger a redeploy of the pods.
Result: Kibana is correctly redeployed in the event of its secret updating.
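The fix described in the Doc Text relies on a common operator pattern: hash the secret's contents into a pod-template annotation, so that any change to the secret changes the pod template and the Deployment controller rolls new pods. A minimal sketch of the same idea from the command line (the annotation key and the use of `jsonpath` over the secret data are illustrative assumptions, not the operator's actual implementation):

```sh
# Hash the current contents of the kibana secret...
HASH=$(oc get secret kibana -n openshift-logging -o jsonpath='{.data}' | sha256sum | cut -d' ' -f1)

# ...and stamp it onto the deployment's pod template; a changed pod-template
# annotation forces the Deployment controller to re-create the pods.
oc patch deployment kibana -n openshift-logging --type merge \
  -p "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"secret-hash\":\"${HASH}\"}}}}}"
```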
Description
Qiaoling Tang, 2020-05-12 01:50:29 UTC
Can you provide the output of `oc get elasticsearch elasticsearch -o yaml`?

```yaml
spec:
  indexManagement:
    mappings:
    - aliases:
      - app
      - logs-app
      name: app
      policyRef: app-policy
    - aliases:
      - infra
      - logs-infra
      name: infra
      policyRef: infra-policy
    - aliases:
      - audit
      - logs-audit
      name: audit
      policyRef: audit-policy
    policies:
    - name: app-policy
      phases:
        delete:
          minAge: 1d
        hot:
          actions:
            rollover:
              maxAge: 1h
      pollInterval: 15m
    - name: infra-policy
      phases:
        delete:
          minAge: 7d
        hot:
          actions:
            rollover:
              maxAge: 8h
      pollInterval: 15m
    - name: audit-policy
      phases:
        delete:
          minAge: 3w
        hot:
          actions:
            rollover:
              maxAge: 1d
      pollInterval: 15m
  managementState: Managed
  nodeSpec:
    resources:
      requests:
        memory: 2Gi
  nodes:
  - genUUID: c1sc6df6
    nodeCount: 3
    resources: {}
    roles:
    - client
    - data
    - master
    storage:
      size: 20Gi
      storageClassName: gp2
  redundancyPolicy: SingleRedundancy
status:
  cluster:
    activePrimaryShards: 0
    activeShards: 0
    initializingShards: 0
    numDataNodes: 0
    numNodes: 0
    pendingTasks: 0
    relocatingShards: 0
    status: cluster health unknown
    unassignedShards: 0
  clusterHealth: ""
  conditions:
  - lastTransitionTime: "2020-05-13T06:09:25Z"
    status: "True"
    type: Restarting
  nodes:
  - deploymentName: elasticsearch-cdm-c1sc6df6-1
    upgradeStatus:
      underUpgrade: "True"
      upgradePhase: nodeRestarting
  - deploymentName: elasticsearch-cdm-c1sc6df6-2
    upgradeStatus:
      underUpgrade: "True"
      upgradePhase: nodeRestarting
  - deploymentName: elasticsearch-cdm-c1sc6df6-3
    upgradeStatus:
      upgradePhase: controllerUpdated
  pods:
    client:
      failed: []
      notReady: []
      ready:
      - elasticsearch-cdm-c1sc6df6-1-85c9c6d4f-4gxrh
      - elasticsearch-cdm-c1sc6df6-2-68f5555d8-bnwkx
      - elasticsearch-cdm-c1sc6df6-3-66fc769bc-mwvwk
    data:
      failed: []
      notReady: []
      ready:
      - elasticsearch-cdm-c1sc6df6-1-85c9c6d4f-4gxrh
      - elasticsearch-cdm-c1sc6df6-2-68f5555d8-bnwkx
      - elasticsearch-cdm-c1sc6df6-3-66fc769bc-mwvwk
    master:
      failed: []
      notReady: []
      ready:
      - elasticsearch-cdm-c1sc6df6-1-85c9c6d4f-4gxrh
      - elasticsearch-cdm-c1sc6df6-2-68f5555d8-bnwkx
      - elasticsearch-cdm-c1sc6df6-3-66fc769bc-mwvwk
  shardAllocationEnabled: shard allocation unknown
```

I'm unable to reproduce this with the latest EO image:

1. Set clusterlogging/instance to Unmanaged
2. Delete secret/master-certs
3. Delete CLO pod
4. Set clusterlogging/instance to Managed
5. Observe all 3 of my ES pods get restarted

Can you please retest and confirm you still see this?
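For reference, those reproduction steps map to commands like the following (a sketch; the operator pod's label selector is an assumption):

```sh
oc patch clusterlogging instance -n openshift-logging --type merge \
  -p '{"spec":{"managementState":"Unmanaged"}}'
oc delete secret master-certs -n openshift-logging
oc delete pod -n openshift-logging -l name=cluster-logging-operator   # label assumed
oc patch clusterlogging instance -n openshift-logging --type merge \
  -p '{"spec":{"managementState":"Managed"}}'
oc get pods -n openshift-logging -w   # all three ES pods should restart
```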
{"type":"log","@timestamp":"2020-05-29T06:46:22Z","tags":["error","elasticsearch","admin"],"pid":119,"message":"Request error, retrying\nGET https://elasticsearch.openshift-logging.svc.cluster.local:9200/.kibana/doc/config%3A6.8.1 => unable to verify the first certificate"} Elasticsearch WARNING: 2020-05-29T06:46:32Z Unable to revive connection: https://elasticsearch.openshift-logging.svc.cluster.local:9200/ Elasticsearch WARNING: 2020-05-29T06:46:32Z No living connections Elasticsearch WARNING: 2020-05-29T06:46:32Z Unable to revive connection: https://elasticsearch.openshift-logging.svc.cluster.local:9200/ Elasticsearch WARNING: 2020-05-29T06:46:32Z No living connections Elasticsearch ERROR: 2020-05-29T06:46:32Z Error: Request error, retrying GET https://elasticsearch.openshift-logging.svc.cluster.local:9200/_opendistro/_security/api/permissionsinfo => unable to verify the first certificate at Log.error (/opt/app-root/src/node_modules/elasticsearch/src/lib/log.js:226:56) at checkRespForFailure (/opt/app-root/src/node_modules/elasticsearch/src/lib/transport.js:259:18) at HttpConnector.<anonymous> (/opt/app-root/src/node_modules/elasticsearch/src/lib/connectors/http.js:164:7) at ClientRequest.wrapper (/opt/app-root/src/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19) at ClientRequest.emit (events.js:198:13) at TLSSocket.socketErrorListener (_http_client.js:401:9) at TLSSocket.emit (events.js:198:13) at emitErrorNT (internal/streams/destroy.js:91:8) at emitErrorAndCloseNT (internal/streams/destroy.js:59:3) at process._tickCallback (internal/process/next_tick.js:63:19) Elasticsearch WARNING: 2020-05-29T06:46:33Z Unable to revive connection: https://elasticsearch.openshift-logging.svc.cluster.local:9200/ Elasticsearch WARNING: 2020-05-29T06:46:33Z No living connections {"type":"error","@timestamp":"2020-05-29T06:46:32Z","tags":[],"pid":119,"level":"error","error":{"message":"No Living connections: No Living connections","name":"Error","stack":"Error: No Living connections\n at sendReqWithConnection (/opt/app-root/src/node_modules/elasticsearch/src/lib/transport.js:226:15)\n at next (/opt/app-root/src/node_modules/elasticsearch/src/lib/connection_pool.js:214:7)\n at process._tickCallback (internal/process/next_tick.js:61:11)"},"url":{"protocol":null,"slashes":null,"auth":null,"host":null,"port":null,"hostname":null,"hash":null,"search":null,"query":{},"pathname":"/api/v1/restapiinfo","path":"/api/v1/restapiinfo","href":"/api/v1/restapiinfo"},"message":"No Living connections: No Living connections"} Elasticsearch WARNING: 2020-05-29T06:46:51Z Unable to revive connection: https://elasticsearch.openshift-logging.svc.cluster.local:9200/ The logs for Kibana in https://bugzilla.redhat.com/show_bug.cgi?id=1834576#c4 is due to elasticsearch not being ready. It may have overlap with another bz. Can you please provide the output of the elasticsearch CR and the logs from EO? The ES has been updated, but the Kibana hasn't. 
The ES has been updated, but the Kibana hasn't.

```
$ oc get pod
NAME                                            READY   STATUS      RESTARTS   AGE
cluster-logging-operator-98f5c5fd-hqbtg         1/1     Running     0          16m
elasticsearch-cdm-08a8icmo-1-77c885464f-znm4v   2/2     Running     0          9m10s
elasticsearch-cdm-08a8icmo-2-55644885bd-d7tw4   2/2     Running     0          9m10s
elasticsearch-cdm-08a8icmo-3-7998cb4dc-5rj59    2/2     Running     0          9m10s
elasticsearch-delete-app-1591148700-xs5vf       0/1     Completed   0          102s
elasticsearch-delete-infra-1591148700-cj8xg     0/1     Completed   0          102s
elasticsearch-rollover-app-1591148700-fsxsp     0/1     Completed   0          102s
elasticsearch-rollover-infra-1591148700-6p4t8   0/1     Completed   0          102s
fluentd-5jqg9                                   1/1     Running     0          25m
fluentd-7mt9z                                   1/1     Running     0          25m
fluentd-9x9qt                                   1/1     Running     0          25m
fluentd-gwb6b                                   1/1     Running     0          25m
fluentd-pzg6s                                   1/1     Running     0          25m
fluentd-z5x6z                                   1/1     Running     0          25m
kibana-7f5df6fd-9l89g                           2/2     Running     0          24m
```

```
$ oc get elasticsearch -oyaml
apiVersion: v1
items:
- apiVersion: logging.openshift.io/v1
  kind: Elasticsearch
  metadata:
    creationTimestamp: "2020-06-03T01:21:29Z"
    generation: 18
    managedFields:
    - apiVersion: logging.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:ownerReferences:
            .: {}
            k:{"uid":"d1f9a90c-c76f-41fa-8c37-2ec7bdbeae87"}:
              .: {}
              f:apiVersion: {}
              f:controller: {}
              f:kind: {}
              f:name: {}
              f:uid: {}
        f:spec:
          .: {}
          f:indexManagement:
            .: {}
            f:mappings: {}
            f:policies: {}
          f:managementState: {}
          f:nodeSpec:
            .: {}
            f:resources:
              .: {}
              f:requests:
                .: {}
                f:memory: {}
          f:redundancyPolicy: {}
        f:status:
          .: {}
          f:cluster:
            .: {}
            f:initializingShards: {}
            f:pendingTasks: {}
            f:unassignedShards: {}
          f:clusterHealth: {}
          f:pods: {}
      manager: cluster-logging-operator
      operation: Update
      time: "2020-06-03T01:21:29Z"
    - apiVersion: logging.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:nodes: {}
        f:status:
          f:cluster:
            f:activePrimaryShards: {}
            f:activeShards: {}
            f:numDataNodes: {}
            f:numNodes: {}
            f:relocatingShards: {}
            f:status: {}
          f:conditions: {}
          f:nodes: {}
          f:pods:
            f:client:
              .: {}
              f:failed: {}
              f:notReady: {}
              f:ready: {}
            f:data:
              .: {}
              f:failed: {}
              f:notReady: {}
              f:ready: {}
            f:master:
              .: {}
              f:failed: {}
              f:notReady: {}
              f:ready: {}
          f:shardAllocationEnabled: {}
      manager: elasticsearch-operator
      operation: Update
      time: "2020-06-03T01:38:17Z"
    name: elasticsearch
    namespace: openshift-logging
    ownerReferences:
    - apiVersion: logging.openshift.io/v1
      controller: true
      kind: ClusterLogging
      name: instance
      uid: d1f9a90c-c76f-41fa-8c37-2ec7bdbeae87
    resourceVersion: "72916"
    selfLink: /apis/logging.openshift.io/v1/namespaces/openshift-logging/elasticsearches/elasticsearch
    uid: e69379ff-e00a-4834-b87a-513e5dc84895
  spec:
    indexManagement:
      mappings:
      - aliases:
        - app
        - logs.app
        name: app
        policyRef: app-policy
      - aliases:
        - infra
        - logs.infra
        name: infra
        policyRef: infra-policy
      policies:
      - name: app-policy
        phases:
          delete:
            minAge: 1d
          hot:
            actions:
              rollover:
                maxAge: 1h
        pollInterval: 15m
      - name: infra-policy
        phases:
          delete:
            minAge: 7d
          hot:
            actions:
              rollover:
                maxAge: 8h
        pollInterval: 15m
    managementState: Managed
    nodeSpec:
      resources:
        requests:
          memory: 4Gi
    nodes:
    - genUUID: 08a8icmo
      nodeCount: 3
      resources: {}
      roles:
      - client
      - data
      - master
      storage:
        size: 20Gi
        storageClassName: gp2
    redundancyPolicy: SingleRedundancy
  status:
    cluster:
      activePrimaryShards: 11
      activeShards: 22
      initializingShards: 0
      numDataNodes: 3
      numNodes: 3
      pendingTasks: 0
      relocatingShards: 0
      status: green
      unassignedShards: 0
    clusterHealth: ""
    conditions: []
    nodes:
    - deploymentName: elasticsearch-cdm-08a8icmo-1
      upgradeStatus:
        upgradePhase: controllerUpdated
    - deploymentName: elasticsearch-cdm-08a8icmo-2
      upgradeStatus:
        upgradePhase: controllerUpdated
    - deploymentName: elasticsearch-cdm-08a8icmo-3
      upgradeStatus:
        upgradePhase: controllerUpdated
    pods:
      client:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-08a8icmo-1-77c885464f-znm4v
        - elasticsearch-cdm-08a8icmo-2-55644885bd-d7tw4
        - elasticsearch-cdm-08a8icmo-3-7998cb4dc-5rj59
      data:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-08a8icmo-1-77c885464f-znm4v
        - elasticsearch-cdm-08a8icmo-2-55644885bd-d7tw4
        - elasticsearch-cdm-08a8icmo-3-7998cb4dc-5rj59
      master:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-08a8icmo-1-77c885464f-znm4v
        - elasticsearch-cdm-08a8icmo-2-55644885bd-d7tw4
        - elasticsearch-cdm-08a8icmo-3-7998cb4dc-5rj59
    shardAllocationEnabled: all
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
```

```
$ oc logs -n openshift-operators-redhat elasticsearch-operator-57bd69d85-t45lg
{"level":"info","ts":1591147265.5221949,"logger":"cmd","msg":"Go Version: go1.13.4"}
{"level":"info","ts":1591147265.5222158,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1591147265.5222197,"logger":"cmd","msg":"Version of operator-sdk: v0.8.2"}
{"level":"info","ts":1591147265.5232148,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1591147265.6868517,"logger":"leader","msg":"No pre-existing lock was found."}
{"level":"info","ts":1591147265.6928344,"logger":"leader","msg":"Became the leader."}
{"level":"info","ts":1591147265.8200235,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1591147265.820503,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibana-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1591147265.8206809,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"elasticsearch-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1591147265.8208663,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"proxyconfig-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1591147265.8209927,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibanasecret-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1591147265.8211472,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"trustedcabundle-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1591147266.0386891,"logger":"metrics","msg":"Metrics Service object created","Service.Name":"elasticsearch-operator","Service.Namespace":"openshift-operators-redhat"}
{"level":"info","ts":1591147266.0387192,"logger":"cmd","msg":"This operator no longer honors the image specified by the custom resources so that it is able to properly coordinate the configuration with the image."}
{"level":"info","ts":1591147266.0387254,"logger":"cmd","msg":"Starting the Cmd."}
W0603 01:21:06.195090 1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: watch of *v1.Kibana ended with: too old resource version: 64907 (64908)
{"level":"info","ts":1591147266.738974,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"trustedcabundle-controller"}
{"level":"info","ts":1591147266.7390163,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"proxyconfig-controller"}
{"level":"info","ts":1591147266.7390492,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"kibanasecret-controller"}
{"level":"info","ts":1591147266.7390227,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"kibana-controller"}
{"level":"info","ts":1591147266.7390113,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"elasticsearch-controller"} {"level":"info","ts":1591147266.8391533,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"trustedcabundle-controller","worker count":1} {"level":"info","ts":1591147266.8391838,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"kibana-controller","worker count":1} {"level":"info","ts":1591147266.8391578,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"proxyconfig-controller","worker count":1} {"level":"info","ts":1591147266.8391824,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"kibanasecret-controller","worker count":1} {"level":"info","ts":1591147266.8391652,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"elasticsearch-controller","worker count":1} {"level":"error","ts":1591147266.8393264,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} {"level":"error","ts":1591147267.8395433,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result 
set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} {"level":"error","ts":1591147268.8397605,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} {"level":"error","ts":1591147269.8399775,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result 
set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} {"level":"error","ts":1591147270.8402362,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} {"level":"error","ts":1591147271.8404636,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result 
set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} {"level":"error","ts":1591147272.8407023,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} {"level":"error","ts":1591147273.8409078,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result 
set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} {"level":"error","ts":1591147274.8411415,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} {"level":"error","ts":1591147276.1213143,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result 
set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} {"level":"error","ts":1591147278.6814668,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} {"level":"error","ts":1591147283.8016648,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result 
set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} time="2020-06-03T01:21:29Z" level=error msg="Operator unable to read local file to get contents: open /tmp/ocp-eo/ca.crt: no such file or directory" time="2020-06-03T01:21:29Z" level=error msg="Operator unable to read local file to get contents: open /tmp/ocp-eo/ca.crt: no such file or directory" time="2020-06-03T01:21:59Z" level=info msg="skipping kibana migrations: no index \".kibana\" available" {"level":"error","ts":1591147319.6436105,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"kibana-controller","request":"openshift-logging/kibana","error":"Did not receive hashvalue for trusted CA value","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} time="2020-06-03T01:22:15Z" level=warning msg="unable to get cluster node count. 
E: Get https://elasticsearch.openshift-logging.svc:9200/_cluster/health: dial tcp 172.30.187.243:9200: i/o timeout\r\n" time="2020-06-03T01:22:30Z" level=info msg="skipping kibana migrations: no index \".kibana\" available" time="2020-06-03T01:22:31Z" level=info msg="Updating status of Kibana" time="2020-06-03T01:22:31Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:22:31Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:23:01Z" level=info msg="skipping kibana migrations: no index \".kibana\" available" time="2020-06-03T01:23:01Z" level=info msg="Updating status of Kibana" time="2020-06-03T01:23:01Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:23:01Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:23:17Z" level=warning msg="unable to get cluster node count. E: Get https://elasticsearch.openshift-logging.svc:9200/_cluster/health: dial tcp 172.30.187.243:9200: i/o timeout\r\n" time="2020-06-03T01:23:31Z" level=info msg="skipping kibana migrations: no index \".kibana\" available" time="2020-06-03T01:23:31Z" level=info msg="Updating status of Kibana" time="2020-06-03T01:23:31Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:23:31Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:24:01Z" level=info msg="skipping kibana migrations: no index \".kibana\" available" time="2020-06-03T01:24:01Z" level=info msg="Updating status of Kibana" time="2020-06-03T01:24:01Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:24:01Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:24:17Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:24:17Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:24:18Z" level=warning msg="unable to get cluster node count. 
E: Get https://elasticsearch.openshift-logging.svc:9200/_cluster/health: dial tcp 172.30.187.243:9200: i/o timeout\r\n" time="2020-06-03T01:24:31Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:24:32Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:24:32Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:25:02Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:25:02Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:25:02Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:25:32Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:25:32Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:25:32Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:26:02Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:26:02Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:26:03Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:26:33Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:26:33Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:26:33Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:27:03Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:27:03Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:27:03Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:27:33Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:27:33Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:27:33Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:28:04Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:28:04Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:28:04Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:28:34Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:28:34Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:28:34Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:29:04Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:29:04Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:29:04Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:29:34Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:29:35Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:29:35Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:30:05Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:30:05Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" 
time="2020-06-03T01:30:05Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:30:21Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:30:21Z" level=info msg="skipping kibana migrations: no index \".kibana\" available" time="2020-06-03T01:30:22Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:30:35Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:30:35Z" level=info msg="skipping kibana migrations: no index \".kibana\" available" time="2020-06-03T01:30:35Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:30:45Z" level=warning msg="unable to get cluster node count. E: Get https://elasticsearch.openshift-logging.svc:9200/_cluster/health: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")\r\n" time="2020-06-03T01:30:46Z" level=warning msg="unable to get cluster node count. E: Get https://elasticsearch.openshift-logging.svc:9200/_cluster/health: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")\r\n" time="2020-06-03T01:30:47Z" level=warning msg="unable to get cluster node count. E: Get https://elasticsearch.openshift-logging.svc:9200/_cluster/health: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")\r\n" time="2020-06-03T01:30:47Z" level=warning msg="Unable to list existing templates in order to reconcile stale ones: Get https://elasticsearch.openshift-logging.svc:9200/_template: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")" time="2020-06-03T01:30:47Z" level=error msg="Error creating index template for mapping app: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")" {"level":"error","ts":1591147847.8516762,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate 
\"openshift-cluster-logging-signer\")","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} time="2020-06-03T01:30:49Z" level=info msg="Beginning full cluster restart for cert redeploy on elasticsearch" time="2020-06-03T01:30:49Z" level=warning msg="Unable to set shard allocation to primaries: Put https://elasticsearch.openshift-logging.svc:9200/_cluster/settings: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")" time="2020-06-03T01:30:49Z" level=warning msg="Unable to perform synchronized flush: Post https://elasticsearch.openshift-logging.svc:9200/_flush/synced: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")" time="2020-06-03T01:30:49Z" level=warning msg="Unable to get cluster size prior to restart for elasticsearch-cdm-08a8icmo-1" time="2020-06-03T01:30:49Z" level=warning msg="Unable to get cluster size prior to restart for elasticsearch-cdm-08a8icmo-2" time="2020-06-03T01:30:49Z" level=warning msg="Unable to get cluster size prior to restart for elasticsearch-cdm-08a8icmo-3" time="2020-06-03T01:30:49Z" level=warning msg="Unable to list existing templates in order to reconcile stale ones: Get https://elasticsearch.openshift-logging.svc:9200/_template: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")" time="2020-06-03T01:30:49Z" level=error msg="Error creating index template for mapping app: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")" {"level":"error","ts":1591147849.2651162,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate 
\"openshift-cluster-logging-signer\")","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} time="2020-06-03T01:31:05Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:31:05Z" level=info msg="skipping kibana migrations: no index \".kibana\" available" time="2020-06-03T01:31:05Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:31:36Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:31:36Z" level=info msg="skipping kibana migrations: no index \".kibana\" available" time="2020-06-03T01:31:36Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:31:50Z" level=info msg="Timed out waiting for elasticsearch-cdm-08a8icmo-1 to leave the cluster" time="2020-06-03T01:32:06Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:32:36Z" level=info msg="skipping kibana migrations: no index \".kibana\" available" time="2020-06-03T01:32:36Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:33:06Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:33:33Z" level=info msg="Timed out waiting for elasticsearch-cdm-08a8icmo-2 to leave the cluster" time="2020-06-03T01:33:36Z" level=info msg="skipping kibana migrations: no index \".kibana\" available" time="2020-06-03T01:33:36Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:34:06Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:34:36Z" level=info msg="skipping kibana migrations: no index \".kibana\" available" time="2020-06-03T01:34:36Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:35:05Z" level=info msg="Timed out waiting for elasticsearch-cdm-08a8icmo-3 to leave the cluster" time="2020-06-03T01:35:06Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:35:36Z" level=info msg="skipping kibana migrations: no index \".kibana\" available" time="2020-06-03T01:35:36Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:36:06Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:36:36Z" level=info msg="skipping kibana migrations: no index \".kibana\" available" time="2020-06-03T01:36:36Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:37:05Z" level=warning msg="Unable to list existing 
templates in order to reconcile stale ones: Get https://elasticsearch.openshift-logging.svc:9200/_template: dial tcp 172.30.187.243:9200: i/o timeout" time="2020-06-03T01:37:06Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:37:35Z" level=error msg="Error creating index template for mapping app: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: dial tcp 172.30.187.243:9200: i/o timeout" {"level":"error","ts":1591148255.6102886,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: dial tcp 172.30.187.243:9200: i/o timeout","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"} time="2020-06-03T01:37:36Z" level=info msg="skipping kibana migrations: no index \".kibana\" available" time="2020-06-03T01:37:37Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:38:06Z" level=warning msg="Unable to enable shard allocation: Put https://elasticsearch.openshift-logging.svc:9200/_cluster/settings: dial tcp 172.30.187.243:9200: i/o timeout" time="2020-06-03T01:38:07Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:38:14Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:38:15Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:38:15Z" level=info msg="Completed full cluster restart for cert redeploy on elasticsearch" time="2020-06-03T01:38:45Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:38:45Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:38:45Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:39:15Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:39:15Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:39:15Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:39:45Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:39:45Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:39:46Z" level=info msg="Kibana status successfully 
updated" time="2020-06-03T01:40:16Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:40:16Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:40:16Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:40:46Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:40:46Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:40:46Z" level=info msg="Kibana status successfully updated" W0603 01:41:04.321429 1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old. time="2020-06-03T01:41:16Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:41:16Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:41:17Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:41:47Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:41:47Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:41:47Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:42:17Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:42:17Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:42:17Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:42:47Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:42:47Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:42:47Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:43:18Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:43:18Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:43:18Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:43:48Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:43:48Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:43:48Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:44:18Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:44:18Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:44:18Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:44:49Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:44:49Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:44:49Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:45:19Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:45:19Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:45:19Z" level=info msg="Kibana status successfully updated" time="2020-06-03T01:45:49Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed" time="2020-06-03T01:45:49Z" 
level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\"" time="2020-06-03T01:45:49Z" level=info msg="Kibana status successfully updated" Verified. The Kibana pod is restarted. Kibana works as expected. Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:4196 |
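For anyone re-verifying, the check reduces to regenerating the certificates and confirming Kibana rolls on its own (a sketch; the label selector is an assumption):

```sh
oc delete secret master-certs -n openshift-logging
# Once the operator regenerates the secrets, the kibana pod should be
# re-created automatically; its AGE resets without manual intervention.
oc get pods -n openshift-logging -l component=kibana -w
```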