Bug 1834576 - ES and Kibana don't mount new secrets after secret/master-certs is updated.
Summary: ES and Kibana don't mount new secrets after secret/master-certs is updated.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: ---
Target Release: 4.6.0
Assignee: ewolinet
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks: 1845947
 
Reported: 2020-05-12 01:50 UTC by Qiaoling Tang
Modified: 2020-10-27 15:59 UTC
CC: 3 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause: The elasticsearch operator was not updating the secret hash for the kibana deployment. Consequence: Kibana pods were not restarted when the secret was updated. Fix: Ensure the hash on the deployment is updated correctly so that a redeploy of the pods is triggered. Result: Kibana is correctly redeployed when its secret is updated.
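A minimal verification sketch for this fix (the openshift-logging namespace and the component=kibana pod label are assumptions, not taken from this bug): update or delete secret/master-certs, let the CLO regenerate it, and confirm the kibana deployment rolls out new pods.

$ oc -n openshift-logging delete secret master-certs          # CLO regenerates the secret while Managed
$ oc -n openshift-logging rollout status deployment/kibana    # should report a new rollout once the secret hash changes
$ oc -n openshift-logging get pods -l component=kibana        # label is assumed; pod AGE should reset after the redeploy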
Clone Of:
Environment:
Last Closed: 2020-10-27 15:58:59 UTC
Target Upstream Version:
Embargoed:




Links
GitHub: openshift/elasticsearch-operator pull 390 (closed) - Bug 1834576: Fixing kibana not rolling out with secret update - last updated 2021-02-15 12:33:28 UTC
Red Hat Product Errata: RHBA-2020:4196 - last updated 2020-10-27 15:59:20 UTC

Description Qiaoling Tang 2020-05-12 01:50:29 UTC
Description of problem:
Fluentd couldn't connect to ES after secret/master-certs was regenerated. It looks like Kibana and ES didn't pick up the new secrets, while Fluentd was updated to use them.

Logs in the Fluentd pod:
$ oc logs fluentd-x2g6h
2020-05-12 00:41:11 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=0 next_retry_seconds=2020-05-12 00:41:12 +0000 chunk="5a568aa5350d18f130acd5aa90c78a59" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): Connection refused - connect(2) for 172.30.106.176:9200 (Errno::ECONNREFUSED)"
  2020-05-12 00:41:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-4.0.5/lib/fluent/plugin/out_elasticsearch.rb:962:in `rescue in send_bulk'
  2020-05-12 00:41:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-4.0.5/lib/fluent/plugin/out_elasticsearch.rb:924:in `send_bulk'
  2020-05-12 00:41:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-4.0.5/lib/fluent/plugin/out_elasticsearch.rb:758:in `block in write'
  2020-05-12 00:41:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-4.0.5/lib/fluent/plugin/out_elasticsearch.rb:757:in `each'
  2020-05-12 00:41:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-4.0.5/lib/fluent/plugin/out_elasticsearch.rb:757:in `write'
  2020-05-12 00:41:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.9.2/lib/fluent/plugin/output.rb:1133:in `try_flush'
  2020-05-12 00:41:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.9.2/lib/fluent/plugin/output.rb:1439:in `flush_thread_run'
  2020-05-12 00:41:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.9.2/lib/fluent/plugin/output.rb:461:in `block (2 levels) in start'
  2020-05-12 00:41:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.9.2/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
2020-05-12 00:41:44 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=1 next_retry_seconds=2020-05-12 00:41:45 +0000 chunk="5a568aa5350d18f130acd5aa90c78a59" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): [503] Open Distro not initialized"
  2020-05-12 00:41:44 +0000 [warn]: suppressed same stacktrace
2020-05-12 00:41:46 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=2 next_retry_seconds=2020-05-12 00:41:48 +0000 chunk="5a568aa5350d18f130acd5aa90c78a59" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): [503] Open Distro not initialized"
  2020-05-12 00:41:46 +0000 [warn]: suppressed same stacktrace
2020-05-12 00:41:48 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=3 next_retry_seconds=2020-05-12 00:41:52 +0000 chunk="5a568aa5350d18f130acd5aa90c78a59" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): [503] Open Distro not initialized"
  2020-05-12 00:41:48 +0000 [warn]: suppressed same stacktrace
2020-05-12 00:41:52 +0000 [warn]: [clo_default_output_es] retry succeeded. chunk_id="5a568aa5350d18f130acd5aa90c78a59"
2020-05-12 00:42:12 +0000 [warn]: [clo_default_output_es] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=64.46370552199733 slow_flush_log_threshold=20.0 plugin_id="clo_default_output_es"
2020-05-12 00:43:25 +0000 [warn]: [retry_clo_default_output_es] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=64.77023854700019 slow_flush_log_threshold=20.0 plugin_id="retry_clo_default_output_es"
2020-05-12 00:43:25 +0000 [warn]: [clo_default_output_es] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=60.05987231400286 slow_flush_log_threshold=20.0 plugin_id="clo_default_output_es"
2020-05-12 00:43:29 +0000 [warn]: [clo_default_output_es] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=60.05105542599995 slow_flush_log_threshold=20.0 plugin_id="clo_default_output_es"
2020-05-12 00:43:56 +0000 [warn]: [clo_default_output_es] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=31.028722502000164 slow_flush_log_threshold=20.0 plugin_id="clo_default_output_es"
2020-05-12 00:43:56 +0000 [warn]: [retry_clo_default_output_es] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=98.55270952800129 slow_flush_log_threshold=20.0 plugin_id="retry_clo_default_output_es"
2020-05-12 00:43:57 +0000 [warn]: [clo_default_output_es] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=27.71531252299974 slow_flush_log_threshold=20.0 plugin_id="clo_default_output_es"
2020-05-12 00:43:57 +0000 [warn]: [retry_clo_default_output_es] buffer flush took longer time than slow_flush_log_threshold: elapsed_time=32.02300949600249 slow_flush_log_threshold=20.0 plugin_id="retry_clo_default_output_es"
2020-05-12 01:03:11 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=0 next_retry_seconds=2020-05-12 01:03:12 +0000 chunk="5a56906dc2438626c68475e23c2fc181" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate) (OpenSSL::SSL::SSLError) Unable to verify certificate. This may be an issue with the remote host or with Excon. Excon has certificates bundled, but these can be customized:\n\n            `Excon.defaults[:ssl_ca_path] = path_to_certs`\n            `ENV['SSL_CERT_DIR'] = path_to_certs`\n            `Excon.defaults[:ssl_ca_file] = path_to_file`\n            `ENV['SSL_CERT_FILE'] = path_to_file`\n            `Excon.defaults[:ssl_verify_callback] = callback`\n                (see OpenSSL::SSL::SSLContext#verify_callback)\nor:\n            `Excon.defaults[:ssl_verify_peer] = false` (less secure).\n"
  2020-05-12 01:03:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-4.0.5/lib/fluent/plugin/out_elasticsearch.rb:962:in `rescue in send_bulk'
  2020-05-12 01:03:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-4.0.5/lib/fluent/plugin/out_elasticsearch.rb:924:in `send_bulk'
  2020-05-12 01:03:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-4.0.5/lib/fluent/plugin/out_elasticsearch.rb:758:in `block in write'
  2020-05-12 01:03:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-4.0.5/lib/fluent/plugin/out_elasticsearch.rb:757:in `each'
  2020-05-12 01:03:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluent-plugin-elasticsearch-4.0.5/lib/fluent/plugin/out_elasticsearch.rb:757:in `write'
  2020-05-12 01:03:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.9.2/lib/fluent/plugin/output.rb:1133:in `try_flush'
  2020-05-12 01:03:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.9.2/lib/fluent/plugin/output.rb:1439:in `flush_thread_run'
  2020-05-12 01:03:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.9.2/lib/fluent/plugin/output.rb:461:in `block (2 levels) in start'
  2020-05-12 01:03:11 +0000 [warn]: /opt/rh/rh-ruby25/root/usr/local/share/gems/gems/fluentd-1.9.2/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
2020-05-12 01:03:11 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=1 next_retry_seconds=2020-05-12 01:03:12 +0000 chunk="5a56906eb5d3a871b8d3443702693564" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate) (OpenSSL::SSL::SSLError) Unable to verify certificate. This may be an issue with the remote host or with Excon. Excon has certificates bundled, but these can be customized:\n\n            `Excon.defaults[:ssl_ca_path] = path_to_certs`\n            `ENV['SSL_CERT_DIR'] = path_to_certs`\n            `Excon.defaults[:ssl_ca_file] = path_to_file`\n            `ENV['SSL_CERT_FILE'] = path_to_file`\n            `Excon.defaults[:ssl_verify_callback] = callback`\n                (see OpenSSL::SSL::SSLContext#verify_callback)\nor:\n            `Excon.defaults[:ssl_verify_peer] = false` (less secure).\n"
  2020-05-12 01:03:11 +0000 [warn]: suppressed same stacktrace
2020-05-12 01:03:12 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=2 next_retry_seconds=2020-05-12 01:03:14 +0000 chunk="5a56906eb5d3a871b8d3443702693564" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate) (OpenSSL::SSL::SSLError) Unable to verify certificate. This may be an issue with the remote host or with Excon. Excon has certificates bundled, but these can be customized:\n\n            `Excon.defaults[:ssl_ca_path] = path_to_certs`\n            `ENV['SSL_CERT_DIR'] = path_to_certs`\n            `Excon.defaults[:ssl_ca_file] = path_to_file`\n            `ENV['SSL_CERT_FILE'] = path_to_file`\n            `Excon.defaults[:ssl_verify_callback] = callback`\n                (see OpenSSL::SSL::SSLContext#verify_callback)\nor:\n            `Excon.defaults[:ssl_verify_peer] = false` (less secure).\n"
  2020-05-12 01:03:12 +0000 [warn]: suppressed same stacktrace
2020-05-12 01:03:14 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=3 next_retry_seconds=2020-05-12 01:03:18 +0000 chunk="5a56906dc2438626c68475e23c2fc181" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate) (OpenSSL::SSL::SSLError) Unable to verify certificate. This may be an issue with the remote host or with Excon. Excon has certificates bundled, but these can be customized:\n\n            `Excon.defaults[:ssl_ca_path] = path_to_certs`\n            `ENV['SSL_CERT_DIR'] = path_to_certs`\n            `Excon.defaults[:ssl_ca_file] = path_to_file`\n            `ENV['SSL_CERT_FILE'] = path_to_file`\n            `Excon.defaults[:ssl_verify_callback] = callback`\n                (see OpenSSL::SSL::SSLContext#verify_callback)\nor:\n            `Excon.defaults[:ssl_verify_peer] = false` (less secure).\n"
  2020-05-12 01:03:14 +0000 [warn]: suppressed same stacktrace
2020-05-12 01:03:14 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=4 next_retry_seconds=2020-05-12 01:03:22 +0000 chunk="5a56906eb5d3a871b8d3443702693564" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate) (OpenSSL::SSL::SSLError) Unable to verify certificate. This may be an issue with the remote host or with Excon. Excon has certificates bundled, but these can be customized:\n\n            `Excon.defaults[:ssl_ca_path] = path_to_certs`\n            `ENV['SSL_CERT_DIR'] = path_to_certs`\n            `Excon.defaults[:ssl_ca_file] = path_to_file`\n            `ENV['SSL_CERT_FILE'] = path_to_file`\n            `Excon.defaults[:ssl_verify_callback] = callback`\n                (see OpenSSL::SSL::SSLContext#verify_callback)\nor:\n            `Excon.defaults[:ssl_verify_peer] = false` (less secure).\n"
  2020-05-12 01:03:14 +0000 [warn]: suppressed same stacktrace
2020-05-12 01:03:22 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=5 next_retry_seconds=2020-05-12 01:03:39 +0000 chunk="5a56906eb5d3a871b8d3443702693564" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate) (OpenSSL::SSL::SSLError) Unable to verify certificate. This may be an issue with the remote host or with Excon. Excon has certificates bundled, but these can be customized:\n\n            `Excon.defaults[:ssl_ca_path] = path_to_certs`\n            `ENV['SSL_CERT_DIR'] = path_to_certs`\n            `Excon.defaults[:ssl_ca_file] = path_to_file`\n            `ENV['SSL_CERT_FILE'] = path_to_file`\n            `Excon.defaults[:ssl_verify_callback] = callback`\n                (see OpenSSL::SSL::SSLContext#verify_callback)\nor:\n            `Excon.defaults[:ssl_verify_peer] = false` (less secure).\n"
  2020-05-12 01:03:22 +0000 [warn]: suppressed same stacktrace
2020-05-12 01:03:22 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=6 next_retry_seconds=2020-05-12 01:03:54 +0000 chunk="5a56906dc2438626c68475e23c2fc181" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate) (OpenSSL::SSL::SSLError) Unable to verify certificate. This may be an issue with the remote host or with Excon. Excon has certificates bundled, but these can be customized:\n\n            `Excon.defaults[:ssl_ca_path] = path_to_certs`\n            `ENV['SSL_CERT_DIR'] = path_to_certs`\n            `Excon.defaults[:ssl_ca_file] = path_to_file`\n            `ENV['SSL_CERT_FILE'] = path_to_file`\n            `Excon.defaults[:ssl_verify_callback] = callback`\n                (see OpenSSL::SSL::SSLContext#verify_callback)\nor:\n            `Excon.defaults[:ssl_verify_peer] = false` (less secure).\n"
  2020-05-12 01:03:22 +0000 [warn]: suppressed same stacktrace
2020-05-12 01:03:54 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=7 next_retry_seconds=2020-05-12 01:05:05 +0000 chunk="5a56906eb5d3a871b8d3443702693564" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate) (OpenSSL::SSL::SSLError) Unable to verify certificate. This may be an issue with the remote host or with Excon. Excon has certificates bundled, but these can be customized:\n\n            `Excon.defaults[:ssl_ca_path] = path_to_certs`\n            `ENV['SSL_CERT_DIR'] = path_to_certs`\n            `Excon.defaults[:ssl_ca_file] = path_to_file`\n            `ENV['SSL_CERT_FILE'] = path_to_file`\n            `Excon.defaults[:ssl_verify_callback] = callback`\n                (see OpenSSL::SSL::SSLContext#verify_callback)\nor:\n            `Excon.defaults[:ssl_verify_peer] = false` (less secure).\n"
  2020-05-12 01:03:54 +0000 [warn]: suppressed same stacktrace
2020-05-12 01:03:54 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=8 next_retry_seconds=2020-05-12 01:06:15 +0000 chunk="5a56906dc2438626c68475e23c2fc181" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate) (OpenSSL::SSL::SSLError) Unable to verify certificate. This may be an issue with the remote host or with Excon. Excon has certificates bundled, but these can be customized:\n\n            `Excon.defaults[:ssl_ca_path] = path_to_certs`\n            `ENV['SSL_CERT_DIR'] = path_to_certs`\n            `Excon.defaults[:ssl_ca_file] = path_to_file`\n            `ENV['SSL_CERT_FILE'] = path_to_file`\n            `Excon.defaults[:ssl_verify_callback] = callback`\n                (see OpenSSL::SSL::SSLContext#verify_callback)\nor:\n            `Excon.defaults[:ssl_verify_peer] = false` (less secure).\n"
  2020-05-12 01:03:54 +0000 [warn]: suppressed same stacktrace
2020-05-12 01:06:15 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=9 next_retry_seconds=2020-05-12 01:10:23 +0000 chunk="5a56906eb5d3a871b8d3443702693564" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate) (OpenSSL::SSL::SSLError) Unable to verify certificate. This may be an issue with the remote host or with Excon. Excon has certificates bundled, but these can be customized:\n\n            `Excon.defaults[:ssl_ca_path] = path_to_certs`\n            `ENV['SSL_CERT_DIR'] = path_to_certs`\n            `Excon.defaults[:ssl_ca_file] = path_to_file`\n            `ENV['SSL_CERT_FILE'] = path_to_file`\n            `Excon.defaults[:ssl_verify_callback] = callback`\n                (see OpenSSL::SSL::SSLContext#verify_callback)\nor:\n            `Excon.defaults[:ssl_verify_peer] = false` (less secure).\n"
  2020-05-12 01:06:15 +0000 [warn]: suppressed same stacktrace
2020-05-12 01:06:15 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=10 next_retry_seconds=2020-05-12 01:11:45 +0000 chunk="5a56906dc2438626c68475e23c2fc181" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate) (OpenSSL::SSL::SSLError) Unable to verify certificate. This may be an issue with the remote host or with Excon. Excon has certificates bundled, but these can be customized:\n\n            `Excon.defaults[:ssl_ca_path] = path_to_certs`\n            `ENV['SSL_CERT_DIR'] = path_to_certs`\n            `Excon.defaults[:ssl_ca_file] = path_to_file`\n            `ENV['SSL_CERT_FILE'] = path_to_file`\n            `Excon.defaults[:ssl_verify_callback] = callback`\n                (see OpenSSL::SSL::SSLContext#verify_callback)\nor:\n            `Excon.defaults[:ssl_verify_peer] = false` (less secure).\n"
  2020-05-12 01:06:15 +0000 [warn]: suppressed same stacktrace
2020-05-12 01:11:45 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=11 next_retry_seconds=2020-05-12 01:17:02 +0000 chunk="5a56906eb5d3a871b8d3443702693564" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate) (OpenSSL::SSL::SSLError) Unable to verify certificate. This may be an issue with the remote host or with Excon. Excon has certificates bundled, but these can be customized:\n\n            `Excon.defaults[:ssl_ca_path] = path_to_certs`\n            `ENV['SSL_CERT_DIR'] = path_to_certs`\n            `Excon.defaults[:ssl_ca_file] = path_to_file`\n            `ENV['SSL_CERT_FILE'] = path_to_file`\n            `Excon.defaults[:ssl_verify_callback] = callback`\n                (see OpenSSL::SSL::SSLContext#verify_callback)\nor:\n            `Excon.defaults[:ssl_verify_peer] = false` (less secure).\n"
  2020-05-12 01:11:45 +0000 [warn]: suppressed same stacktrace
2020-05-12 01:11:50 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=12 next_retry_seconds=2020-05-12 01:16:33 +0000 chunk="5a56906dc2438626c68475e23c2fc181" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): SSL_connect returned=1 errno=0 state=error: certificate verify failed (unable to get local issuer certificate) (OpenSSL::SSL::SSLError) Unable to verify certificate. This may be an issue with the remote host or with Excon. Excon has certificates bundled, but these can be customized:\n\n            `Excon.defaults[:ssl_ca_path] = path_to_certs`\n            `ENV['SSL_CERT_DIR'] = path_to_certs`\n            `Excon.defaults[:ssl_ca_file] = path_to_file`\n            `ENV['SSL_CERT_FILE'] = path_to_file`\n            `Excon.defaults[:ssl_verify_callback] = callback`\n                (see OpenSSL::SSL::SSLContext#verify_callback)\nor:\n            `Excon.defaults[:ssl_verify_peer] = false` (less secure).\n"
  2020-05-12 01:11:50 +0000 [warn]: suppressed same stacktrace

EO log:
time="2020-05-12T01:02:36Z" level=info msg="Kibana status successfully updated"
time="2020-05-12T01:02:40Z" level=info msg="Timed out waiting for node elasticsearch-cdm-7copec07-1 to rollout"
time="2020-05-12T01:02:40Z" level=warning msg="Error occurred while updating node elasticsearch-cdm-7copec07-1: timed out waiting for the condition"
time="2020-05-12T01:02:40Z" level=warning msg="Unable to list existing templates in order to reconcile stale ones: Get https://elasticsearch.openshift-logging.svc:9200/_template: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")"
time="2020-05-12T01:02:40Z" level=error msg="Error creating index template for mapping app: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")"
{"level":"error","ts":1589245360.3139858,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-05-12T01:02:41Z" level=info msg="Beginning full cluster restart for cert redeploy on elasticsearch"
time="2020-05-12T01:02:41Z" level=warning msg="Unable to disable shard allocation: Put https://elasticsearch.openshift-logging.svc:9200/_cluster/settings: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")"
time="2020-05-12T01:02:41Z" level=warning msg="Unable to perform synchronized flush: Post https://elasticsearch.openshift-logging.svc:9200/_flush/synced: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")"
time="2020-05-12T01:02:41Z" level=warning msg="Unable to enable shard allocation: Put https://elasticsearch.openshift-logging.svc:9200/_cluster/settings: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")"
time="2020-05-12T01:02:41Z" level=warning msg="Unable to list existing templates in order to reconcile stale ones: Get https://elasticsearch.openshift-logging.svc:9200/_template: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")"
time="2020-05-12T01:02:41Z" level=error msg="Error creating index template for mapping app: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")"
{"level":"error","ts":1589245361.7059393,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-05-12T01:02:42Z" level=info msg="Waiting for cluster to complete recovery:  / green"
time="2020-05-12T01:02:43Z" level=warning msg="Unable to list existing templates in order to reconcile stale ones: Get https://elasticsearch.openshift-logging.svc:9200/_template: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")"
time="2020-05-12T01:02:43Z" level=error msg="Error creating index template for mapping app: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")"


Version-Release number of selected component (if applicable):
cluster version: 4.5.0-0.nightly-2020-05-10-180138
logging images are from 4.5.0-0.ci-2020-05-11-212141
manifests are copied from master branch

How reproducible:
Always

Steps to Reproduce:
1. deploy clusterlogging
2. scale down cluster-logging-operator to 0
3. delete secret/master-certs
4. scale up cluster-logging-operator to 1
5. wait until the CLO recreates secret/master-certs, then check the indices in ES and the logs in the Fluentd pod (rough oc equivalents are sketched below).
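Roughly the equivalent commands for the steps above (the deployment name, namespace, and the es_util helper in the ES container are assumptions, not taken from the original report):

$ oc -n openshift-logging scale deployment/cluster-logging-operator --replicas=0
$ oc -n openshift-logging delete secret master-certs
$ oc -n openshift-logging scale deployment/cluster-logging-operator --replicas=1
# once the CLO has recreated master-certs:
$ oc -n openshift-logging exec <es-pod> -c elasticsearch -- es_util --query=_cat/indices?v
$ oc -n openshift-logging logs <fluentd-pod>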

Actual results:
Fluentd couldn't connect to ES after secret/master-certs was regenerated.

Expected results:
The logging stack should keep working after the secrets are regenerated.

Additional info:

Comment 1 ewolinet 2020-05-12 22:12:01 UTC
Can you provide the output of `oc get elasticsearch elasticsearch -o yaml` ?

Comment 2 Qiaoling Tang 2020-05-13 06:10:22 UTC
  spec:
    indexManagement:
      mappings:
      - aliases:
        - app
        - logs-app
        name: app
        policyRef: app-policy
      - aliases:
        - infra
        - logs-infra
        name: infra
        policyRef: infra-policy
      - aliases:
        - audit
        - logs-audit
        name: audit
        policyRef: audit-policy
      policies:
      - name: app-policy
        phases:
          delete:
            minAge: 1d
          hot:
            actions:
              rollover:
                maxAge: 1h
        pollInterval: 15m
      - name: infra-policy
        phases:
          delete:
            minAge: 7d
          hot:
            actions:
              rollover:
                maxAge: 8h
        pollInterval: 15m
      - name: audit-policy
        phases:
          delete:
            minAge: 3w
          hot:
            actions:
              rollover:
                maxAge: 1d
        pollInterval: 15m
    managementState: Managed
    nodeSpec:
      resources:
        requests:
          memory: 2Gi
    nodes:
    - genUUID: c1sc6df6
      nodeCount: 3
      resources: {}
      roles:
      - client
      - data
      - master
      storage:
        size: 20Gi
        storageClassName: gp2
    redundancyPolicy: SingleRedundancy
  status:
    cluster:
      activePrimaryShards: 0
      activeShards: 0
      initializingShards: 0
      numDataNodes: 0
      numNodes: 0
      pendingTasks: 0
      relocatingShards: 0
      status: cluster health unknown
      unassignedShards: 0
    clusterHealth: ""
    conditions:
    - lastTransitionTime: "2020-05-13T06:09:25Z"
      status: "True"
      type: Restarting
    nodes:
    - deploymentName: elasticsearch-cdm-c1sc6df6-1
      upgradeStatus:
        underUpgrade: "True"
        upgradePhase: nodeRestarting
    - deploymentName: elasticsearch-cdm-c1sc6df6-2
      upgradeStatus:
        underUpgrade: "True"
        upgradePhase: nodeRestarting
    - deploymentName: elasticsearch-cdm-c1sc6df6-3
      upgradeStatus:
        upgradePhase: controllerUpdated
    pods:
      client:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-c1sc6df6-1-85c9c6d4f-4gxrh
        - elasticsearch-cdm-c1sc6df6-2-68f5555d8-bnwkx
        - elasticsearch-cdm-c1sc6df6-3-66fc769bc-mwvwk
      data:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-c1sc6df6-1-85c9c6d4f-4gxrh
        - elasticsearch-cdm-c1sc6df6-2-68f5555d8-bnwkx
        - elasticsearch-cdm-c1sc6df6-3-66fc769bc-mwvwk
      master:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-c1sc6df6-1-85c9c6d4f-4gxrh
        - elasticsearch-cdm-c1sc6df6-2-68f5555d8-bnwkx
        - elasticsearch-cdm-c1sc6df6-3-66fc769bc-mwvwk
    shardAllocationEnabled: shard allocation unknown

Comment 3 ewolinet 2020-05-27 21:22:31 UTC
I'm unable to reproduce this with the latest EO image.

1. Set clusterlogging/instance to Unmanaged
2. Delete secret/master-certs
3. Delete CLO pod
4. Set clusterlogging/instance to Managed
5. Observe all 3 of my ES pods get restarted
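Roughly the oc equivalents of the steps above (namespace and resource names assumed from elsewhere in this bug):

$ oc -n openshift-logging patch clusterlogging instance --type merge -p '{"spec":{"managementState":"Unmanaged"}}'
$ oc -n openshift-logging delete secret master-certs
$ oc -n openshift-logging delete pod <clo-pod>
$ oc -n openshift-logging patch clusterlogging instance --type merge -p '{"spec":{"managementState":"Managed"}}'
$ oc -n openshift-logging get pods -w    # all 3 ES pods should restart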

Can you please retest and confirm you still see this?

Comment 4 Anping Li 2020-05-29 11:25:03 UTC
Kibana can't connect to ES. I think the kibana pod must be restarted for the new secret to take effect.

What was updated?
The following secrets were updated: curator, elasticsearch, fluentd, kibana and kibana-proxy.
The ES pods were restarted within 20 minutes after master-certs was recreated.
The fluentd and kibana pods weren't restarted.

What is the status of each component?
Elasticsearch works well.
Fluentd can send logs to ES.
The curator can connect to ES after ES was restarted.
The elasticsearch-delete job can connect to ES after ES was restarted.
The elasticsearch-rollover job can connect to ES after ES was restarted.


Kibana cannot connect to ES. After restarting the kibana pod manually, Kibana can connect to ES.
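For reference, restarting the kibana pod manually can be done with something like the following (the label selector is an assumption, not taken from this bug):

$ oc -n openshift-logging delete pod -l component=kibana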

{"type":"log","@timestamp":"2020-05-29T06:46:22Z","tags":["error","elasticsearch","admin"],"pid":119,"message":"Request error, retrying\nGET https://elasticsearch.openshift-logging.svc.cluster.local:9200/.kibana/doc/config%3A6.8.1 => unable to verify the first certificate"}
Elasticsearch WARNING: 2020-05-29T06:46:32Z
  Unable to revive connection: https://elasticsearch.openshift-logging.svc.cluster.local:9200/

Elasticsearch WARNING: 2020-05-29T06:46:32Z
  No living connections

Elasticsearch WARNING: 2020-05-29T06:46:32Z
  Unable to revive connection: https://elasticsearch.openshift-logging.svc.cluster.local:9200/

Elasticsearch WARNING: 2020-05-29T06:46:32Z
  No living connections

Elasticsearch ERROR: 2020-05-29T06:46:32Z
  Error: Request error, retrying
  GET https://elasticsearch.openshift-logging.svc.cluster.local:9200/_opendistro/_security/api/permissionsinfo => unable to verify the first certificate
      at Log.error (/opt/app-root/src/node_modules/elasticsearch/src/lib/log.js:226:56)
      at checkRespForFailure (/opt/app-root/src/node_modules/elasticsearch/src/lib/transport.js:259:18)
      at HttpConnector.<anonymous> (/opt/app-root/src/node_modules/elasticsearch/src/lib/connectors/http.js:164:7)
      at ClientRequest.wrapper (/opt/app-root/src/node_modules/elasticsearch/node_modules/lodash/lodash.js:4935:19)
      at ClientRequest.emit (events.js:198:13)
      at TLSSocket.socketErrorListener (_http_client.js:401:9)
      at TLSSocket.emit (events.js:198:13)
      at emitErrorNT (internal/streams/destroy.js:91:8)
      at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
      at process._tickCallback (internal/process/next_tick.js:63:19)

Elasticsearch WARNING: 2020-05-29T06:46:33Z
  Unable to revive connection: https://elasticsearch.openshift-logging.svc.cluster.local:9200/

Elasticsearch WARNING: 2020-05-29T06:46:33Z
  No living connections

{"type":"error","@timestamp":"2020-05-29T06:46:32Z","tags":[],"pid":119,"level":"error","error":{"message":"No Living connections: No Living connections","name":"Error","stack":"Error: No Living connections\n    at sendReqWithConnection (/opt/app-root/src/node_modules/elasticsearch/src/lib/transport.js:226:15)\n    at next (/opt/app-root/src/node_modules/elasticsearch/src/lib/connection_pool.js:214:7)\n    at process._tickCallback (internal/process/next_tick.js:61:11)"},"url":{"protocol":null,"slashes":null,"auth":null,"host":null,"port":null,"hostname":null,"hash":null,"search":null,"query":{},"pathname":"/api/v1/restapiinfo","path":"/api/v1/restapiinfo","href":"/api/v1/restapiinfo"},"message":"No Living connections: No Living connections"}
Elasticsearch WARNING: 2020-05-29T06:46:51Z
  Unable to revive connection: https://elasticsearch.openshift-logging.svc.cluster.local:9200/

Comment 5 ewolinet 2020-06-02 18:33:38 UTC
The Kibana logs in https://bugzilla.redhat.com/show_bug.cgi?id=1834576#c4 are due to elasticsearch not being ready.
It may overlap with another bz.

Comment 6 ewolinet 2020-06-02 18:34:40 UTC
Can you please provide the output of the elasticsearch CR and the logs from EO?

Comment 7 Qiaoling Tang 2020-06-03 01:49:38 UTC
The ES has been updated, but the Kibana hasn't.

$ oc get pod
NAME                                            READY   STATUS      RESTARTS   AGE
cluster-logging-operator-98f5c5fd-hqbtg         1/1     Running     0          16m
elasticsearch-cdm-08a8icmo-1-77c885464f-znm4v   2/2     Running     0          9m10s
elasticsearch-cdm-08a8icmo-2-55644885bd-d7tw4   2/2     Running     0          9m10s
elasticsearch-cdm-08a8icmo-3-7998cb4dc-5rj59    2/2     Running     0          9m10s
elasticsearch-delete-app-1591148700-xs5vf       0/1     Completed   0          102s
elasticsearch-delete-infra-1591148700-cj8xg     0/1     Completed   0          102s
elasticsearch-rollover-app-1591148700-fsxsp     0/1     Completed   0          102s
elasticsearch-rollover-infra-1591148700-6p4t8   0/1     Completed   0          102s
fluentd-5jqg9                                   1/1     Running     0          25m
fluentd-7mt9z                                   1/1     Running     0          25m
fluentd-9x9qt                                   1/1     Running     0          25m
fluentd-gwb6b                                   1/1     Running     0          25m
fluentd-pzg6s                                   1/1     Running     0          25m
fluentd-z5x6z                                   1/1     Running     0          25m
kibana-7f5df6fd-9l89g                           2/2     Running     0          24m



$ oc get elasticsearch -oyaml
apiVersion: v1
items:
- apiVersion: logging.openshift.io/v1
  kind: Elasticsearch
  metadata:
    creationTimestamp: "2020-06-03T01:21:29Z"
    generation: 18
    managedFields:
    - apiVersion: logging.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:metadata:
          f:ownerReferences:
            .: {}
            k:{"uid":"d1f9a90c-c76f-41fa-8c37-2ec7bdbeae87"}:
              .: {}
              f:apiVersion: {}
              f:controller: {}
              f:kind: {}
              f:name: {}
              f:uid: {}
        f:spec:
          .: {}
          f:indexManagement:
            .: {}
            f:mappings: {}
            f:policies: {}
          f:managementState: {}
          f:nodeSpec:
            .: {}
            f:resources:
              .: {}
              f:requests:
                .: {}
                f:memory: {}
          f:redundancyPolicy: {}
        f:status:
          .: {}
          f:cluster:
            .: {}
            f:initializingShards: {}
            f:pendingTasks: {}
            f:unassignedShards: {}
          f:clusterHealth: {}
          f:pods: {}
      manager: cluster-logging-operator
      operation: Update
      time: "2020-06-03T01:21:29Z"
    - apiVersion: logging.openshift.io/v1
      fieldsType: FieldsV1
      fieldsV1:
        f:spec:
          f:nodes: {}
        f:status:
          f:cluster:
            f:activePrimaryShards: {}
            f:activeShards: {}
            f:numDataNodes: {}
            f:numNodes: {}
            f:relocatingShards: {}
            f:status: {}
          f:conditions: {}
          f:nodes: {}
          f:pods:
            f:client:
              .: {}
              f:failed: {}
              f:notReady: {}
              f:ready: {}
            f:data:
              .: {}
              f:failed: {}
              f:notReady: {}
              f:ready: {}
            f:master:
              .: {}
              f:failed: {}
              f:notReady: {}
              f:ready: {}
          f:shardAllocationEnabled: {}
      manager: elasticsearch-operator
      operation: Update
      time: "2020-06-03T01:38:17Z"
    name: elasticsearch
    namespace: openshift-logging
    ownerReferences:
    - apiVersion: logging.openshift.io/v1
      controller: true
      kind: ClusterLogging
      name: instance
      uid: d1f9a90c-c76f-41fa-8c37-2ec7bdbeae87
    resourceVersion: "72916"
    selfLink: /apis/logging.openshift.io/v1/namespaces/openshift-logging/elasticsearches/elasticsearch
    uid: e69379ff-e00a-4834-b87a-513e5dc84895
  spec:
    indexManagement:
      mappings:
      - aliases:
        - app
        - logs.app
        name: app
        policyRef: app-policy
      - aliases:
        - infra
        - logs.infra
        name: infra
        policyRef: infra-policy
      policies:
      - name: app-policy
        phases:
          delete:
            minAge: 1d
          hot:
            actions:
              rollover:
                maxAge: 1h
        pollInterval: 15m
      - name: infra-policy
        phases:
          delete:
            minAge: 7d
          hot:
            actions:
              rollover:
                maxAge: 8h
        pollInterval: 15m
    managementState: Managed
    nodeSpec:
      resources:
        requests:
          memory: 4Gi
    nodes:
    - genUUID: 08a8icmo
      nodeCount: 3
      resources: {}
      roles:
      - client
      - data
      - master
      storage:
        size: 20Gi
        storageClassName: gp2
    redundancyPolicy: SingleRedundancy
  status:
    cluster:
      activePrimaryShards: 11
      activeShards: 22
      initializingShards: 0
      numDataNodes: 3
      numNodes: 3
      pendingTasks: 0
      relocatingShards: 0
      status: green
      unassignedShards: 0
    clusterHealth: ""
    conditions: []
    nodes:
    - deploymentName: elasticsearch-cdm-08a8icmo-1
      upgradeStatus:
        upgradePhase: controllerUpdated
    - deploymentName: elasticsearch-cdm-08a8icmo-2
      upgradeStatus:
        upgradePhase: controllerUpdated
    - deploymentName: elasticsearch-cdm-08a8icmo-3
      upgradeStatus:
        upgradePhase: controllerUpdated
    pods:
      client:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-08a8icmo-1-77c885464f-znm4v
        - elasticsearch-cdm-08a8icmo-2-55644885bd-d7tw4
        - elasticsearch-cdm-08a8icmo-3-7998cb4dc-5rj59
      data:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-08a8icmo-1-77c885464f-znm4v
        - elasticsearch-cdm-08a8icmo-2-55644885bd-d7tw4
        - elasticsearch-cdm-08a8icmo-3-7998cb4dc-5rj59
      master:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-08a8icmo-1-77c885464f-znm4v
        - elasticsearch-cdm-08a8icmo-2-55644885bd-d7tw4
        - elasticsearch-cdm-08a8icmo-3-7998cb4dc-5rj59
    shardAllocationEnabled: all
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""



$ oc logs -n openshift-operators-redhat elasticsearch-operator-57bd69d85-t45lg 
{"level":"info","ts":1591147265.5221949,"logger":"cmd","msg":"Go Version: go1.13.4"}
{"level":"info","ts":1591147265.5222158,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1591147265.5222197,"logger":"cmd","msg":"Version of operator-sdk: v0.8.2"}
{"level":"info","ts":1591147265.5232148,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1591147265.6868517,"logger":"leader","msg":"No pre-existing lock was found."}
{"level":"info","ts":1591147265.6928344,"logger":"leader","msg":"Became the leader."}
{"level":"info","ts":1591147265.8200235,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1591147265.820503,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibana-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1591147265.8206809,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"elasticsearch-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1591147265.8208663,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"proxyconfig-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1591147265.8209927,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibanasecret-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1591147265.8211472,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"trustedcabundle-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1591147266.0386891,"logger":"metrics","msg":"Metrics Service object created","Service.Name":"elasticsearch-operator","Service.Namespace":"openshift-operators-redhat"}
{"level":"info","ts":1591147266.0387192,"logger":"cmd","msg":"This operator no longer honors the image specified by the custom resources so that it is able to properly coordinate the configuration with the image."}
{"level":"info","ts":1591147266.0387254,"logger":"cmd","msg":"Starting the Cmd."}
W0603 01:21:06.195090       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:126: watch of *v1.Kibana ended with: too old resource version: 64907 (64908)
{"level":"info","ts":1591147266.738974,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"trustedcabundle-controller"}
{"level":"info","ts":1591147266.7390163,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"proxyconfig-controller"}
{"level":"info","ts":1591147266.7390492,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"kibanasecret-controller"}
{"level":"info","ts":1591147266.7390227,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"kibana-controller"}
{"level":"info","ts":1591147266.7390113,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"elasticsearch-controller"}
{"level":"info","ts":1591147266.8391533,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"trustedcabundle-controller","worker count":1}
{"level":"info","ts":1591147266.8391838,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"kibana-controller","worker count":1}
{"level":"info","ts":1591147266.8391578,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"proxyconfig-controller","worker count":1}
{"level":"info","ts":1591147266.8391824,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"kibanasecret-controller","worker count":1}
{"level":"info","ts":1591147266.8391652,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"elasticsearch-controller","worker count":1}
{"level":"error","ts":1591147266.8393264,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1591147267.8395433,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1591147268.8397605,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1591147269.8399775,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1591147270.8402362,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1591147271.8404636,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1591147272.8407023,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1591147273.8409078,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1591147274.8411415,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1591147276.1213143,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1591147278.6814668,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
{"level":"error","ts":1591147283.8016648,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"proxyconfig-controller","request":"/cluster","error":"skipping proxy config reconciliation in \"\": failed to find elasticsearch instance in \"\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-06-03T01:21:29Z" level=error msg="Operator unable to read local file to get contents: open /tmp/ocp-eo/ca.crt: no such file or directory"
time="2020-06-03T01:21:29Z" level=error msg="Operator unable to read local file to get contents: open /tmp/ocp-eo/ca.crt: no such file or directory"
time="2020-06-03T01:21:59Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
{"level":"error","ts":1591147319.6436105,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"kibana-controller","request":"openshift-logging/kibana","error":"Did not receive hashvalue for trusted CA value","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-06-03T01:22:15Z" level=warning msg="unable to get cluster node count. E: Get https://elasticsearch.openshift-logging.svc:9200/_cluster/health: dial tcp 172.30.187.243:9200: i/o timeout\r\n"
time="2020-06-03T01:22:30Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-03T01:22:31Z" level=info msg="Updating status of Kibana"
time="2020-06-03T01:22:31Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:22:31Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:23:01Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-03T01:23:01Z" level=info msg="Updating status of Kibana"
time="2020-06-03T01:23:01Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:23:01Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:23:17Z" level=warning msg="unable to get cluster node count. E: Get https://elasticsearch.openshift-logging.svc:9200/_cluster/health: dial tcp 172.30.187.243:9200: i/o timeout\r\n"
time="2020-06-03T01:23:31Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-03T01:23:31Z" level=info msg="Updating status of Kibana"
time="2020-06-03T01:23:31Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:23:31Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:24:01Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-03T01:24:01Z" level=info msg="Updating status of Kibana"
time="2020-06-03T01:24:01Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:24:01Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:24:17Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:24:17Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:24:18Z" level=warning msg="unable to get cluster node count. E: Get https://elasticsearch.openshift-logging.svc:9200/_cluster/health: dial tcp 172.30.187.243:9200: i/o timeout\r\n"
time="2020-06-03T01:24:31Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:24:32Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:24:32Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:25:02Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:25:02Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:25:02Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:25:32Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:25:32Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:25:32Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:26:02Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:26:02Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:26:03Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:26:33Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:26:33Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:26:33Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:27:03Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:27:03Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:27:03Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:27:33Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:27:33Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:27:33Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:28:04Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:28:04Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:28:04Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:28:34Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:28:34Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:28:34Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:29:04Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:29:04Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:29:04Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:29:34Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:29:35Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:29:35Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:30:05Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:30:05Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:30:05Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:30:21Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:30:21Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-03T01:30:22Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:30:35Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:30:35Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-03T01:30:35Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:30:45Z" level=warning msg="unable to get cluster node count. E: Get https://elasticsearch.openshift-logging.svc:9200/_cluster/health: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")\r\n"
time="2020-06-03T01:30:46Z" level=warning msg="unable to get cluster node count. E: Get https://elasticsearch.openshift-logging.svc:9200/_cluster/health: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")\r\n"
time="2020-06-03T01:30:47Z" level=warning msg="unable to get cluster node count. E: Get https://elasticsearch.openshift-logging.svc:9200/_cluster/health: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")\r\n"
time="2020-06-03T01:30:47Z" level=warning msg="Unable to list existing templates in order to reconcile stale ones: Get https://elasticsearch.openshift-logging.svc:9200/_template: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")"
time="2020-06-03T01:30:47Z" level=error msg="Error creating index template for mapping app: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")"
{"level":"error","ts":1591147847.8516762,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-06-03T01:30:49Z" level=info msg="Beginning full cluster restart for cert redeploy on elasticsearch"
time="2020-06-03T01:30:49Z" level=warning msg="Unable to set shard allocation to primaries: Put https://elasticsearch.openshift-logging.svc:9200/_cluster/settings: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")"
time="2020-06-03T01:30:49Z" level=warning msg="Unable to perform synchronized flush: Post https://elasticsearch.openshift-logging.svc:9200/_flush/synced: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")"
time="2020-06-03T01:30:49Z" level=warning msg="Unable to get cluster size prior to restart for elasticsearch-cdm-08a8icmo-1"
time="2020-06-03T01:30:49Z" level=warning msg="Unable to get cluster size prior to restart for elasticsearch-cdm-08a8icmo-2"
time="2020-06-03T01:30:49Z" level=warning msg="Unable to get cluster size prior to restart for elasticsearch-cdm-08a8icmo-3"
time="2020-06-03T01:30:49Z" level=warning msg="Unable to list existing templates in order to reconcile stale ones: Get https://elasticsearch.openshift-logging.svc:9200/_template: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")"
time="2020-06-03T01:30:49Z" level=error msg="Error creating index template for mapping app: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")"
{"level":"error","ts":1591147849.2651162,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: x509: certificate signed by unknown authority (possibly because of \"crypto/rsa: verification error\" while trying to verify candidate authority certificate \"openshift-cluster-logging-signer\")","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-06-03T01:31:05Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:31:05Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-03T01:31:05Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:31:36Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:31:36Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-03T01:31:36Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:31:50Z" level=info msg="Timed out waiting for elasticsearch-cdm-08a8icmo-1 to leave the cluster"
time="2020-06-03T01:32:06Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:32:36Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-03T01:32:36Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:33:06Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:33:33Z" level=info msg="Timed out waiting for elasticsearch-cdm-08a8icmo-2 to leave the cluster"
time="2020-06-03T01:33:36Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-03T01:33:36Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:34:06Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:34:36Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-03T01:34:36Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:35:05Z" level=info msg="Timed out waiting for elasticsearch-cdm-08a8icmo-3 to leave the cluster"
time="2020-06-03T01:35:06Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:35:36Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-03T01:35:36Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:36:06Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:36:36Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-03T01:36:36Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:37:05Z" level=warning msg="Unable to list existing templates in order to reconcile stale ones: Get https://elasticsearch.openshift-logging.svc:9200/_template: dial tcp 172.30.187.243:9200: i/o timeout"
time="2020-06-03T01:37:06Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:37:35Z" level=error msg="Error creating index template for mapping app: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: dial tcp 172.30.187.243:9200: i/o timeout"
{"level":"error","ts":1591148255.6102886,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: dial tcp 172.30.187.243:9200: i/o timeout","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-06-03T01:37:36Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-03T01:37:37Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:38:06Z" level=warning msg="Unable to enable shard allocation: Put https://elasticsearch.openshift-logging.svc:9200/_cluster/settings: dial tcp 172.30.187.243:9200: i/o timeout"
time="2020-06-03T01:38:07Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:38:14Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:38:15Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:38:15Z" level=info msg="Completed full cluster restart for cert redeploy on elasticsearch"
time="2020-06-03T01:38:45Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:38:45Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:38:45Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:39:15Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:39:15Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:39:15Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:39:45Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:39:45Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:39:46Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:40:16Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:40:16Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:40:16Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:40:46Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:40:46Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:40:46Z" level=info msg="Kibana status successfully updated"
W0603 01:41:04.321429       1 reflector.go:270] sigs.k8s.io/controller-runtime/pkg/cache/internal/informers_map.go:196: watch of *v1.Route ended with: The resourceVersion for the provided watch is too old.
time="2020-06-03T01:41:16Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:41:16Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:41:17Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:41:47Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:41:47Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:41:47Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:42:17Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:42:17Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:42:17Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:42:47Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:42:47Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:42:47Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:43:18Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:43:18Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:43:18Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:43:48Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:43:48Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:43:48Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:44:18Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:44:18Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:44:18Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:44:49Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:44:49Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:44:49Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:45:19Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:45:19Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:45:19Z" level=info msg="Kibana status successfully updated"
time="2020-06-03T01:45:49Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-03T01:45:49Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-03T01:45:49Z" level=info msg="Kibana status successfully updated"

Comment 12 Anping Li 2020-06-10 13:16:49 UTC
Verified. The Kibana pod is restarted after the secret is updated, and Kibana works as expected.
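
For reference, one way to exercise this is to delete secret/master-certs, let it be recreated with new certificates, and then watch for Kibana to be redeployed. The sequence below is a suggested check, not the exact steps used here, and it assumes the operator regenerates the deleted secret; the label selector is also an assumption:

$ oc -n openshift-logging delete secret master-certs          # secret should be recreated with new certs
$ oc -n openshift-logging get pods -l logging-infra=kibana -w # Kibana pods should be redeployed
$ oc -n openshift-logging logs <a-fluentd-pod> | tail         # no new ECONNREFUSED or x509 errors expected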

Comment 14 errata-xmlrpc 2020-10-27 15:58:59 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

