Bug 1842865 - Sometimes the deploy/kibana couldn't be created.
Summary: Sometimes the deploy/kibana couldn't be created.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.6.0
Assignee: Vimal Kumar
QA Contact: Qiaoling Tang
URL:
Whiteboard:
Depends On:
Blocks: 1844464
 
Reported: 2020-06-02 09:14 UTC by Qiaoling Tang
Modified: 2020-10-27 16:04 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-10-27 16:03:47 UTC
Target Upstream Version:




Links:
GitHub: openshift/elasticsearch-operator pull 379 (closed), "Bug 1842865: Fix Reconcile loop return values", last updated 2021-02-15 07:09:58 UTC
Red Hat Product Errata: RHBA-2020:4196, last updated 2020-10-27 16:04:20 UTC

Description Qiaoling Tang 2020-06-02 09:14:03 UTC
Description of problem:
When deploying logging via OLM, the deploy/kibana sometimes cannot be created. EO logs:

time="2020-06-02T08:24:46Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-02T08:24:46Z" level=info msg="Updating status of Kibana"
time="2020-06-02T08:24:46Z" level=info msg="Kibana status successfully updated"
time="2020-06-02T08:24:46Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-02T08:24:46Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-02T08:24:46Z" level=info msg="Updating status of Kibana"
time="2020-06-02T08:24:46Z" level=info msg="Kibana status successfully updated"
time="2020-06-02T08:24:46Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-02T08:24:46Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
time="2020-06-02T08:24:46Z" level=info msg="Updating status of Kibana"
time="2020-06-02T08:24:46Z" level=info msg="Kibana status successfully updated"
time="2020-06-02T08:25:16Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-02T08:25:16Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-02T08:25:16Z" level=info msg="Updating status of Kibana"
time="2020-06-02T08:25:16Z" level=info msg="Kibana status successfully updated"
time="2020-06-02T08:25:16Z" level=info msg="skipping deleting kibana 5 image because kibana 6 installed"
time="2020-06-02T08:25:16Z" level=info msg="migration completed: re-indexing \".kibana\" to \".kibana-6\""
time="2020-06-02T08:25:16Z" level=info msg="Kibana status successfully updated"
time="2020-06-02T08:25:42Z" level=info msg="Flushing nodes for openshift-logging/elasticsearch"
{"level":"error","ts":1591086346.5515056,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"kibana-controller","request":"openshift-logging/kibana","error":"skipping kibana reconciliation in \"openshift-logging\": failed to find elasticsearch instance in \"openshift-logging\": empty result set","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-06-02T08:26:08Z" level=info msg="Flushing nodes for openshift-logging/elasticsearch"
time="2020-06-02T08:26:17Z" level=info msg="skipping kibana migrations: no index \".kibana\" available"
{"level":"error","ts":1591086377.2394996,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"kibana-controller","request":"openshift-logging/kibana","error":"Did not receive hashvalue for trusted CA value","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-06-02T08:26:19Z" level=warning msg="unable to get cluster node count. E: Get https://elasticsearch.openshift-logging.svc:9200/_cluster/health: dial tcp 172.30.85.52:9200: connect: connection refused\r\n"
time="2020-06-02T08:26:24Z" level=warning msg="Unable to list existing templates in order to reconcile stale ones: Get https://elasticsearch.openshift-logging.svc:9200/_template: dial tcp 172.30.85.52:9200: connect: connection refused"
time="2020-06-02T08:26:25Z" level=error msg="Error creating index template for mapping app: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: dial tcp 172.30.85.52:9200: connect: connection refused"
{"level":"error","ts":1591086385.7717934,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: Put https://elasticsearch.openshift-logging.svc:9200/_template/ocp-gen-app: dial tcp 172.30.85.52:9200: connect: connection refused","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-06-02T08:26:58Z" level=warning msg="unable to get cluster node count. E: Get https://elasticsearch.openshift-logging.svc:9200/_cluster/health: dial tcp 172.30.85.52:9200: i/o timeout\r\n"


$ oc get pod
NAME                                            READY   STATUS    RESTARTS   AGE
cluster-logging-operator-98f5c5fd-jmbbz         1/1     Running   0          21m
elasticsearch-cdm-15wxz44f-1-5b9b545bb4-cv5xc   2/2     Running   0          2m38s
fluentd-42mwg                                   0/1     Pending   0          2m37s
fluentd-g4k72                                   1/1     Running   0          2m37s
fluentd-g85xs                                   0/1     Pending   0          2m37s
fluentd-mxlq4                                   1/1     Running   0          2m37s
fluentd-ppfp5                                   1/1     Running   0          2m37s
fluentd-ptc2d                                   0/1     Pending   0          2m37s

$ oc get kibana
NAME     AGE
kibana   2m46s

$ oc get deploy
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
cluster-logging-operator       1/1     1            1           4h19m
elasticsearch-cdm-15wxz44f-1   1/1     1            1           2m50s

$ oc get kibana kibana -oyaml
apiVersion: logging.openshift.io/v1
kind: Kibana
metadata:
  creationTimestamp: "2020-06-02T08:26:17Z"
  generation: 1
  managedFields:
  - apiVersion: logging.openshift.io/v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:ownerReferences:
          .: {}
          k:{"uid":"bd91d938-5940-43d2-9a95-9b66c899c932"}:
            .: {}
            f:apiVersion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        .: {}
        f:managementState: {}
        f:proxy:
          .: {}
          f:resources:
            .: {}
            f:limits:
              .: {}
              f:memory: {}
            f:requests:
              .: {}
              f:cpu: {}
              f:memory: {}
        f:replicas: {}
        f:resources:
          .: {}
          f:limits:
            .: {}
            f:memory: {}
          f:requests:
            .: {}
            f:cpu: {}
            f:memory: {}
    manager: cluster-logging-operator
    operation: Update
    time: "2020-06-02T08:26:17Z"
  name: kibana
  namespace: openshift-logging
  ownerReferences:
  - apiVersion: logging.openshift.io/v1
    controller: true
    kind: ClusterLogging
    name: instance
    uid: bd91d938-5940-43d2-9a95-9b66c899c932
  resourceVersion: "618711"
  selfLink: /apis/logging.openshift.io/v1/namespaces/openshift-logging/kibanas/kibana
  uid: 479683ff-cca3-4191-9bc3-965a179b970a
spec:
  managementState: Managed
  proxy:
    resources:
      limits:
        memory: 256Mi
      requests:
        cpu: 100m
        memory: 256Mi
  replicas: 1
  resources:
    limits:
      memory: 736Mi
    requests:
      cpu: 100m
      memory: 736Mi

$ oc exec elasticsearch-cdm-15wxz44f-1-5b9b545bb4-cv5xc -- indices
Defaulting container name to elasticsearch.
Use 'oc describe pod/elasticsearch-cdm-15wxz44f-1-5b9b545bb4-cv5xc -n openshift-logging' to see all of the containers in this pod.
Tue Jun  2 08:29:37 UTC 2020
health status index        uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   infra-000001 nRaPiA5zQRyhrguBuWeJnQ   1   0      13526            0          8              8
green  open   app-000001   tk2ZutJ2TlKrg-Pn736EUQ   1   0          0            0          0              0
green  open   audit-000001 djsc7NkVS5G6jnimLhh2GQ   1   0          0            0          0              0
green  open   .security    W5sVZgbfRnKfWUM_EHxx1Q   1   0          5            0          0              0


$ oc logs -c elasticsearch elasticsearch-cdm-15wxz44f-1-5b9b545bb4-cv5xc
[2020-06-02 08:26:20,333][INFO ][container.run            ] Begin Elasticsearch startup script
[2020-06-02 08:26:20,338][INFO ][container.run            ] Comparing the specified RAM to the maximum recommended for Elasticsearch...
[2020-06-02 08:26:20,339][INFO ][container.run            ] Inspecting the maximum RAM available...
[2020-06-02 08:26:20,342][INFO ][container.run            ] ES_JAVA_OPTS: ' -Xms1024m -Xmx1024m'
[2020-06-02 08:26:20,343][INFO ][container.run            ] Copying certs from /etc/openshift/elasticsearch/secret to /etc/elasticsearch//secret
[2020-06-02 08:26:20,351][INFO ][container.run            ] Building required jks files and truststore
Importing keystore /etc/elasticsearch//secret/admin.p12 to /etc/elasticsearch//secret/admin.jks...
Entry for alias 1 successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch//secret/admin.jks -destkeystore /etc/elasticsearch//secret/admin.jks -deststoretype pkcs12".

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch//secret/admin.jks -destkeystore /etc/elasticsearch//secret/admin.jks -deststoretype pkcs12".
Certificate was added to keystore

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch//secret/admin.jks -destkeystore /etc/elasticsearch//secret/admin.jks -deststoretype pkcs12".
Importing keystore /etc/elasticsearch//secret/elasticsearch.p12 to /etc/elasticsearch//secret/elasticsearch.jks...
Entry for alias 1 successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch//secret/elasticsearch.jks -destkeystore /etc/elasticsearch//secret/elasticsearch.jks -deststoretype pkcs12".

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch//secret/elasticsearch.jks -destkeystore /etc/elasticsearch//secret/elasticsearch.jks -deststoretype pkcs12".
Certificate was added to keystore

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch//secret/elasticsearch.jks -destkeystore /etc/elasticsearch//secret/elasticsearch.jks -deststoretype pkcs12".
Importing keystore /etc/elasticsearch//secret/logging-es.p12 to /etc/elasticsearch//secret/logging-es.jks...
Entry for alias 1 successfully imported.
Import command completed:  1 entries successfully imported, 0 entries failed or cancelled

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch//secret/logging-es.jks -destkeystore /etc/elasticsearch//secret/logging-es.jks -deststoretype pkcs12".

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch//secret/logging-es.jks -destkeystore /etc/elasticsearch//secret/logging-es.jks -deststoretype pkcs12".
Certificate was added to keystore

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore /etc/elasticsearch//secret/logging-es.jks -destkeystore /etc/elasticsearch//secret/logging-es.jks -deststoretype pkcs12".
Certificate was added to keystore
Certificate was added to keystore
[2020-06-02 08:26:23,312][INFO ][container.run            ] Setting heap dump location /elasticsearch/persistent/heapdump.hprof
[2020-06-02 08:26:23,313][INFO ][container.run            ] Checking if Elasticsearch is ready
[2020-06-02 08:26:23,313][INFO ][container.run            ] ES_JAVA_OPTS: ' -Xms1024m -Xmx1024m -XX:HeapDumpPath=/elasticsearch/persistent/heapdump.hprof -Xloggc:/elasticsearch/persistent/elasticsearch/logs/gc.log -XX:ErrorFile=/elasticsearch/persistent/elasticsearch/logs/error.log'
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
[2020-06-02T08:26:25,494][WARN ][o.e.c.l.LogConfigurator  ] [elasticsearch-cdm-15wxz44f-1] Some logging configurations have %marker but don't have %node_name. We will automatically add %node_name to the pattern to ease the migration for users who customize log4j2.properties but will stop this behavior in 7.0. You should manually replace `%node_name` with `[%node_name]%marker ` in these locations:
  /etc/elasticsearch/log4j2.properties
[2020-06-02T08:26:25,774][INFO ][o.e.e.NodeEnvironment    ] [elasticsearch-cdm-15wxz44f-1] using [1] data paths, mounts [[/elasticsearch/persistent (/dev/mapper/coreos-luks-root-nocrypt)]], net usable_space [36.8gb], net total_space [49.4gb], types [xfs]
[2020-06-02T08:26:25,775][INFO ][o.e.e.NodeEnvironment    ] [elasticsearch-cdm-15wxz44f-1] heap size [1015.6mb], compressed ordinary object pointers [true]
[2020-06-02T08:26:25,777][INFO ][o.e.n.Node               ] [elasticsearch-cdm-15wxz44f-1] node name [elasticsearch-cdm-15wxz44f-1], node ID [-txDehvUQpeeq7aXUDm_pg]
[2020-06-02T08:26:25,777][INFO ][o.e.n.Node               ] [elasticsearch-cdm-15wxz44f-1] version[6.8.1-SNAPSHOT], pid[1], build[oss/zip/Unknown/Unknown], OS[Linux/4.18.0-147.8.1.el8_1.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_252/25.252-b09]
[2020-06-02T08:26:25,777][INFO ][o.e.n.Node               ] [elasticsearch-cdm-15wxz44f-1] JVM arguments [-XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/tmp/elasticsearch-6595364353102486308, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -XX:+UnlockExperimentalVMOptions, -XX:+UseCGroupMemoryLimitForHeap, -XX:MaxRAMFraction=2, -XX:InitialRAMFraction=2, -XX:MinRAMFraction=2, -Xms1024m, -Xmx1024m, -XX:HeapDumpPath=/elasticsearch/persistent/heapdump.hprof, -Xloggc:/elasticsearch/persistent/elasticsearch/logs/gc.log, -XX:ErrorFile=/elasticsearch/persistent/elasticsearch/logs/error.log, -Djdk.tls.ephemeralDHKeySize=2048, -Des.path.home=/usr/share/elasticsearch, -Des.path.conf=/etc/elasticsearch, -Des.distribution.flavor=oss, -Des.distribution.type=zip]
[2020-06-02T08:26:25,777][WARN ][o.e.n.Node               ] [elasticsearch-cdm-15wxz44f-1] version [6.8.1-SNAPSHOT] is a pre-release version of Elasticsearch and is not suitable for production
[2020-06-02T08:26:26,817][INFO ][o.e.p.p.PrometheusExporterPlugin] [elasticsearch-cdm-15wxz44f-1] starting Prometheus exporter plugin
[2020-06-02T08:26:27,068][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] ES Config path is /etc/elasticsearch
[2020-06-02T08:26:27,137][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [elasticsearch-cdm-15wxz44f-1] OpenSSL not available (this is not an error, we simply fallback to built-in JDK SSL) because of java.lang.ClassNotFoundException: io.netty.internal.tcnative.SSL
[2020-06-02T08:26:27,232][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [elasticsearch-cdm-15wxz44f-1] Config directory is /etc/elasticsearch/, from there the key- and truststore files are resolved relatively
[2020-06-02T08:26:27,236][INFO ][c.a.o.s.s.u.SSLCertificateHelper] [elasticsearch-cdm-15wxz44f-1] No alias given, use the first one: elasticsearch
[2020-06-02T08:26:27,263][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [elasticsearch-cdm-15wxz44f-1] HTTPS client auth mode OPTIONAL
[2020-06-02T08:26:27,264][INFO ][c.a.o.s.s.u.SSLCertificateHelper] [elasticsearch-cdm-15wxz44f-1] No alias given, use the first one: logging-es
[2020-06-02T08:26:27,266][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [elasticsearch-cdm-15wxz44f-1] TLS Transport Client Provider : JDK
[2020-06-02T08:26:27,267][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [elasticsearch-cdm-15wxz44f-1] TLS Transport Server Provider : JDK
[2020-06-02T08:26:27,267][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [elasticsearch-cdm-15wxz44f-1] TLS HTTP Provider             : JDK
[2020-06-02T08:26:27,267][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [elasticsearch-cdm-15wxz44f-1] Enabled TLS protocols for transport layer : [TLSv1.1, TLSv1.2]
[2020-06-02T08:26:27,267][INFO ][c.a.o.s.s.DefaultOpenDistroSecurityKeyStore] [elasticsearch-cdm-15wxz44f-1] Enabled TLS protocols for HTTP layer      : [TLSv1.1, TLSv1.2]
[2020-06-02T08:26:27,642][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] Clustername: elasticsearch
[2020-06-02T08:26:27,708][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] Directory /etc/elasticsearch has insecure file permissions (should be 0700)
[2020-06-02T08:26:27,709][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] Directory /etc/elasticsearch/scripts has insecure file permissions (should be 0700)
[2020-06-02T08:26:27,709][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] Directory /etc/elasticsearch/secret has insecure file permissions (should be 0700)
[2020-06-02T08:26:27,709][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] File /etc/elasticsearch/secret/admin.p12 has insecure file permissions (should be 0600)
[2020-06-02T08:26:27,709][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] File /etc/elasticsearch/secret/admin.jks has insecure file permissions (should be 0600)
[2020-06-02T08:26:27,709][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] File /etc/elasticsearch/secret/elasticsearch.p12 has insecure file permissions (should be 0600)
[2020-06-02T08:26:27,709][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] File /etc/elasticsearch/secret/searchguard.key has insecure file permissions (should be 0600)
[2020-06-02T08:26:27,709][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] File /etc/elasticsearch/secret/logging-es.p12 has insecure file permissions (should be 0600)
[2020-06-02T08:26:27,709][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] File /etc/elasticsearch/secret/key has insecure file permissions (should be 0600)
[2020-06-02T08:26:27,709][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] File /etc/elasticsearch/secret/truststore has insecure file permissions (should be 0600)
[2020-06-02T08:26:27,710][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] File /etc/elasticsearch/secret/searchguard.truststore has insecure file permissions (should be 0600)
[2020-06-02T08:26:27,710][WARN ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] File /etc/elasticsearch/index_settings has insecure file permissions (should be 0600)
[2020-06-02T08:26:27,735][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded module [aggs-matrix-stats]
[2020-06-02T08:26:27,735][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded module [analysis-common]
[2020-06-02T08:26:27,735][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded module [ingest-common]
[2020-06-02T08:26:27,735][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded module [ingest-user-agent]
[2020-06-02T08:26:27,735][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded module [lang-expression]
[2020-06-02T08:26:27,735][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded module [lang-mustache]
[2020-06-02T08:26:27,735][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded module [lang-painless]
[2020-06-02T08:26:27,736][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded module [mapper-extras]
[2020-06-02T08:26:27,736][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded module [parent-join]
[2020-06-02T08:26:27,736][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded module [percolator]
[2020-06-02T08:26:27,736][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded module [rank-eval]
[2020-06-02T08:26:27,736][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded module [reindex]
[2020-06-02T08:26:27,736][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded module [repository-url]
[2020-06-02T08:26:27,736][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded module [transport-netty4]
[2020-06-02T08:26:27,736][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded module [tribe]
[2020-06-02T08:26:27,736][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded plugin [opendistro_security]
[2020-06-02T08:26:27,736][INFO ][o.e.p.PluginsService     ] [elasticsearch-cdm-15wxz44f-1] loaded plugin [prometheus-exporter]
[2020-06-02T08:26:27,763][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] Disabled https compression by default to mitigate BREACH attacks. You can enable it by setting 'http.compression: true' in elasticsearch.yml
[2020-06-02T08:26:31,291][INFO ][c.a.o.s.a.i.AuditLogImpl ] [elasticsearch-cdm-15wxz44f-1] Configured categories on rest layer to ignore: [AUTHENTICATED, GRANTED_PRIVILEGES]
[2020-06-02T08:26:31,291][INFO ][c.a.o.s.a.i.AuditLogImpl ] [elasticsearch-cdm-15wxz44f-1] Configured categories on transport layer to ignore: [AUTHENTICATED, GRANTED_PRIVILEGES]
[2020-06-02T08:26:31,291][INFO ][c.a.o.s.a.i.AuditLogImpl ] [elasticsearch-cdm-15wxz44f-1] Configured Users to ignore: [kibanaserver]
[2020-06-02T08:26:31,291][INFO ][c.a.o.s.a.i.AuditLogImpl ] [elasticsearch-cdm-15wxz44f-1] Configured Users to ignore for read compliance events: [kibanaserver]
[2020-06-02T08:26:31,291][INFO ][c.a.o.s.a.i.AuditLogImpl ] [elasticsearch-cdm-15wxz44f-1] Configured Users to ignore for write compliance events: [kibanaserver]
[2020-06-02T08:26:31,296][ERROR][c.a.o.s.a.s.SinkProvider ] [elasticsearch-cdm-15wxz44f-1] Default endpoint could not be created, auditlog will not work properly.
[2020-06-02T08:26:31,297][WARN ][c.a.o.s.a.r.AuditMessageRouter] [elasticsearch-cdm-15wxz44f-1] No default storage available, audit log may not work properly. Please check configuration.
[2020-06-02T08:26:31,297][INFO ][c.a.o.s.a.i.AuditLogImpl ] [elasticsearch-cdm-15wxz44f-1] Message routing enabled: false
[2020-06-02T08:26:31,300][WARN ][c.a.o.s.c.ComplianceConfig] [elasticsearch-cdm-15wxz44f-1] If you plan to use field masking pls configure opendistro_security.compliance.salt to be a random string of 16 chars length identical on all nodes
[2020-06-02T08:26:31,300][INFO ][c.a.o.s.c.ComplianceConfig] [elasticsearch-cdm-15wxz44f-1] PII configuration [auditLogPattern=null,  auditLogIndex=null]: {}
[2020-06-02T08:26:31,534][DEBUG][o.e.a.ActionModule       ] [elasticsearch-cdm-15wxz44f-1] Using REST wrapper from plugin com.amazon.opendistroforelasticsearch.security.OpenDistroSecurityPlugin
[2020-06-02T08:26:31,623][INFO ][o.e.d.DiscoveryModule    ] [elasticsearch-cdm-15wxz44f-1] using discovery type [zen] and host providers [settings]
Registering Handler
[2020-06-02T08:26:32,233][INFO ][o.e.n.Node               ] [elasticsearch-cdm-15wxz44f-1] initialized
[2020-06-02T08:26:32,233][INFO ][o.e.n.Node               ] [elasticsearch-cdm-15wxz44f-1] starting ...
[2020-06-02T08:26:32,432][INFO ][o.e.t.TransportService   ] [elasticsearch-cdm-15wxz44f-1] publish_address {10.131.0.112:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}, {10.131.0.112:9300}
[2020-06-02T08:26:32,444][INFO ][o.e.b.BootstrapChecks    ] [elasticsearch-cdm-15wxz44f-1] bound or publishing to a non-loopback address, enforcing bootstrap checks
[2020-06-02T08:26:32,457][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [elasticsearch-cdm-15wxz44f-1] Check if .security index exists ...
[2020-06-02T08:26:32,467][DEBUG][o.e.a.a.i.e.i.TransportIndicesExistsAction] [elasticsearch-cdm-15wxz44f-1] no known master node, scheduling a retry
[2020-06-02T08:26:35,588][INFO ][o.e.c.s.MasterService    ] [elasticsearch-cdm-15wxz44f-1] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {elasticsearch-cdm-15wxz44f-1}{-txDehvUQpeeq7aXUDm_pg}{8YFCsfB8RE2aZJNdlY78mQ}{10.131.0.112}{10.131.0.112:9300}
[2020-06-02T08:26:35,594][INFO ][o.e.c.s.ClusterApplierService] [elasticsearch-cdm-15wxz44f-1] new_master {elasticsearch-cdm-15wxz44f-1}{-txDehvUQpeeq7aXUDm_pg}{8YFCsfB8RE2aZJNdlY78mQ}{10.131.0.112}{10.131.0.112:9300}, reason: apply cluster state (from master [master {elasticsearch-cdm-15wxz44f-1}{-txDehvUQpeeq7aXUDm_pg}{8YFCsfB8RE2aZJNdlY78mQ}{10.131.0.112}{10.131.0.112:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2020-06-02T08:26:35,619][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [elasticsearch-cdm-15wxz44f-1] .security index does not exist yet, use either securityadmin to initialize cluster or wait until cluster is fully formed and up
[2020-06-02T08:26:35,620][INFO ][o.e.g.GatewayService     ] [elasticsearch-cdm-15wxz44f-1] recovered [0] indices into cluster_state
[2020-06-02T08:26:35,624][INFO ][o.e.h.n.Netty4HttpServerTransport] [elasticsearch-cdm-15wxz44f-1] publish_address {10.131.0.112:9200}, bound_addresses {[::1]:9200}, {127.0.0.1:9200}, {10.131.0.112:9200}
[2020-06-02T08:26:35,625][INFO ][o.e.n.Node               ] [elasticsearch-cdm-15wxz44f-1] started
[2020-06-02T08:26:35,625][INFO ][c.a.o.s.OpenDistroSecurityPlugin] [elasticsearch-cdm-15wxz44f-1] 4 Open Distro Security modules loaded so far: [Module [type=DLSFLS, implementing class=com.amazon.opendistroforelasticsearch.security.configuration.OpenDistroSecurityFlsDlsIndexSearcherWrapper], Module [type=MULTITENANCY, implementing class=com.amazon.opendistroforelasticsearch.security.configuration.PrivilegesInterceptorImpl], Module [type=REST_MANAGEMENT_API, implementing class=com.amazon.opendistroforelasticsearch.security.dlic.rest.api.OpenDistroSecurityRestApiActions], Module [type=AUDITLOG, implementing class=com.amazon.opendistroforelasticsearch.security.auditlog.impl.AuditLogImpl]]
[2020-06-02 08:26:36,796][INFO ][container.run            ] Elasticsearch is ready and listening
/usr/share/elasticsearch/init ~
[2020-06-02 08:26:37,131][INFO ][container.run            ] Starting init script: 0001-jaeger
[2020-06-02 08:26:37,136][INFO ][container.run            ] Completed init script: 0001-jaeger
[2020-06-02 08:26:37,200][INFO ][container.run            ] Forcing the seeding of ACL documents
[2020-06-02 08:26:37,373][INFO ][container.run            ] Seeding the security ACL index.  Will wait up to 604800 seconds.
[2020-06-02 08:26:37,375][INFO ][container.run            ] Seeding the security ACL index.  Will wait up to 604800 seconds.
Open Distro Security Admin v6
Will connect to localhost:9300 ... done
Elasticsearch Version: 6.8.1
Open Distro Security Version: 0.10.0.3
Connected as CN=system.admin,OU=OpenShift,O=Logging
Contacting elasticsearch cluster 'elasticsearch' ...
Clustername: elasticsearch
Clusterstate: GREEN
Number of nodes: 1
Number of data nodes: 1
.security index does not exists, attempt to create it ... [2020-06-02T08:26:41,775][INFO ][o.e.c.m.MetaDataCreateIndexService] [elasticsearch-cdm-15wxz44f-1] [.security] creating index, cause [api], templates [], shards [1]/[0], mappings []
[2020-06-02T08:26:41,897][WARN ][c.a.o.s.c.ConfigurationLoader] [elasticsearch-cdm-15wxz44f-1] No data for config while retrieving configuration for [config, roles, rolesmapping, internalusers, actiongroups]  (index=.security)
[2020-06-02T08:26:41,898][WARN ][c.a.o.s.c.ConfigurationLoader] [elasticsearch-cdm-15wxz44f-1] No data for roles while retrieving configuration for [config, roles, rolesmapping, internalusers, actiongroups]  (index=.security)
[2020-06-02T08:26:41,898][WARN ][c.a.o.s.c.ConfigurationLoader] [elasticsearch-cdm-15wxz44f-1] No data for rolesmapping while retrieving configuration for [config, roles, rolesmapping, internalusers, actiongroups]  (index=.security)
[2020-06-02T08:26:41,898][WARN ][c.a.o.s.c.ConfigurationLoader] [elasticsearch-cdm-15wxz44f-1] No data for internalusers while retrieving configuration for [config, roles, rolesmapping, internalusers, actiongroups]  (index=.security)
[2020-06-02T08:26:41,898][WARN ][c.a.o.s.c.ConfigurationLoader] [elasticsearch-cdm-15wxz44f-1] No data for actiongroups while retrieving configuration for [config, roles, rolesmapping, internalusers, actiongroups]  (index=.security)
[2020-06-02T08:26:42,090][INFO ][o.e.c.r.a.AllocationService] [elasticsearch-cdm-15wxz44f-1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.security][0]] ...]).
done (0 replicas)
Populate config from /opt/app-root/src/sgconfig/
Will update 'security/config' with /opt/app-root/src/sgconfig/config.yml 
[2020-06-02T08:26:42,277][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-cdm-15wxz44f-1] [.security/W5sVZgbfRnKfWUM_EHxx1Q] create_mapping [security]
   SUCC: Configuration for 'config' created or updated
Will update 'security/roles' with /opt/app-root/src/sgconfig/roles.yml 
[2020-06-02T08:26:42,541][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-cdm-15wxz44f-1] [.security/W5sVZgbfRnKfWUM_EHxx1Q] update_mapping [security]
   SUCC: Configuration for 'roles' created or updated
Will update 'security/rolesmapping' with /opt/app-root/src/sgconfig/roles_mapping.yml 
[2020-06-02T08:26:42,594][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-cdm-15wxz44f-1] [.security/W5sVZgbfRnKfWUM_EHxx1Q] update_mapping [security]
   SUCC: Configuration for 'rolesmapping' created or updated
Will update 'security/internalusers' with /opt/app-root/src/sgconfig/internal_users.yml 
[2020-06-02T08:26:42,638][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-cdm-15wxz44f-1] [.security/W5sVZgbfRnKfWUM_EHxx1Q] update_mapping [security]
   SUCC: Configuration for 'internalusers' created or updated
Will update 'security/actiongroups' with /opt/app-root/src/sgconfig/action_groups.yml 
[2020-06-02T08:26:42,690][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-cdm-15wxz44f-1] [.security/W5sVZgbfRnKfWUM_EHxx1Q] update_mapping [security]
   SUCC: Configuration for 'actiongroups' created or updated
Done with success
[2020-06-02 08:26:47,052][INFO ][container.run            ] Seeded the security ACL index
[2020-06-02 08:26:47,057][INFO ][container.run            ] Adding index templates
[2020-06-02 08:26:47,184][INFO ][container.run            ] Create index template 'com.redhat.viaq-openshift-operations.template.json'
[2020-06-02T08:26:47,375][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [com.redhat.viaq-openshift-operations.template.json] for index patterns [infra-*, audit.infra-*]
{"acknowledged":true}[2020-06-02 08:26:47,494][INFO ][container.run            ] Create index template 'com.redhat.viaq-openshift-orphaned.template.json'
[2020-06-02T08:26:47,657][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [com.redhat.viaq-openshift-orphaned.template.json] for index patterns [.orphaned.*]
{"acknowledged":true}[2020-06-02 08:26:47,802][INFO ][container.run            ] Create index template 'com.redhat.viaq-openshift-project.template.json'
[2020-06-02T08:26:47,991][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [com.redhat.viaq-openshift-project.template.json] for index patterns [app-*]
{"acknowledged":true}[2020-06-02 08:26:48,138][INFO ][container.run            ] Create index template 'common.settings.kibana.template.json'
[2020-06-02T08:26:48,239][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [common.settings.kibana.template.json] for index patterns [.kibana*]
{"acknowledged":true}[2020-06-02 08:26:48,378][INFO ][container.run            ] Create index template 'common.settings.operations.orphaned.json'
[2020-06-02T08:26:48,494][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [common.settings.operations.orphaned.json] for index patterns [.orphaned*]
{"acknowledged":true}[2020-06-02 08:26:48,630][INFO ][container.run            ] Create index template 'common.settings.operations.template.json'
[2020-06-02T08:26:48,763][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [common.settings.operations.template.json] for index patterns [.operations*]
{"acknowledged":true}[2020-06-02 08:26:48,923][INFO ][container.run            ] Create index template 'common.settings.project.template.json'
[2020-06-02T08:26:49,040][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [common.settings.project.template.json] for index patterns [project*]
{"acknowledged":true}[2020-06-02 08:26:49,185][INFO ][container.run            ] Create index template 'jaeger-service.json'
[2020-06-02T08:26:49,330][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [jaeger-service.json] for index patterns [*jaeger-service-*]
{"acknowledged":true}[2020-06-02 08:26:49,472][INFO ][container.run            ] Create index template 'jaeger-span.json'
[2020-06-02T08:26:49,617][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [jaeger-span.json] for index patterns [*jaeger-span-*]
{"acknowledged":true}[2020-06-02 08:26:49,766][INFO ][container.run            ] Create index template 'org.ovirt.viaq-collectd.template.json'
[2020-06-02T08:26:49,983][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [org.ovirt.viaq-collectd.template.json] for index patterns [project.ovirt-metrics-*]
{"acknowledged":true}[2020-06-02 08:26:49,997][INFO ][container.run            ] Finished adding index templates
~
[2020-06-02T08:26:50,000][INFO ][c.a.o.s.c.IndexBaseConfigurationRepository] [elasticsearch-cdm-15wxz44f-1] Node 'elasticsearch-cdm-15wxz44f-1' initialized
[2020-06-02T08:26:59,019][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-app] for index patterns [app*]
[2020-06-02T08:26:59,217][INFO ][o.e.c.m.MetaDataCreateIndexService] [elasticsearch-cdm-15wxz44f-1] [app-000001] creating index, cause [api], templates [com.redhat.viaq-openshift-project.template.json, ocp-gen-app], shards [1]/[1], mappings [_doc]
[2020-06-02T08:26:59,376][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-infra] for index patterns [infra*]
[2020-06-02T08:26:59,491][INFO ][o.e.c.m.MetaDataCreateIndexService] [elasticsearch-cdm-15wxz44f-1] [infra-000001] creating index, cause [api], templates [com.redhat.viaq-openshift-operations.template.json, ocp-gen-infra], shards [1]/[1], mappings [_doc]
[2020-06-02T08:26:59,637][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-cdm-15wxz44f-1] [infra-000001/nRaPiA5zQRyhrguBuWeJnQ] update_mapping [_doc]
[2020-06-02T08:26:59,712][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-audit] for index patterns [audit*]
[2020-06-02T08:26:59,784][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-cdm-15wxz44f-1] [infra-000001/nRaPiA5zQRyhrguBuWeJnQ] update_mapping [_doc]
[2020-06-02T08:26:59,859][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-cdm-15wxz44f-1] [infra-000001/nRaPiA5zQRyhrguBuWeJnQ] update_mapping [_doc]
[2020-06-02T08:26:59,879][INFO ][o.e.c.m.MetaDataCreateIndexService] [elasticsearch-cdm-15wxz44f-1] [audit-000001] creating index, cause [api], templates [ocp-gen-audit], shards [1]/[1], mappings []
[2020-06-02T08:26:59,925][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-cdm-15wxz44f-1] [infra-000001/nRaPiA5zQRyhrguBuWeJnQ] update_mapping [_doc]
[2020-06-02T08:27:00,052][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-cdm-15wxz44f-1] [infra-000001/nRaPiA5zQRyhrguBuWeJnQ] update_mapping [_doc]
[2020-06-02T08:27:00,183][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-cdm-15wxz44f-1] [infra-000001/nRaPiA5zQRyhrguBuWeJnQ] update_mapping [_doc]
[2020-06-02T08:27:00,263][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-cdm-15wxz44f-1] [infra-000001/nRaPiA5zQRyhrguBuWeJnQ] update_mapping [_doc]
[2020-06-02T08:27:00,999][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-cdm-15wxz44f-1] [infra-000001/nRaPiA5zQRyhrguBuWeJnQ] update_mapping [_doc]
[2020-06-02T08:27:01,092][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [elasticsearch-cdm-15wxz44f-1] updating number_of_replicas to [0] for indices [app-000001]
[2020-06-02T08:27:01,177][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [elasticsearch-cdm-15wxz44f-1] updating number_of_replicas to [0] for indices [audit-000001]
[2020-06-02T08:27:01,260][INFO ][o.e.c.m.MetaDataUpdateSettingsService] [elasticsearch-cdm-15wxz44f-1] updating number_of_replicas to [0] for indices [infra-000001]
[2020-06-02T08:27:01,686][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-app] for index patterns [app*]
[2020-06-02T08:27:01,801][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-infra] for index patterns [infra*]
[2020-06-02T08:27:01,900][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-audit] for index patterns [audit*]
[2020-06-02T08:27:03,234][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-app] for index patterns [app*]
[2020-06-02T08:27:03,328][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-infra] for index patterns [infra*]
[2020-06-02T08:27:03,443][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-audit] for index patterns [audit*]
[2020-06-02T08:27:05,532][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-cdm-15wxz44f-1] [infra-000001/nRaPiA5zQRyhrguBuWeJnQ] update_mapping [_doc]
[2020-06-02T08:27:05,612][INFO ][o.e.c.m.MetaDataMappingService] [elasticsearch-cdm-15wxz44f-1] [infra-000001/nRaPiA5zQRyhrguBuWeJnQ] update_mapping [_doc]
[2020-06-02T08:27:31,155][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-app] for index patterns [app*]
[2020-06-02T08:27:31,265][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-infra] for index patterns [infra*]
[2020-06-02T08:27:31,346][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-audit] for index patterns [audit*]
[2020-06-02T08:28:02,562][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-app] for index patterns [app*]
[2020-06-02T08:28:02,696][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-infra] for index patterns [infra*]
[2020-06-02T08:28:02,814][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-audit] for index patterns [audit*]
[2020-06-02T08:28:34,394][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-app] for index patterns [app*]
[2020-06-02T08:28:34,512][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-infra] for index patterns [infra*]
[2020-06-02T08:28:34,624][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-audit] for index patterns [audit*]
[2020-06-02T08:29:05,796][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-app] for index patterns [app*]
[2020-06-02T08:29:05,884][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-infra] for index patterns [infra*]
[2020-06-02T08:29:06,001][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-audit] for index patterns [audit*]
[2020-06-02T08:29:37,101][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-app] for index patterns [app*]
[2020-06-02T08:29:37,182][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-infra] for index patterns [infra*]
[2020-06-02T08:29:37,302][INFO ][o.e.c.m.MetaDataIndexTemplateService] [elasticsearch-cdm-15wxz44f-1] adding template [ocp-gen-audit] for index patterns [audit*]


Version-Release number of selected component (if applicable):
$ oc get csv
NAME                                        DISPLAY                  VERSION              REPLACES   PHASE
clusterlogging.4.5.0-202005302117           Cluster Logging          4.5.0-202005302117              Succeeded
elasticsearch-operator.4.5.0-202005301517   Elasticsearch Operator   4.5.0-202005301517              Succeeded


How reproducible:
Sometimes

Steps to Reproduce:
1. subscribe EO and CLO
2. create clusterlogging instance with https://raw.githubusercontent.com/openshift/verification-tests/master/testdata/logging/clusterlogging/example_indexmanagement.yaml
3. check pods

Actual results:


Expected results:


Additional info:

Comment 1 Qiaoling Tang 2020-06-03 04:09:28 UTC
This issue happens frequently when running automated tests. Increasing the severity.

Comment 2 Jeff Cantrill 2020-06-03 19:15:23 UTC
Per a Slack comment from @Vimal, unable to reproduce. Moving back to ON_QA.

Comment 3 Qiaoling Tang 2020-06-04 01:15:59 UTC
I found a method to reproduce this issue 100% of the time:

1. subscribe EO and CLO
2. create clusterlogging instance
3. wait until all pods are running, then delete the clusterlogging instance
4. create clusterlogging instance

Repeating steps 3 and 4 triggers the issue. I found that the kibana deployment is eventually created, but it takes more than 30 minutes. This significantly affects QE's CI automation testing and causes a lot of failures.

Comment 4 Jeff Cantrill 2020-06-04 01:40:49 UTC
(In reply to Qiaoling Tang from comment #3)
> I found a method to 100% reproduce this issue:
> 
> 1. subscribe EO and CLO
> 2. create clusterlogging instance
> 3. wait until all pod are running, delete the clusterlogging instance
> 4. create clusterlogging instance

How long after deleting do you create the new clusterlogging instance? Are the pods still terminating from the previous instance?

> 
> Repeating step3 ~ step4, then the issue happens, and I found finally the
> kibana could be created, but it takes more than 30 minutes. This issue
> affects a lot in QE's CI automation testing, it makes a lot of failures.

I wonder if maybe there is an artifact in status that we are relying on that is not being cleared:

{"level":"error","ts":1591086377.2394996,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"kibana-controller","request":"openshift-logging/kibana","error":"Did not receive hashvalue for trusted CA value","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-

Comment 5 Qiaoling Tang 2020-06-04 02:04:48 UTC
(In reply to Jeff Cantrill from comment #4)
> (In reply to Qiaoling Tang from comment #3)
> > I found a method to 100% reproduce this issue:
> > 
> > 1. subscribe EO and CLO
> > 2. create clusterlogging instance
> > 3. wait until all pod are running, delete the clusterlogging instance
> > 4. create clusterlogging instance
> 
> How long after deleting do you create the new clusterlogging instance? Are
> the pods still terminating from the previous instance?
> 

I wait until all the EFK pods have terminated and disappeared, then I recreate the clusterlogging instance.

Besides, when this issue is hit, if I delete the EO and wait a while for it to be recreated, the kibana deployment is created by the new EO.

I didn't hit this issue when I ran automation with the images from quay.io/openshift/origin-elasticsearch-operator last week.

Comment 8 Qiaoling Tang 2020-06-08 02:42:29 UTC
Tested with quay.io/openshift/origin-elasticsearch-operator:4.6.0, imageID: quay.io/openshift/origin-elasticsearch-operator@sha256:240e7cdc527a1ac08f6b1853eac9a8e73ee7a19d2af18f1a37eef751fda97474; the issue no longer occurs.

Move to VERIFIED.

Comment 10 errata-xmlrpc 2020-10-27 16:03:47 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.6 GA Images), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:4196

