Bug 1811867 - In IPv6 bare metal deployment elasticsearch binds on IPv4 loopback address instead of the cluster IPv6 address
Summary: In IPv6 bare metal deployment elasticsearch binds on IPv4 loopback address in...
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 4.3.z
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: urgent
Target Milestone: ---
Target Release: 4.5.0
Assignee: Jeff Cantrill
QA Contact: Anping Li
URL:
Whiteboard:
Depends On:
Blocks: 1771572 1812912
 
Reported: 2020-03-10 01:59 UTC by Marius Cornea
Modified: 2020-07-13 17:19 UTC (History)
2 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Cause:
Consequence:
Fix: Use the downward API to set the binding and publish host for elasticsearch.
Result: ES is able to bind to the network interface.
Clone Of:
: 1812912 (view as bug list)
Environment:
Last Closed: 2020-07-13 17:19:21 UTC
Target Upstream Version:
Embargoed:


Attachments
elasticsearch.yml (1.89 KB, text/plain)
2020-03-10 02:01 UTC, Marius Cornea
no flags Details
elasticsearch-master-0.log (3.39 MB, text/plain)
2020-03-10 02:02 UTC, Marius Cornea
no flags Details
elasticsearch-master-1.log (99.20 KB, text/plain)
2020-03-10 02:03 UTC, Marius Cornea
no flags Details
elasticsearch-master-2.log (355.48 KB, text/plain)
2020-03-10 02:03 UTC, Marius Cornea
no flags Details


Links
System ID Private Priority Status Summary Last Updated
Github openshift elasticsearch-operator pull 270 0 None closed Bug 1811867: Fix binding to ipv6 2021-02-05 15:26:07 UTC

Description Marius Cornea 2020-03-10 01:59:57 UTC
Description of problem:

In an IPv6 bare metal deployment, Elasticsearch binds to the IPv4 loopback address instead of the cluster IPv6 address, and the Elasticsearch cluster fails to start:

[kni@provisionhost-0 ~]$ oc get pods | grep elasticsearch
elasticsearch-cdm-crlpxg50-1-5979dc6bd9-x9ffj   1/2     Running   0          49m
elasticsearch-cdm-crlpxg50-2-69b54b6d56-jbg74   1/2     Running   0          48m
elasticsearch-cdm-crlpxg50-3-6d9db699b8-p49dx   1/2     Running   0          47m


oc logs elasticsearch-cdm-crlpxg50-1-5979dc6bd9-x9ffj -c elasticsearch
[...]
[2020-03-10 01:02:42,144][ERROR][container.run            ] Timed out waiting for Elasticsearch to be ready
HTTP/1.1 503 Service Unavailable
content-type: application/json; charset=UTF-8
content-length: 331


Looking at elasticsearch.log for all 3 nodes, we can see the publish_address is set to the IPv4 loopback address instead of the pods' IPv6 addresses:

[root@sealusa2 ~]# grep publish_address /srv/nfs/pv00/elasticsearch/logs/elasticsearch.log 
[2020-03-10T00:16:19,586][INFO ][o.e.t.TransportService   ] [elasticsearch-cdm-k4bowbv0-1] publish_address {127.0.0.1:9300}, bound_addresses {[::]:9300}
[2020-03-10T00:16:49,658][INFO ][c.f.s.h.SearchGuardHttpServerTransport] [elasticsearch-cdm-k4bowbv0-1] publish_address {127.0.0.1:9200}, bound_addresses {[::]:9200}
Binary file /srv/nfs/pv00/elasticsearch/logs/elasticsearch.log matches
[root@sealusa2 ~]# grep publish_address /srv/nfs/pv01/elasticsearch/logs/elasticsearch.log 
[2020-03-10T00:57:50,025][INFO ][o.e.t.TransportService   ] [elasticsearch-cdm-crlpxg50-2] publish_address {127.0.0.1:9300}, bound_addresses {[::]:9300}
[2020-03-10T00:58:20,096][INFO ][c.f.s.h.SearchGuardHttpServerTransport] [elasticsearch-cdm-crlpxg50-2] publish_address {127.0.0.1:9200}, bound_addresses {[::]:9200}
[root@sealusa2 ~]# grep publish_address /srv/nfs/pv02/elasticsearch/logs/elasticsearch.log 
[2020-03-10T00:58:51,012][INFO ][o.e.t.TransportService   ] [elasticsearch-cdm-crlpxg50-3] publish_address {127.0.0.1:9300}, bound_addresses {[::]:9300}
[2020-03-10T00:59:21,087][INFO ][c.f.s.h.SearchGuardHttpServerTransport] [elasticsearch-cdm-crlpxg50-3] publish_address {127.0.0.1:9200}, bound_addresses {[::]:9200}


Also, checking /etc/elasticsearch/elasticsearch.yml inside the elasticsearch container, we can see:

network:
  host: 0.0.0.0
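
The fix referenced in the Doc Text and in PR #270 uses the downward API to set the bind and publish host. A minimal sketch of that approach (the exact manifest generated by the operator may differ; POD_IP is an assumed variable name):

```yaml
# Pod spec fragment: expose the pod's own IP (an IPv6 address on an
# IPv6 cluster) to the container via the downward API.
env:
- name: POD_IP
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
---
# elasticsearch.yml fragment: Elasticsearch substitutes ${ENV_VAR} in
# settings, so the node binds and publishes on the pod IP instead of
# 0.0.0.0, and publish_address is no longer the IPv4 loopback.
network:
  bind_host: ${POD_IP}
  publish_host: ${POD_IP}
```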


Version-Release number of selected component (if applicable):
4.3.0-0.nightly-2020-03-09-172027

How reproducible:
100%

Steps to Reproduce:
1. Deploy bare metal IPI with IPv6 control plane: 3 x master nodes + 2 x worker nodes
2. Follow deployment procedure @ https://access.redhat.com/documentation/en-us/openshift_container_platform/4.3/html-single/logging/index#cluster-logging-deploying

Actual results:
The elasticsearch cluster doesn't start because the pods advertise the IPv4 loopback address instead of their cluster IPv6 addresses.

Expected results:
Elasticsearch binds to the pod's IPv6 address and the cluster starts.

Additional info:
Attaching elasticsearch.log and /etc/elasticsearch/elasticsearch.yml from the elasticsearch container.

Comment 1 Marius Cornea 2020-03-10 02:01:09 UTC
Created attachment 1668798 [details]
elasticsearch.yml

Comment 2 Marius Cornea 2020-03-10 02:02:35 UTC
Created attachment 1668799 [details]
elasticsearch-master-0.log

Comment 3 Marius Cornea 2020-03-10 02:03:00 UTC
Created attachment 1668800 [details]
elasticsearch-master-1.log

Comment 4 Marius Cornea 2020-03-10 02:03:24 UTC
Created attachment 1668801 [details]
elasticsearch-master-2.log

Comment 5 Marius Cornea 2020-03-10 02:16:29 UTC
Images were mirrored by following the restricted OLM procedure @ https://access.redhat.com/documentation/en-us/openshift_container_platform/4.3/html-single/operators/index#olm-restricted-networks-operatorhub_olm-restricted-networks

grep elasticsearch olm-mirror-dir/olm-manifests/imageContentSourcePolicy.yaml 
    - registry.ocp-edge-cluster.qe.lab.redhat.com:5000/openshift/openshift4/ose-logging-elasticsearch5
    source: registry.redhat.io/openshift4/ose-logging-elasticsearch5
    - registry.ocp-edge-cluster.qe.lab.redhat.com:5000/openshift/openshift4/ose-elasticsearch-operator
    source: registry.redhat.io/openshift4/ose-elasticsearch-operator
    - registry.ocp-edge-cluster.qe.lab.redhat.com:5000/openshift/openshift4/ose-logging-elasticsearch5
    source: registry.redhat.io/openshift4/ose-logging-elasticsearch5
    - registry.ocp-edge-cluster.qe.lab.redhat.com:5000/openshift/openshift4/ose-elasticsearch-operator
    source: registry.redhat.io/openshift4/ose-elasticsearch-operator
    - registry.ocp-edge-cluster.qe.lab.redhat.com:5000/openshift/openshift4/ose-elasticsearch-operator
    source: registry.redhat.io/openshift4/ose-elasticsearch-operator
    - registry.ocp-edge-cluster.qe.lab.redhat.com:5000/openshift/openshift4/ose-elasticsearch-operator
    source: registry.redhat.io/openshift4/ose-elasticsearch-operator
    - registry.ocp-edge-cluster.qe.lab.redhat.com:5000/openshift/openshift4/ose-logging-elasticsearch5
    source: registry.redhat.io/openshift4/ose-logging-elasticsearch5
    - registry.ocp-edge-cluster.qe.lab.redhat.com:5000/openshift/openshift4/ose-logging-elasticsearch5
    source: registry.redhat.io/openshift4/ose-logging-elasticsearch5


oc status
In project openshift-logging on server https://api.ocp-edge-cluster.qe.lab.redhat.com:6443

svc/elasticsearch-metrics - fd02::187b:60000 -> metrics
svc/elasticsearch - fd02::77c0:9200 -> restapi
  deployment/elasticsearch-cdm-crlpxg50-1 deploys registry.redhat.io/openshift4/ose-logging-elasticsearch5@sha256:c360ab6acbac3d10989a4e8b0054e277e2584c737f8371c48a054d314fd1e94b,registry.redhat.io/openshift4/ose-oauth-proxy@sha256:e9ed3a91872ad17ba511cb8e1bd91c764298e59492b2ed89012b20ea8071445b
    deployment #1 running for about an hour - 0/1 pods
  deployment/elasticsearch-cdm-crlpxg50-3 deploys registry.redhat.io/openshift4/ose-logging-elasticsearch5@sha256:c360ab6acbac3d10989a4e8b0054e277e2584c737f8371c48a054d314fd1e94b,registry.redhat.io/openshift4/ose-oauth-proxy@sha256:e9ed3a91872ad17ba511cb8e1bd91c764298e59492b2ed89012b20ea8071445b
    deployment #1 running for about an hour - 0/1 pods
  deployment/elasticsearch-cdm-crlpxg50-2 deploys registry.redhat.io/openshift4/ose-logging-elasticsearch5@sha256:c360ab6acbac3d10989a4e8b0054e277e2584c737f8371c48a054d314fd1e94b,registry.redhat.io/openshift4/ose-oauth-proxy@sha256:e9ed3a91872ad17ba511cb8e1bd91c764298e59492b2ed89012b20ea8071445b
    deployment #1 running for about an hour - 0/1 pods

svc/elasticsearch-cluster - fd02::bfbb:9300 -> cluster
  deployment/elasticsearch-cdm-crlpxg50-1 deploys registry.redhat.io/openshift4/ose-logging-elasticsearch5@sha256:c360ab6acbac3d10989a4e8b0054e277e2584c737f8371c48a054d314fd1e94b,registry.redhat.io/openshift4/ose-oauth-proxy@sha256:e9ed3a91872ad17ba511cb8e1bd91c764298e59492b2ed89012b20ea8071445b
    deployment #1 running for about an hour - 0/1 pods
  deployment/elasticsearch-cdm-crlpxg50-3 deploys registry.redhat.io/openshift4/ose-logging-elasticsearch5@sha256:c360ab6acbac3d10989a4e8b0054e277e2584c737f8371c48a054d314fd1e94b,registry.redhat.io/openshift4/ose-oauth-proxy@sha256:e9ed3a91872ad17ba511cb8e1bd91c764298e59492b2ed89012b20ea8071445b
    deployment #1 running for about an hour - 0/1 pods
  deployment/elasticsearch-cdm-crlpxg50-2 deploys registry.redhat.io/openshift4/ose-logging-elasticsearch5@sha256:c360ab6acbac3d10989a4e8b0054e277e2584c737f8371c48a054d314fd1e94b,registry.redhat.io/openshift4/ose-oauth-proxy@sha256:e9ed3a91872ad17ba511cb8e1bd91c764298e59492b2ed89012b20ea8071445b
    deployment #1 running for about an hour - 0/1 pods

svc/fluentd - fd02::5115:24231 -> metrics
  daemonset/fluentd manages registry.redhat.io/openshift4/ose-logging-fluentd@sha256:cbd9e582cd659022e1e9e3008475f393f9bd1c54fa246087aeb8eb6f9b0c9046
    generation #2 running for about an hour - 5 pods

https://kibana-openshift-logging.apps.ocp-edge-cluster.qe.lab.redhat.com (reencrypt) (svc/kibana)
  deployment/kibana deploys registry.redhat.io/openshift4/ose-logging-kibana5@sha256:8d2bf2c67be884cb8adcadc1d5824fe784af81d2411e74c6490199f8483110af,registry.redhat.io/openshift4/ose-oauth-proxy@sha256:e9ed3a91872ad17ba511cb8e1bd91c764298e59492b2ed89012b20ea8071445b
    deployment #3 running for about an hour - 1 pod
    deployment #2 deployed about an hour ago
    deployment #1 deployed about an hour ago

deployment/cluster-logging-operator deploys registry.redhat.io/openshift4/ose-cluster-logging-operator@sha256:cc5ca9dddeff2478fe2dd81d81758a61edf3099ba428d4fa5c21c51d460ca535
  deployment #1 running for 4 hours - 1 pod


6 warnings, 6 infos identified, use 'oc status --suggest' to see details.

Comment 6 Marius Cornea 2020-03-10 02:48:55 UTC
oc get  csv/elasticsearch-operator.4.3.1-202002032140 
NAME                                        DISPLAY                  VERSION              REPLACES   PHASE
elasticsearch-operator.4.3.1-202002032140   Elasticsearch Operator   4.3.1-202002032140              Succeeded
[kni@provisionhost-0 ~]$ oc get csv/elasticsearch-operator.4.3.1-202002032140 -o yaml
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  annotations:
    alm-examples: |-
      [
          {
              "apiVersion": "logging.openshift.io/v1",
              "kind": "Elasticsearch",
              "metadata": {
                "name": "elasticsearch"
              },
              "spec": {
                "managementState": "Managed",
                "nodeSpec": {
                  "image": "registry.redhat.io/openshift4/ose-logging-elasticsearch5@sha256:c360ab6acbac3d10989a4e8b0054e277e2584c737f8371c48a054d314fd1e94b",
                  "resources": {
                    "limits": {
                      "memory": "1Gi"
                    },
                    "requests": {
                      "memory": "512Mi"
                    }
                  }
                },
                "redundancyPolicy": "SingleRedundancy",
                "nodes": [
                  {
                      "nodeCount": 1,
                      "roles": ["client","data","master"]
                  }
                ]
              }
          }
      ]
    capabilities: Seamless Upgrades
    categories: OpenShift Optional, Logging & Tracing
    certified: "false"
    containerImage: registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:b604641f95c9762ff9b1c9d550cec908d9caab3cc333120e7cf60a55539b8149
    createdAt: "2019-02-20T08:00:00Z"
    description: |-
      The Elasticsearch Operator for OKD provides a means for configuring and managing an Elasticsearch cluster for tracing and cluster logging.
      ## Prerequisites and Requirements
      ### Elasticsearch Operator Namespace
      The Elasticsearch Operator must be deployed to the global operator group namespace
      ### Memory Considerations
      Elasticsearch is a memory intensive application.  The initial
      set of OKD nodes may not be large enough to support the Elasticsearch cluster.  Additional OKD nodes must be added
      to the OKD cluster if you desire to run with the recommended (or better) memory. Each ES node can operate with a
      lower memory setting though this is not recommended for production deployments.
    olm.operatorGroup: openshift-operators-redhat
    olm.operatorNamespace: openshift-operators-redhat
    olm.skipRange: '>=4.2.0 <4.3.0'
    support: AOS Cluster Logging, Jaeger
  creationTimestamp: "2020-03-09T22:45:13Z"
  generation: 1
  labels:
    olm.api.e43efcaa45c9f8d0: provided
    olm.copiedFrom: openshift-operators-redhat
  name: elasticsearch-operator.4.3.1-202002032140
  namespace: openshift-logging
  resourceVersion: "124471"
  selfLink: /apis/operators.coreos.com/v1alpha1/namespaces/openshift-logging/clusterserviceversions/elasticsearch-operator.4.3.1-202002032140
  uid: e5829c98-1994-4e15-ad00-f4e94d42c076
spec:
  apiservicedefinitions: {}
  customresourcedefinitions:
    owned:
    - description: An Elasticsearch cluster instance
      displayName: Elasticsearch
      kind: Elasticsearch
      name: elasticsearches.logging.openshift.io
      resources:
      - kind: Deployment
        name: ""
        version: v1
      - kind: StatefulSet
        name: ""
        version: v1
      - kind: ReplicaSet
        name: ""
        version: v1
      - kind: Pod
        name: ""
        version: v1
      - kind: ConfigMap
        name: ""
        version: v1
      - kind: Service
        name: ""
        version: v1
      - kind: Route
        name: ""
        version: v1
      specDescriptors:
      - description: Limits describes the minimum/maximum amount of compute resources
          required/allowed
        displayName: Resource Requirements
        path: nodeSpec.resources
        x-descriptors:
        - urn:alm:descriptor:com.tectonic.ui:resourceRequirements
      statusDescriptors:
      - description: The current Status of the Elasticsearch Cluster
        displayName: Status
        path: cluster.status
        x-descriptors:
        - urn:alm:descriptor:io.kubernetes.phase
      - description: The number of Active Primary Shards for the Elasticsearch Cluster
        displayName: Active Primary Shards
        path: cluster.activePrimShards
        x-descriptors:
        - urn:alm:descriptor:text
      - description: The number of Active Shards for the Elasticsearch Cluster
        displayName: Active Shards
        path: cluster.activeShards
        x-descriptors:
        - urn:alm:descriptor:text
      - description: The number of Initializing Shards for the Elasticsearch Cluster
        displayName: Initializing Shards
        path: cluster.initializingShards
        x-descriptors:
        - urn:alm:descriptor:text
      - description: The number of Data Nodes for the Elasticsearch Cluster
        displayName: Number of Data Nodes
        path: cluster.numDataNodes
        x-descriptors:
        - urn:alm:descriptor:text
      - description: The number of Nodes for the Elasticsearch Cluster
        displayName: Number of Nodes
        path: cluster.numNodes
        x-descriptors:
        - urn:alm:descriptor:text
      - description: The number of Relocating Shards for the Elasticsearch Cluster
        displayName: Relocating Shards
        path: cluster.relocatingShards
        x-descriptors:
        - urn:alm:descriptor:text
      - description: The number of Unassigned Shards for the Elasticsearch Cluster
        displayName: Unassigned Shards
        path: cluster.unassignedShards
        x-descriptors:
        - urn:alm:descriptor:text
      - description: The status for each of the Elasticsearch pods with the Client
          role
        displayName: Elasticsearch Client Status
        path: pods.client
        x-descriptors:
        - urn:alm:descriptor:com.tectonic.ui:podStatuses
      - description: The status for each of the Elasticsearch pods with the Data role
        displayName: Elasticsearch Data Status
        path: pods.data
        x-descriptors:
        - urn:alm:descriptor:com.tectonic.ui:podStatuses
      - description: The status for each of the Elasticsearch pods with the Master
          role
        displayName: Elasticsearch Master Status
        path: pods.master
        x-descriptors:
        - urn:alm:descriptor:com.tectonic.ui:podStatuses
      version: v1
  description: |
    The Elasticsearch Operator for OKD provides a means for configuring and managing an Elasticsearch cluster for use in tracing and cluster logging.
    This operator only supports OKD Cluster Logging and Jaeger.  It is tightly coupled to each and is not currently capable of
    being used as a general purpose manager of Elasticsearch clusters running on OKD.

    It is recommended this operator be deployed to the **openshift-operators** namespace to properly support the Cluster Logging and Jaeger use cases.

    Once installed, the operator provides the following features:
    * **Create/Destroy**: Deploy an Elasticsearch cluster to the same namespace in which the Elasticsearch custom resource is created.
  displayName: Elasticsearch Operator
  install:
    spec:
      clusterPermissions:
      - rules:
        - apiGroups:
          - logging.openshift.io
          resources:
          - '*'
          verbs:
          - '*'
        - apiGroups:
          - ""
          resources:
          - pods
          - pods/exec
          - services
          - endpoints
          - persistentvolumeclaims
          - events
          - configmaps
          - secrets
          - serviceaccounts
          verbs:
          - '*'
        - apiGroups:
          - apps
          resources:
          - deployments
          - daemonsets
          - replicasets
          - statefulsets
          verbs:
          - '*'
        - apiGroups:
          - monitoring.coreos.com
          resources:
          - prometheusrules
          - servicemonitors
          verbs:
          - '*'
        - apiGroups:
          - rbac.authorization.k8s.io
          resources:
          - clusterroles
          - clusterrolebindings
          verbs:
          - '*'
        - nonResourceURLs:
          - /metrics
          verbs:
          - get
        - apiGroups:
          - authentication.k8s.io
          resources:
          - tokenreviews
          - subjectaccessreviews
          verbs:
          - create
        - apiGroups:
          - authorization.k8s.io
          resources:
          - subjectaccessreviews
          verbs:
          - create
        serviceAccountName: elasticsearch-operator
      deployments:
      - name: elasticsearch-operator
        spec:
          replicas: 1
          selector:
            matchLabels:
              name: elasticsearch-operator
          strategy: {}
          template:
            metadata:
              creationTimestamp: null
              labels:
                name: elasticsearch-operator
            spec:
              containers:
              - command:
                - elasticsearch-operator
                env:
                - name: WATCH_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.annotations['olm.targetNamespaces']
                - name: POD_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.name
                - name: OPERATOR_NAME
                  value: elasticsearch-operator
                - name: PROXY_IMAGE
                  value: registry.redhat.io/openshift4/ose-oauth-proxy@sha256:e9ed3a91872ad17ba511cb8e1bd91c764298e59492b2ed89012b20ea8071445b
                - name: ELASTICSEARCH_IMAGE
                  value: registry.redhat.io/openshift4/ose-logging-elasticsearch5@sha256:c360ab6acbac3d10989a4e8b0054e277e2584c737f8371c48a054d314fd1e94b
                image: registry.redhat.io/openshift4/ose-elasticsearch-operator@sha256:b604641f95c9762ff9b1c9d550cec908d9caab3cc333120e7cf60a55539b8149
                imagePullPolicy: IfNotPresent
                name: elasticsearch-operator
                ports:
                - containerPort: 60000
                  name: metrics
                resources: {}
              serviceAccountName: elasticsearch-operator
    strategy: deployment
  installModes:
  - supported: true
    type: OwnNamespace
  - supported: false
    type: SingleNamespace
  - supported: false
    type: MultiNamespace
  - supported: true
    type: AllNamespaces
  keywords:
  - elasticsearch
  - jaeger
  links:
  - name: Elastic
    url: https://www.elastic.co/
  - name: Elasticsearch Operator
    url: https://github.com/openshift/elasticsearch-operator
  maintainers:
  - email: aos-logging
    name: Red Hat, AOS Logging
  minKubeVersion: 1.14.0
  provider:
    name: Red Hat, Inc
  version: 4.3.1-202002032140
status:
  conditions:
  - lastTransitionTime: "2020-03-09T22:44:58Z"
    lastUpdateTime: "2020-03-09T22:44:58Z"
    message: requirements not yet checked
    phase: Pending
    reason: RequirementsUnknown
  - lastTransitionTime: "2020-03-09T22:44:58Z"
    lastUpdateTime: "2020-03-09T22:44:58Z"
    message: one or more requirements couldn't be found
    phase: Pending
    reason: RequirementsNotMet
  - lastTransitionTime: "2020-03-09T22:44:59Z"
    lastUpdateTime: "2020-03-09T22:44:59Z"
    message: all requirements found, attempting install
    phase: InstallReady
    reason: AllRequirementsMet
  - lastTransitionTime: "2020-03-09T22:44:59Z"
    lastUpdateTime: "2020-03-09T22:44:59Z"
    message: waiting for install components to report healthy
    phase: Installing
    reason: InstallSucceeded
  - lastTransitionTime: "2020-03-09T22:44:59Z"
    lastUpdateTime: "2020-03-09T22:45:00Z"
    message: |
      installing: Waiting: waiting for deployment elasticsearch-operator to become ready: Waiting for rollout to finish: 0 of 1 updated replicas are available...
    phase: Installing
    reason: InstallWaiting
  - lastTransitionTime: "2020-03-09T22:45:04Z"
    lastUpdateTime: "2020-03-09T22:45:04Z"
    message: install strategy completed with no errors
    phase: Succeeded
    reason: InstallSucceeded
  - lastTransitionTime: "2020-03-09T23:04:10Z"
    lastUpdateTime: "2020-03-09T23:04:10Z"
    message: |
      installing: Waiting: waiting for deployment elasticsearch-operator to become ready: Waiting for rollout to finish: 0 of 1 updated replicas are available...
    phase: Failed
    reason: ComponentUnhealthy
  - lastTransitionTime: "2020-03-09T23:04:10Z"
    lastUpdateTime: "2020-03-09T23:04:10Z"
    message: |
      installing: Waiting: waiting for deployment elasticsearch-operator to become ready: Waiting for rollout to finish: 0 of 1 updated replicas are available...
    phase: Pending
    reason: NeedsReinstall
  - lastTransitionTime: "2020-03-09T23:04:10Z"
    lastUpdateTime: "2020-03-09T23:04:10Z"
    message: all requirements found, attempting install
    phase: InstallReady
    reason: AllRequirementsMet
  - lastTransitionTime: "2020-03-09T23:04:10Z"
    lastUpdateTime: "2020-03-09T23:04:10Z"
    message: waiting for install components to report healthy
    phase: Installing
    reason: InstallSucceeded
  - lastTransitionTime: "2020-03-09T23:04:10Z"
    lastUpdateTime: "2020-03-09T23:04:10Z"
    message: |
      installing: Waiting: waiting for deployment elasticsearch-operator to become ready: Waiting for rollout to finish: 0 of 1 updated replicas are available...
    phase: Installing
    reason: InstallWaiting
  - lastTransitionTime: "2020-03-09T23:04:22Z"
    lastUpdateTime: "2020-03-09T23:04:22Z"
    message: install strategy completed with no errors
    phase: Succeeded
    reason: InstallSucceeded
  lastTransitionTime: "2020-03-09T23:04:22Z"
  lastUpdateTime: "2020-03-10T02:47:51Z"
  message: The operator is running in openshift-operators-redhat but is managing this
    namespace
  phase: Succeeded
  reason: Copied
  requirementStatus:
  - group: operators.coreos.com
    kind: ClusterServiceVersion
    message: CSV minKubeVersion (1.14.0) less than server version (v1.16.2)
    name: elasticsearch-operator.4.3.1-202002032140
    status: Present
    version: v1alpha1
  - group: apiextensions.k8s.io
    kind: CustomResourceDefinition
    message: CRD is present and Established condition is true
    name: elasticsearches.logging.openshift.io
    status: Present
    uuid: 06a4bf95-2b6e-4c8b-aaa9-f0ab739e9e46
    version: v1beta1
  - dependents:
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: cluster rule:{"verbs":["*"],"apiGroups":["logging.openshift.io"],"resources":["*"]}
      status: Satisfied
      version: v1beta1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: cluster rule:{"verbs":["*"],"apiGroups":[""],"resources":["pods","pods/exec","services","endpoints","persistentvolumeclaims","events","configmaps","secrets","serviceaccounts"]}
      status: Satisfied
      version: v1beta1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: cluster rule:{"verbs":["*"],"apiGroups":["apps"],"resources":["deployments","daemonsets","replicasets","statefulsets"]}
      status: Satisfied
      version: v1beta1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: cluster rule:{"verbs":["*"],"apiGroups":["monitoring.coreos.com"],"resources":["prometheusrules","servicemonitors"]}
      status: Satisfied
      version: v1beta1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: cluster rule:{"verbs":["*"],"apiGroups":["rbac.authorization.k8s.io"],"resources":["clusterroles","clusterrolebindings"]}
      status: Satisfied
      version: v1beta1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: cluster rule:{"verbs":["get"],"nonResourceURLs":["/metrics"]}
      status: Satisfied
      version: v1beta1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: cluster rule:{"verbs":["create"],"apiGroups":["authentication.k8s.io"],"resources":["tokenreviews","subjectaccessreviews"]}
      status: Satisfied
      version: v1beta1
    - group: rbac.authorization.k8s.io
      kind: PolicyRule
      message: cluster rule:{"verbs":["create"],"apiGroups":["authorization.k8s.io"],"resources":["subjectaccessreviews"]}
      status: Satisfied
      version: v1beta1
    group: ""
    kind: ServiceAccount
    message: ""
    name: elasticsearch-operator
    status: Present
    version: v1

Comment 7 Jeff Cantrill 2020-03-16 18:47:16 UTC
Can we try this:

1. Deploy cluster logging
2. Scale down CVO
3. Scale down OLM
4. Set the ClusterLogging instance to "Unmanaged"
5. Set the Elasticsearch instance to "Unmanaged"
6. Edit the elasticsearch configmap and set network.host to "en0" instead of "0.0.0.0"
7. Delete the elasticsearch pod(s) to force a restart
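
The steps above could be run roughly as follows (a sketch only: the CVO/OLM deployment names and the elasticsearch pod label selector are assumptions, not confirmed in this bug):

```
# Stop the operators from reverting manual changes.
oc -n openshift-cluster-version scale deployment/cluster-version-operator --replicas=0
oc -n openshift-operator-lifecycle-manager scale deployment/olm-operator --replicas=0

# Mark the logging stack unmanaged.
oc -n openshift-logging patch clusterlogging/instance --type merge \
  -p '{"spec":{"managementState":"Unmanaged"}}'
oc -n openshift-logging patch elasticsearch/elasticsearch --type merge \
  -p '{"spec":{"managementState":"Unmanaged"}}'

# Change network.host in the elasticsearch configmap, then restart the pods.
oc -n openshift-logging edit configmap/elasticsearch   # set network.host: "en0"
oc -n openshift-logging delete pods -l component=elasticsearch
```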

Comment 10 Anping Li 2020-03-26 15:45:00 UTC
To unblock 4.4, this was verified using internal builds.

quay.io/openshift/origin-cluster-logging-operator:latest
quay.io/openshift/origin-elasticsearch-operator:latest
quay.io/openshift/origin-elasticsearch-proxy:latest
registry.svc.ci.openshift.org/origin/4.5:logging-curator5
registry.svc.ci.openshift.org/origin/4.5:logging-elasticsearch6
registry.svc.ci.openshift.org/origin/4.5:logging-fluentd
registry.svc.ci.openshift.org/origin/4.5:logging-kibana6
registry.svc.ci.openshift.org/origin/4.5:oauth-proxy

Comment 12 errata-xmlrpc 2020-07-13 17:19:21 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

