Bug 1809511 - The new index isn't correct after upgrading logging from 4.4 to 4.5
Summary: The new index isn't correct after upgrading logging from 4.4 to 4.5
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.5.0
Assignee: Jeff Cantrill
QA Contact: Qiaoling Tang
URL:
Whiteboard:
Depends On: 1827032
Blocks: 1882495
 
Reported: 2020-03-03 10:04 UTC by Qiaoling Tang
Modified: 2023-12-15 17:26 UTC
CC List: 5 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1882495
Environment:
Last Closed: 2020-08-04 18:03:25 UTC
Target Upstream Version:
Embargoed:




Links
GitHub openshift/elasticsearch-operator pull 252 (closed): Bug 1809511: block creation of indices with the write suffix for index management (last updated 2020-12-23 13:12:12 UTC)
GitHub openshift/elasticsearch-operator pull 274 (closed): Revert "Bug 1809511: Block creation of indices with the write suffix … (last updated 2020-12-23 13:12:46 UTC)
GitHub openshift/elasticsearch-operator pull 286 (closed): Bug 1809511: block autocreation of indices with 'write' suffix (last updated 2020-12-23 13:12:14 UTC)
Red Hat Product Errata RHBA-2020:2409 (last updated 2020-08-04 18:03:26 UTC)

Comment 7 Qiaoling Tang 2020-04-23 05:40:47 UTC
This bug is blocked by https://bugzilla.redhat.com/show_bug.cgi?id=1827032

Comment 8 Qiaoling Tang 2020-05-13 03:36:48 UTC
Tested with images from 4.5.0-0.ci-2020-05-12-205117

$ oc exec elasticsearch-cdm-m2j2lxw9-1-596649ffc8-nrz5k -- indices
Defaulting container name to elasticsearch.
Use 'oc describe pod/elasticsearch-cdm-m2j2lxw9-1-596649ffc8-nrz5k -n openshift-logging' to see all of the containers in this pod.
Wed May 13 02:07:22 UTC 2020
health status index                                                          uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   infra-000001                                                   bUZeHMtHShuRwemjAk067Q   3   1     322228            0        818            397
green  open   audit-000001                                                   u87iPiYlSPWfRrNnoi0CwA   3   1          0            0          0              0
green  open   project.qitang.5682515f-a751-4bb1-a98c-a1ca0b381376.2020.05.13 LmyeqjBQTuOVO6-HEr03YA   3   1       2741            0          3              1
green  open   .operations.2020.05.13                                         Ob9V0iMiTQ6uTnuu5qOSGg   3   1    2124104            0       4594           2288
green  open   .kibana.a5f01f00ae88a880fd91ed1dbace3dff08f5c0b2               nJxuXyS4RiOFmzWmSvzI6Q   1   1          0            0          0              0
green  open   .security                                                      cWRUcXT-TcmIzaPFOiowsw   1   1          5            0          0              0
green  open   .kibana.647a750f1787408bf50088234ec0edd5a6a9b2ac               CvSIX5rNRVSCMFEIH9hyeQ   1   1          2            0          0              0
green  open   .searchguard                                                   UC4ukmZvTc61M5HNPLiqyg   1   1          5           52          0              0
green  open   app-000001                                                     iJwFUbNATx-R73r2YFITTg   3   1        233            0          1              0
green  open   .kibana                                                        1LLeP9ZRROKU0WyFaGfQfw   1   1          1            0          0              0

$ oc exec elasticsearch-cdm-m2j2lxw9-1-596649ffc8-nrz5k -- es_util --query=*/_alias |jq
Defaulting container name to elasticsearch.
Use 'oc describe pod/elasticsearch-cdm-m2j2lxw9-1-596649ffc8-nrz5k -n openshift-logging' to see all of the containers in this pod.
{
  "project.qitang.5682515f-a751-4bb1-a98c-a1ca0b381376.2020.05.13": {
    "aliases": {
      ".all": {},
      "app": {}
    }
  },
  "app-000001": {
    "aliases": {
      "app": {},
      "app-write": {},
      "logs-app": {}
    }
  },
  ".searchguard": {
    "aliases": {}
  },
  ".security": {
    "aliases": {}
  },
  ".kibana.a5f01f00ae88a880fd91ed1dbace3dff08f5c0b2": {
    "aliases": {}
  },
  ".operations.2020.05.13": {
    "aliases": {
      ".all": {},
      "infra": {}
    }
  },
  "audit-000001": {
    "aliases": {
      "audit": {},
      "audit-write": {},
      "logs-audit": {}
    }
  },
  "infra-000001": {
    "aliases": {
      "infra": {},
      "infra-write": {},
      "logs-infra": {}
    }
  },
  ".kibana": {
    "aliases": {}
  },
  ".kibana.647a750f1787408bf50088234ec0edd5a6a9b2ac": {
    "aliases": {}
  }
}



The aliases of the indices app-000001, infra-000001 and audit-000001 are different from those in a fresh deployment of logging 4.5: after the upgrade none of the app-write/infra-write/audit-write aliases has is_write_index set, and the .all alias is missing from app-000001 and infra-000001.

In a fresh deployment, the aliases look like this (a sketch of the underlying _aliases call follows at the end of this comment):

$ oc exec elasticsearch-cdm-c3scwxvp-1-5cd78c684d-p5sxk  -- es_util --query=*/_alias |jq
Defaulting container name to elasticsearch.
Use 'oc describe pod/elasticsearch-cdm-c3scwxvp-1-5cd78c684d-p5sxk -n openshift-logging' to see all of the containers in this pod.
{
  "infra-000003": {
    "aliases": {
      ".all": {},
      "infra": {},
      "infra-write": {
        "is_write_index": true
      },
      "logs-infra": {}
    }
  },
  ".security": {
    "aliases": {}
  },
  "infra-000002": {
    "aliases": {
      ".all": {},
      "infra": {},
      "infra-write": {
        "is_write_index": false
      },
      "logs-infra": {}
    }
  },
  "audit-000001": {
    "aliases": {
      "audit": {},
      "audit-write": {
        "is_write_index": true
      },
      "logs-audit": {}
    }
  },
  "infra-000001": {
    "aliases": {
      ".all": {},
      "infra": {},
      "infra-write": {
        "is_write_index": false
      },
      "logs-infra": {}
    }
  },
  ".kibana_1": {
    "aliases": {
      ".kibana": {}
    }
  },
  "app-000001": {
    "aliases": {
      ".all": {},
      "app": {},
      "app-write": {
        "is_write_index": true
      },
      "logs-app": {}
    }
  }
}


Is this an issue?
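
For context, in a fresh deployment the newest generation behind each rollover alias carries "is_write_index": true, as the output above shows. A minimal sketch of how one could inspect, and hypothetically set, that flag by hand through the Elasticsearch _aliases API, assuming the es_util wrapper used above forwards extra curl options (-X, -d) and sets the JSON content type; this is illustrative only, the operator is expected to reconcile the aliases itself:

$ es_pod=elasticsearch-cdm-m2j2lxw9-1-596649ffc8-nrz5k
# List the indices behind the app-write alias and whether any of them carries is_write_index
$ oc exec -c elasticsearch $es_pod -- es_util --query="_alias/app-write" | jq
# Hypothetical manual repair: flag app-000001 as the write index for app-write
$ oc exec -c elasticsearch $es_pod -- es_util --query="_aliases" -X POST \
    -d '{"actions":[{"add":{"index":"app-000001","alias":"app-write","is_write_index":true}}]}'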

Comment 9 Qiaoling Tang 2020-05-13 05:41:15 UTC
The steps in comment 8 were:
1. deploy logging 4.4 on 4.4 cluster
2. upgrade cluster to 4.5
3. upgrade logging to 4.5


I also tried deploying logging 4.4 on a 4.5 cluster and then upgrading logging to 4.5. Sometimes the ES upgrade stays pending because some shards cannot be allocated, and a concrete xxx-write index appears (see the check after the index listing below).

$ oc get pod
NAME                                            READY   STATUS      RESTARTS   AGE
cluster-logging-operator-d857bb6db-2c5dm        1/1     Running     0          6m
curator-1589347800-65xpz                        0/1     Completed   0          6m8s
elasticsearch-cdm-ueugy23c-1-7b8c96466f-w4h4x   2/2     Running     0          5m7s
elasticsearch-cdm-ueugy23c-2-5c89754978-hxv2z   2/2     Running     0          11m
elasticsearch-cdm-ueugy23c-3-7dc85f54-92lsx     2/2     Running     0          10m
fluentd-6v7fg                                   1/1     Running     0          4m35s
fluentd-9pnq6                                   1/1     Running     0          3m55s
fluentd-mfh9h                                   1/1     Running     0          4m20s
fluentd-pr99c                                   1/1     Running     0          5m41s
fluentd-s8q2p                                   1/1     Running     0          4m51s
fluentd-slffp                                   1/1     Running     0          5m17s
kibana-7df8756998-brvd4                         2/2     Running     0          5m26s

$ oc exec elasticsearch-cdm-ueugy23c-1-7b8c96466f-w4h4x -- indices
Defaulting container name to elasticsearch.
Use 'oc describe pod/elasticsearch-cdm-ueugy23c-1-7b8c96466f-w4h4x -n openshift-logging' to see all of the containers in this pod.
Wed May 13 05:35:58 UTC 2020
health status index                                                          uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .searchguard                                                   1OAiG4eQSOyz5GL74oJfjw   1   1          0            0          0              0
green  open   infra-write                                                    NhSZT16YR6uTciqvKMG9dw   5   1          0            0          0              0
green  open   project.qitang.68a63a9e-b51d-47b8-b813-6d01a1182050.2020.05.13 lmLUc02aT06XRdsAnGK1cQ   3   1        314            0          1              0
green  open   .kibana                                                        2YZYX4o3RO27sRpq0AWEnA   1   1          1            0          0              0
green  open   .operations.2020.05.13                                         xBIFKH6nR_auqezlRPhMFw   3   1     361888            0        791            397
yellow open   app-000001                                                     o0otZf1JSTSDB68UB-D2AQ   3   1          0            0          0              0
green  open   .kibana.647a750f1787408bf50088234ec0edd5a6a9b2ac               6ioURtm4Q1ylH2-qfcY43w   1   1          0            0          0              0
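
Note that infra-write appears above as a concrete index (its own UUID, 5 primary shards, 0 docs) rather than as an alias; this is what triggers the invalid_alias_name_exception in the operator logs further down. A quick way to confirm, assuming the same es_util wrapper as in comment 8:

$ es_pod=elasticsearch-cdm-ueugy23c-1-7b8c96466f-w4h4x
# A healthy infra-write would be listed under _cat/aliases, not _cat/indices
$ oc exec -c elasticsearch $es_pod -- es_util --query="_cat/indices/infra-write?v"
$ oc exec -c elasticsearch $es_pod -- es_util --query="_cat/aliases/infra-write?v"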


EO logs:
$ oc logs -n openshift-operators-redhat elasticsearch-operator-f495dc9c5-9r6hd 
{"level":"info","ts":1589347847.5847127,"logger":"cmd","msg":"Go Version: go1.13.8"}
{"level":"info","ts":1589347847.5847335,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
{"level":"info","ts":1589347847.584738,"logger":"cmd","msg":"Version of operator-sdk: v0.8.2"}
{"level":"info","ts":1589347847.5856504,"logger":"leader","msg":"Trying to become the leader."}
{"level":"info","ts":1589347847.7435079,"logger":"leader","msg":"Found existing lock","LockOwner":"elasticsearch-operator-549f7dcfbc-rb6tg"}
{"level":"info","ts":1589347847.7563124,"logger":"leader","msg":"Not the leader. Waiting."}
{"level":"info","ts":1589347848.8197517,"logger":"leader","msg":"Not the leader. Waiting."}
{"level":"info","ts":1589347850.9810865,"logger":"leader","msg":"Became the leader."}
{"level":"info","ts":1589347851.08461,"logger":"cmd","msg":"Registering Components."}
{"level":"info","ts":1589347851.08526,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibana-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1589347851.085431,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"elasticsearch-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1589347851.0856535,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"proxyconfig-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1589347851.0857975,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"kibanasecret-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1589347851.085977,"logger":"kubebuilder.controller","msg":"Starting EventSource","controller":"trustedcabundle-controller","source":"kind source: /, Kind="}
{"level":"info","ts":1589347851.2115493,"logger":"metrics","msg":"Metrics Service object created","Service.Name":"elasticsearch-operator","Service.Namespace":"openshift-operators-redhat"}
{"level":"info","ts":1589347851.211579,"logger":"cmd","msg":"This operator no longer honors the image specified by the custom resources so that it is able to properly coordinate the configuration with the image."}
{"level":"info","ts":1589347851.211611,"logger":"cmd","msg":"Starting the Cmd."}
{"level":"info","ts":1589347852.311824,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"proxyconfig-controller"}
{"level":"info","ts":1589347852.311827,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"kibanasecret-controller"}
{"level":"info","ts":1589347852.3118062,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"trustedcabundle-controller"}
{"level":"info","ts":1589347852.3118715,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"kibana-controller"}
{"level":"info","ts":1589347852.3118157,"logger":"kubebuilder.controller","msg":"Starting Controller","controller":"elasticsearch-controller"}
{"level":"info","ts":1589347852.4119809,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"proxyconfig-controller","worker count":1}
{"level":"info","ts":1589347852.411983,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"elasticsearch-controller","worker count":1}
{"level":"info","ts":1589347852.4120457,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"trustedcabundle-controller","worker count":1}
{"level":"info","ts":1589347852.4119809,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"kibanasecret-controller","worker count":1}
{"level":"info","ts":1589347852.4121447,"logger":"kubebuilder.controller","msg":"Starting workers","controller":"kibana-controller","worker count":1}
time="2020-05-13T05:30:52Z" level=error msg="Error updating &TypeMeta{Kind:Deployment,APIVersion:apps/v1,}: Operation cannot be fulfilled on deployments.apps \"kibana\": the object has been modified; please apply your changes to the latest version and try again"
time="2020-05-13T05:30:52Z" level=info msg="Updating status of Kibana"
time="2020-05-13T05:30:52Z" level=info msg="Kibana status successfully updated"
time="2020-05-13T05:30:53Z" level=info msg="Updating status of Kibana"
time="2020-05-13T05:30:53Z" level=info msg="Kibana status successfully updated"
time="2020-05-13T05:30:53Z" level=info msg="Updating status of Kibana"
time="2020-05-13T05:30:53Z" level=info msg="Kibana status successfully updated"
time="2020-05-13T05:30:53Z" level=info msg="Updating status of Kibana"
time="2020-05-13T05:30:53Z" level=info msg="Kibana status successfully updated"
time="2020-05-13T05:30:54Z" level=info msg="Waiting for cluster to complete recovery: red / green"
time="2020-05-13T05:30:56Z" level=warning msg="Unable to perform synchronized flush: Failed to flush 5 shards in preparation for cluster restart"
time="2020-05-13T05:31:23Z" level=info msg="Updating status of Kibana"
time="2020-05-13T05:31:23Z" level=info msg="Kibana status successfully updated"
time="2020-05-13T05:31:23Z" level=info msg="Kibana status successfully updated"
time="2020-05-13T05:31:33Z" level=info msg="Waiting for cluster to complete recovery: yellow / green"
time="2020-05-13T05:31:33Z" level=info msg="Waiting for cluster to be fully recovered before upgrading elasticsearch-cdm-ueugy23c-2: yellow / green"
time="2020-05-13T05:31:33Z" level=warning msg="Error occurred while updating node elasticsearch-cdm-ueugy23c-2: Cluster not in green state before beginning upgrade: yellow"
time="2020-05-13T05:31:33Z" level=info msg="Waiting for cluster to be fully recovered before upgrading elasticsearch-cdm-ueugy23c-3: yellow / green"
time="2020-05-13T05:31:33Z" level=warning msg="Error occurred while updating node elasticsearch-cdm-ueugy23c-3: Cluster not in green state before beginning upgrade: yellow"
time="2020-05-13T05:31:35Z" level=error msg="Error intializing index for mapping infra: There was an error creating index infra-000001. Error code: true, map[error:map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias root_cause:[map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias type:invalid_alias_name_exception]] type:invalid_alias_name_exception] status:400]"
{"level":"error","ts":1589347895.5920353,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: There was an error creating index infra-000001. Error code: true, map[error:map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias root_cause:[map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias type:invalid_alias_name_exception]] type:invalid_alias_name_exception] status:400]","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-05-13T05:31:36Z" level=info msg="Waiting for cluster to complete recovery: yellow / green"
time="2020-05-13T05:31:37Z" level=error msg="Error intializing index for mapping infra: There was an error creating index infra-000001. Error code: true, map[error:map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias root_cause:[map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias type:invalid_alias_name_exception]] type:invalid_alias_name_exception] status:400]"
{"level":"error","ts":1589347897.9125047,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: There was an error creating index infra-000001. Error code: true, map[error:map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias root_cause:[map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias type:invalid_alias_name_exception]] type:invalid_alias_name_exception] status:400]","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-05-13T05:31:39Z" level=info msg="Waiting for cluster to complete recovery: yellow / green"
time="2020-05-13T05:31:39Z" level=warning msg="Unable to list existing templates in order to reconcile stale ones: There was an error retrieving list of templates. Error code: true, map[error:map[reason:no permissions for [indices:admin/template/get] and User [name=_sg_internal, roles=[]] root_cause:[map[reason:no permissions for [indices:admin/template/get] and User [name=_sg_internal, roles=[]] type:security_exception]] type:security_exception] status:403]"
time="2020-05-13T05:31:40Z" level=error msg="Error intializing index for mapping infra: There was an error retrieving list of indices aliased to infra-write. Error code: true, map[error:map[reason:no permissions for [indices:admin/aliases/get] and User [name=_sg_internal, roles=[]] root_cause:[map[reason:no permissions for [indices:admin/aliases/get] and User [name=_sg_internal, roles=[]] type:security_exception]] type:security_exception] status:403]"
{"level":"error","ts":1589347900.1890833,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: There was an error retrieving list of indices aliased to infra-write. Error code: true, map[error:map[reason:no permissions for [indices:admin/aliases/get] and User [name=_sg_internal, roles=[]] root_cause:[map[reason:no permissions for [indices:admin/aliases/get] and User [name=_sg_internal, roles=[]] type:security_exception]] type:security_exception] status:403]","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-05-13T05:31:41Z" level=info msg="Waiting for cluster to complete recovery: yellow / green"
time="2020-05-13T05:31:42Z" level=error msg="Error intializing index for mapping infra: There was an error creating index infra-000001. Error code: true, map[error:map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias root_cause:[map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias type:invalid_alias_name_exception]] type:invalid_alias_name_exception] status:400]"
{"level":"error","ts":1589347902.222509,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: There was an error creating index infra-000001. Error code: true, map[error:map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias root_cause:[map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias type:invalid_alias_name_exception]] type:invalid_alias_name_exception] status:400]","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-05-13T05:31:43Z" level=info msg="Waiting for cluster to complete recovery: yellow / green"
time="2020-05-13T05:31:45Z" level=error msg="Error intializing index for mapping infra: There was an error retrieving list of indices aliased to infra-write. Error code: true, map[error:map[reason:no permissions for [indices:admin/aliases/get] and User [name=_sg_internal, roles=[]] root_cause:[map[reason:no permissions for [indices:admin/aliases/get] and User [name=_sg_internal, roles=[]] type:security_exception]] type:security_exception] status:403]"
{"level":"error","ts":1589347905.2952812,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: There was an error retrieving list of indices aliased to infra-write. Error code: true, map[error:map[reason:no permissions for [indices:admin/aliases/get] and User [name=_sg_internal, roles=[]] root_cause:[map[reason:no permissions for [indices:admin/aliases/get] and User [name=_sg_internal, roles=[]] type:security_exception]] type:security_exception] status:403]","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-05-13T05:31:46Z" level=info msg="Waiting for cluster to complete recovery: yellow / green"
time="2020-05-13T05:31:47Z" level=error msg="Error intializing index for mapping infra: There was an error creating index infra-000001. Error code: true, map[error:map[reason:no permissions for [indices:admin/create] and User [name=_sg_internal, roles=[]] root_cause:[map[reason:no permissions for [indices:admin/create] and User [name=_sg_internal, roles=[]] type:security_exception]] type:security_exception] status:403]"
{"level":"error","ts":1589347907.8811412,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: There was an error creating index infra-000001. Error code: true, map[error:map[reason:no permissions for [indices:admin/create] and User [name=_sg_internal, roles=[]] root_cause:[map[reason:no permissions for [indices:admin/create] and User [name=_sg_internal, roles=[]] type:security_exception]] type:security_exception] status:403]","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}
time="2020-05-13T05:31:49Z" level=info msg="Waiting for cluster to complete recovery: yellow / green"
time="2020-05-13T05:31:49Z" level=error msg="Error intializing index for mapping infra: There was an error creating index infra-000001. Error code: true, map[error:map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias root_cause:[map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias type:invalid_alias_name_exception]] type:invalid_alias_name_exception] status:400]"
{"level":"error","ts":1589347909.876944,"logger":"kubebuilder.controller","msg":"Reconciler error","controller":"elasticsearch-controller","request":"openshift-logging/elasticsearch","error":"Failed to reconcile IndexMangement for Elasticsearch cluster: There was an error creating index infra-000001. Error code: true, map[error:map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias root_cause:[map[index:infra-write index_uuid:NhSZT16YR6uTciqvKMG9dw reason:Invalid alias name [infra-write], an index exists with the same name as the alias type:invalid_alias_name_exception]] type:invalid_alias_name_exception] status:400]","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/github.com/go-logr/zapr/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:217\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/sigs.k8s.io/controller-runtime/pkg/internal/controller/controller.go:158\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/src/github.com/openshift/elasticsearch-operator/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88"}

Comment 10 Qiaoling Tang 2020-05-13 05:45:04 UTC
elasticsearch status:
  status:
    cluster:
      activePrimaryShards: 17
      activeShards: 32
      initializingShards: 0
      numDataNodes: 3
      numNodes: 3
      pendingTasks: 0
      relocatingShards: 0
      status: yellow
      unassignedShards: 2
    clusterHealth: ""
    conditions: []
    nodes:
    - deploymentName: elasticsearch-cdm-ueugy23c-1
      upgradeStatus:
        scheduledUpgrade: "True"
        underUpgrade: "True"
        upgradePhase: recoveringData
    - deploymentName: elasticsearch-cdm-ueugy23c-2
      upgradeStatus:
        scheduledUpgrade: "True"
        upgradePhase: controllerUpdated
    - deploymentName: elasticsearch-cdm-ueugy23c-3
      upgradeStatus:
        scheduledUpgrade: "True"
        upgradePhase: controllerUpdated
    pods:
      client:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-ueugy23c-1-7b8c96466f-w4h4x
        - elasticsearch-cdm-ueugy23c-2-5c89754978-hxv2z
        - elasticsearch-cdm-ueugy23c-3-7dc85f54-92lsx
      data:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-ueugy23c-1-7b8c96466f-w4h4x
        - elasticsearch-cdm-ueugy23c-2-5c89754978-hxv2z
        - elasticsearch-cdm-ueugy23c-3-7dc85f54-92lsx
      master:
        failed: []
        notReady: []
        ready:
        - elasticsearch-cdm-ueugy23c-1-7b8c96466f-w4h4x
        - elasticsearch-cdm-ueugy23c-2-5c89754978-hxv2z
        - elasticsearch-cdm-ueugy23c-3-7dc85f54-92lsx
    shardAllocationEnabled: all

Comment 11 Periklis Tsirakidis 2020-05-22 07:30:01 UTC
@Qiaoling Tang

This is still on needinfo. Does team-logging need to provide something? Or are you waiting on something else?

Comment 12 Qiaoling Tang 2020-05-25 02:17:23 UTC
No, this should be fixed now; moving to VERIFIED.

Comment 14 errata-xmlrpc 2020-08-04 18:03:25 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (OpenShift Container Platform 4.5 image release advisory), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

