The group user can see the app index in Kibana on elasticsearch-operator.4.5.0-202102030632.p0.
Non-OpenShift projects: 151
Active apps: 1
ES resources:
logStore:
  elasticsearch:
    nodeCount: 3
    redundancyPolicy: SingleRedundancy
    resources:
      limits:
        cpu: 1
        memory: 4Gi
      requests:
        cpu: 1
        memory: 4Gi
    storage:
      size: 200G
      storageClassName: gp2
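For reference, one way to confirm the operator actually applied these resource settings is to inspect the deployed Elasticsearch pods. A minimal sketch, assuming the default openshift-logging namespace and the component=elasticsearch pod label (both are assumptions based on a standard OpenShift Logging deployment; <es-pod> is a placeholder):

  # List the Elasticsearch pods created by the operator
  # (label is an assumption based on a default deployment)
  oc get pods -n openshift-logging -l component=elasticsearch

  # Print the resources actually set on the elasticsearch container of one pod;
  # <es-pod> is a placeholder for a pod name from the previous command
  oc get pod <es-pod> -n openshift-logging \
    -o jsonpath='{.spec.containers[?(@.name=="elasticsearch")].resources}'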
I hit a resource limitation in testing: fluentd couldn't send some logs to ES until I reduced the ES memory from 8Gi to 4Gi. I will verify this again on a larger cluster.
2021-02-03 15:56:28 +0000 [warn]: [clo_default_output_es] failed to flush the buffer. retry_time=7 next_retry_seconds=2021-02-03 15:57:35 +0000 chunk="5ba709e712fcc043e6d839acf0061e3f" error_class=Fluent::Plugin::ElasticsearchOutput::RecoverableRequestFailure error="could not push logs to Elasticsearch cluster ({:host=>\"elasticsearch.openshift-logging.svc.cluster.local\", :port=>9200, :scheme=>\"https\", :user=>\"fluentd\", :password=>\"obfuscated\"}): [500] {\"code\":500,\"message\":\"Internal Error\",\"error\":{}}\n"
2021-02-03 15:56:28 +0000 [warn]: suppressed same stacktrace
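When fluentd keeps reporting RecoverableRequestFailure with a [500] from ES like the above, comparing the collector and Elasticsearch logs around the same timestamp usually narrows down the cause. A sketch, again assuming the openshift-logging namespace and default pod labels, with <es-pod> as a placeholder:

  # Watch a collector pod for further failed-flush retries
  oc logs -n openshift-logging -l component=fluentd --tail=50

  # Check the Elasticsearch side for the server error behind the [500] response
  oc logs <es-pod> -n openshift-logging -c elasticsearch | grep -i -A2 error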
The problem was reproduced with elasticsearch-operator.4.5.0-202101230744.p0 in the cluster below, and verified using elasticsearch-operator.4.5.0-202102031005.p0.
Cluster Info:
AWS Clusters
Master: 3 x m3.xlarge
Worker: 4 x m4.4xlarge
ES: nodeCount: 3
Elasticsearch: "cpu": "1", "memory": "16Gi"
Proxy: "cpu": "100m", "memory": "256Mi"
500 groups and non-OpenShift projects
5 applications in 5 projects
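To double-check the verification from the ES side (that the app indices the group user sees in Kibana actually exist and are healthy), one option is the es_util helper shipped in the logging Elasticsearch image. A sketch, with <es-pod> as a placeholder for an actual pod name:

  # List all indices; the app-* indices should be present with a green status
  oc exec <es-pod> -n openshift-logging -c elasticsearch -- es_util --query=_cat/indices?v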
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (OpenShift Container Platform 4.5.31 extras update), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2021:0315