Bug 1420217
Summary: Encounter ConnectException and some other exceptions in ES log after deploying logging 3.5.0 stacks

Product: OpenShift Container Platform
Component: Logging
Version: 3.5.0
Status: CLOSED ERRATA
Severity: low
Priority: medium
Reporter: Junqi Zhao <juzhao>
Assignee: Jeff Cantrill <jcantril>
QA Contact: Xia Zhao <xiazhao>
CC: aos-bugs, jcantril, juzhao, lmeyer, pportant, pweil, rmeggins, smunilla, xiazhao, xtian
Keywords: Reopened
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Doc Type: No Doc Update
Type: Bug
Last Closed: 2017-08-10 05:17:28 UTC
Description
Junqi Zhao
2017-02-08 08:37:32 UTC
Created attachment 1248559 [details]
ES_LOG_journald.txt is the ES log captured when using the journald logging driver
Is this a typical size node you normally test with? The amount of allocated memory seems significantly small at 488m of max heap.

Tested on GCE, vm_type is n1-standard-2, memory is 7.5 GB, and this error does not happen with logging 3.4.

Closing, as I believe this is related to lack of memory resources.

Jeff, there are two scenarios in this defect:

1. Using json-file as the logging driver and deploying logging with ansible: ConnectException, SSLException, and OutOfMemoryError appear in the ES log. First ConnectException shows, a few minutes later SSLException shows and the SSL connection is closed, and finally "java.lang.OutOfMemoryError: Java heap space" shows.
2. Using journald as the logging driver and deploying logging with ansible: first ConnectException shows, and a few minutes later CircuitBreakingException shows.

Based on your comments, I think you are talking about scenario 1, and the machine memory is 7.5 GB. How can we increase the heap size for ES? I have always thought the heap size is allocated by ES, so if a customer comes across this error, how can they increase the heap size themselves? For scenario 2, do we know why ConnectException and CircuitBreakingException happen?

In both ES log entries, at the beginning, ES emits a line saying how much HEAP is in use:

[2017-02-08 06:31:12,271][INFO ][env] [Saint Elmo] heap size [471.7mb], compressed ordinary object pointers [true]

I see this in both cases. Something is telling ES to use a tiny amount of memory. In a 7.5 GB VM instance, I would think giving ES 3.5 GB of HEAP, maybe 4 GB of HEAP, would be okay, but not great. If you can, run the VM with at least 16 GB of RAM so that it can use about 8 GB of HEAP for normal operation. Watch for Field Data size growth, and know that if you start seeing "monitor.jvm" log messages from Elasticsearch you are likely to be running out of HEAP soon. Once you see Java OOM messages, Elasticsearch is usually toast.

Peter, I tested on a 7.5 GB VM instance when this defect was filed. In both ES log entries we can find:

ES_JAVA_OPTS: '-Dmapper.allow_dots_in_name=true -Xms128M -Xmx488m'

If we want to increase the heap size, I think we can change -Xmx to a bigger number. There are two questions I am confused about:
1. Is -Xmx488m defined by the program? If so, we cannot change this value once logging is already deployed.
2. These errors don't happen with logging 3.4, although its -Xmx is only a little bigger than in logging 3.5.

Logging 3.4 ES log:

Comparing the specificed RAM to the maximum recommended for ElasticSearch...
Inspecting the maximum RAM available...
ES_JAVA_OPTS: '-Dmapper.allow_dots_in_name=true -Xms128M -Xmx512m'
Checking if Elasticsearch is ready on https://localhost:9200 ...
[2017-02-10 06:39:10,829][INFO ][node] [Skull the Slayer] version[2.4.1], pid[1], build[945a6e0/2016-11-17T20:39:42Z]
[2017-02-10 06:39:10,940][INFO ][node] [Skull the Slayer] initializing ...
[2017-02-10 06:39:14,360][INFO ][plugins] [Skull the Slayer] modules [reindex, lang-expression, lang-groovy], plugins [search-guard-ssl, openshift-elasticsearch, cloud-kubernetes, search-guard-2], sites []
[2017-02-10 06:39:14,496][INFO ][env] [Skull the Slayer] using [1] data paths, mounts [[/elasticsearch/persistent (/dev/mapper/rhel-root)]], net usable_space [7.8gb], net total_space [9.9gb], spins? [possibly], types [xfs]
[2017-02-10 06:39:14,496][INFO ][env] [Skull the Slayer] heap size [495.3mb], compressed ordinary object pointers [true]

The amount of memory is calculated based on: https://github.com/openshift/origin-aggregated-logging/blob/master/elasticsearch/run.sh#L27. Feel free to reopen this issue if you feel it's a problem. The way memory is allocated has not changed between 3.4 and 3.5.

I think it would help to attach comparable DC, RC, or Pod JSON so we could see from what values it is making this calculation. Since we moved to ansible in 3.4, maybe we are passing a value incorrectly.

So in the attached logs, I see the following:

"Setting the maximum allowable RAM to 976m which is the largest amount available"

This is emitted at: https://github.com/openshift/origin-aggregated-logging/blob/master/elasticsearch/run.sh#L51. Something has set the cgroup limit to 978MB, even though the configuration is probably for more.

Due to the defect https://bugzilla.redhat.com/show_bug.cgi?id=1421563, only ConnectException shows in the ES log now; it is the same exception as when this defect was filed. Since the ConnectException is not fixed, I have to reopen this defect. Error trace:

[2017-02-13 04:18:52,878][INFO ][transport] [Mister Machine] Using [com.floragunn.searchguard.ssl.transport.SearchGuardSSLNettyTransport] as transport, overridden by [search-guard-ssl]
[2017-02-13 04:18:52,994][INFO ][client.transport] [Mister Machine] failed to connect to node [{#transport#-1}{127.0.0.1}{127.0.0.1:9300}], removed from nodes list
ConnectTransportException[[][127.0.0.1:9300] connect_timeout[30s]]; nested: ConnectException[Connection refused: localhost/127.0.0.1:9300];
    at org.elasticsearch.transport.netty.NettyTransport.connectToChannelsLight(NettyTransport.java:967)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:933)
    at org.elasticsearch.transport.netty.NettyTransport.connectToNodeLight(NettyTransport.java:906)
    at org.elasticsearch.transport.TransportService.connectToNodeLight(TransportService.java:267)
    at org.elasticsearch.client.transport.TransportClientNodesService$SimpleNodeSampler.doSample(TransportClientNodesService.java:390)
    at org.elasticsearch.client.transport.TransportClientNodesService$NodeSampler.sample(TransportClientNodesService.java:336)
    at org.elasticsearch.client.transport.TransportClientNodesService.addTransportAddresses(TransportClientNodesService.java:187)
    at org.elasticsearch.client.transport.TransportClient.addTransportAddress(TransportClient.java:243)
    at io.fabric8.elasticsearch.plugin.acl.DynamicACLFilter.<init>(DynamicACLFilter.java:166)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.elasticsearch.common.inject.DefaultConstructionProxyFactory$1.newInstance(DefaultConstructionProxyFactory.java:50)
    at org.elasticsearch.common.inject.ConstructorInjector.construct(ConstructorInjector.java:86)
    at org.elasticsearch.common.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:104)
    at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:47)
    at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:886)
    at org.elasticsearch.common.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:43)
    at org.elasticsearch.common.inject.Scopes$1$1.get(Scopes.java:59)
    at org.elasticsearch.common.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:46)
    at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:201)
    at org.elasticsearch.common.inject.InjectorBuilder$1.call(InjectorBuilder.java:193)
    at org.elasticsearch.common.inject.InjectorImpl.callInContext(InjectorImpl.java:879)
    at org.elasticsearch.common.inject.InjectorBuilder.loadEagerSingletons(InjectorBuilder.java:193)
    at org.elasticsearch.common.inject.InjectorBuilder.injectDynamically(InjectorBuilder.java:175)
    at org.elasticsearch.common.inject.InjectorBuilder.build(InjectorBuilder.java:110)
    at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:93)
    at org.elasticsearch.common.inject.Guice.createInjector(Guice.java:70)
    at org.elasticsearch.common.inject.ModulesBuilder.createInjector(ModulesBuilder.java:46)
    at org.elasticsearch.node.Node.<init>(Node.java:213)
    at org.elasticsearch.node.Node.<init>(Node.java:140)
    at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:143)
    at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:194)
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:286)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:45)
Caused by: java.net.ConnectException: Connection refused: localhost/127.0.0.1:9300
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.jboss.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
    at org.jboss.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
    at org.jboss.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
    at org.jboss.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

The full ES log is attached.

Created attachment 1249753 [details]
ConnectException In ES Log
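For illustration only, here is a minimal sketch (not the actual run.sh) of how a container entrypoint can derive the ES max heap from the RAM the container actually sees; the cgroup v1 path and the "use half of the limit" rule are assumptions inferred from the numbers quoted above.

# Illustrative sketch: compute -Xmx from the container's cgroup memory limit.
limit_bytes=$(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)
limit_mb=$((limit_bytes / 1024 / 1024))
heap_mb=$((limit_mb / 2))
echo "Setting the maximum allowable RAM to ${limit_mb}m"
export ES_JAVA_OPTS="-Dmapper.allow_dots_in_name=true -Xms128M -Xmx${heap_mb}m"

With the 978 MB cgroup limit seen in the attached logs, a calculation of this shape yields roughly the -Xmx488m observed, regardless of how much RAM the VM itself has.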
I have seen connection issues like this when the cluster is coming up, or when one of the ES pods is stopped while some of the other ES nodes/pods in the cluster are still running. I think this is a transient error and should go away, and at any rate it should not affect data delivery. However, if you can correlate this error with missing data or other problems, that would be a cause for concern.

The log still shows that this instance is running ES with a Java heap set to about 500 MB. That is too small to be of any use or supportable. Can you reproduce this with a Java HEAP of at least 4 GB, perhaps even 8 GB?

@Peter, tested on AWS with vm_type m4.xlarge (16 GB of memory); the ES log still shows -Xmx512m. I checked the ES dc:

  resources:
    limits:
      memory: 1Gi
    requests:
      memory: 512Mi

From https://github.com/openshift/origin-aggregated-logging/blob/master/elasticsearch/run.sh#L24, the amount of RAM allocated should be half of the available instance RAM, so I understand why the system set -Xmx to 512m. If we want to change the heap size, we should change the memory limits in the ES dc and then scale it up. Because of the defect https://bugzilla.redhat.com/show_bug.cgi?id=1421563, I will try to verify this defect later. And I want to emphasize that this defect does not happen with logging 3.4, and we never changed the memory limits in the ES dc.

Attached ES pod log and ES dc log.

Created attachment 1250122 [details]
es pod log
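As a minimal sketch of the "change the memory limits in the ES dc and then scale it up" approach described above: the dc name is a placeholder, and the container name "elasticsearch" and the logging namespace are assumptions about this deployment, so adjust them to what oc get dc shows.

# Raise the ES memory limit on the deploymentconfig (placeholder name), then
# scale the dc down and back up so run.sh recomputes the heap from the new limit.
oc patch dc/logging-es-xxxxxxxx -n logging -p \
  '{"spec":{"template":{"spec":{"containers":[{"name":"elasticsearch","resources":{"limits":{"memory":"8Gi"}}}]}}}}'
oc scale dc/logging-es-xxxxxxxx -n logging --replicas=0
oc scale dc/logging-es-xxxxxxxx -n logging --replicas=1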
Created attachment 1250123 [details]
es dc log
Created attachment 1250124 [details]
es pod log
Not sure if ansible changed this or changes this, but here is the old deployer/templates/es.yaml dc template:

  resources:
    limits:
      memory: "${INSTANCE_RAM}i"
    requests:
      memory: "512Mi"

What is the INSTANCE_RAM env var setting for the ES pod? oc env pod $espod --list

The INSTANCE_RAM setting comes from the ES_INSTANCE_RAM parameter in the old deployer. INSTANCE_RAM defaults to 512G in the es Dockerfile.

The current ansible code has this in es.j2:

  limits:
    memory: "{{es_memory_limit}}"
{% if es_cpu_limit is defined and es_cpu_limit is not none %}
    cpu: "{{es_cpu_limit}}"
{% endif %}
  requests:
    memory: "512Mi"

With ansible this value comes from the variable openshift_logging_es_memory_limit, which has a default value of 1024Mi.

So, to be compatible with 3.4, the new ansible code should set the default to 8Gi.

Default 3.5 to be same as 3.4: https://github.com/openshift/openshift-ansible/pull/3381

Commits pushed to master at https://github.com/openshift/openshift-ansible
https://github.com/openshift/openshift-ansible/commit/de9b80326b8a9e2c600cdaa5d25fcebf38f840a3
bug 1420217. Default ES memory to be compariable to 3.4 deployer
https://github.com/openshift/openshift-ansible/commit/43330fbd7cb90491d63e54430394a309f7a41f73
Merge pull request #3381 from jcantrill/bz_1420217_default_es_memory
bug 1420217. Default ES memory to be compariable to 3.4 deployer

Is this issue resolved, or do you have additional logs you can provide?

This issue is not resolved; because of the defect BZ #1420219, I don't have additional logs to provide now.

Can you reproduce and provide additional logs, or close this issue if it is resolved?

@Jeff Besides the ES pod log, what other kinds of logs do you want me to provide? Then I can provide them all to you.

Have you reproduced this problem with at least 8 GB of instance RAM, or even 16 GB of instance RAM, for both the ES pod and the ES ops pods? Reading through all the comments, it is not clear to me that it was demonstrated how the problem was reproduced in those environments, or where the logs from those reproducers were provided showing the same failure as originally reported.

@Jeff @Peter Deployed the logging stack via ansible on a machine with 8 GB of RAM; the ES dc, rc, and pod logs are attached. I think the java.net.ConnectException does not have anything to do with memory size.

ansible inventory file:

[oo_first_master]
$master ansible_user=root ansible_ssh_user=root ansible_ssh_private_key_file="./libra-new.pem" openshift_public_hostname=$master

[oo_first_master:vars]
deployment_type=openshift-enterprise
openshift_release=v3.5.0
openshift_logging_install_logging=true
openshift_logging_kibana_hostname=kibana.$sub-domain
openshift_logging_kibana_ops_hostname=kibana-ops.$sub-domain
public_master_url=https://$master:$port
openshift_logging_fluentd_hosts=$node
openshift_logging_fluentd_use_journal=true
openshift_logging_use_ops=false
openshift_logging_image_prefix=registry.ops.openshift.com/openshift3/
openshift_logging_image_version=3.5.0
openshift_logging_namespace=$namespace

Please check the attached logs; if you want me to provide more info, please let me know.

Created attachment 1256773 [details]
es rc info - 20170223
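As a hedged sketch of how the openshift_logging_es_memory_limit variable discussed above could be overridden at deployment time: the inventory file name "hosts" is a placeholder and the playbook path is an assumption that may differ in this openshift-ansible release.

# Append the ES memory-limit override to the inventory shown above, then
# re-run the logging playbook (playbook path is an assumption; adjust as needed).
cat >> hosts <<'EOF'
openshift_logging_es_memory_limit=8Gi
EOF
ansible-playbook -i hosts \
  /usr/share/ansible/openshift-ansible/playbooks/byo/openshift-cluster/openshift-logging.yml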
Created attachment 1256776 [details]
es dc info - 20170223
Created attachment 1256777 [details]
es pod log - 20170223
(In reply to Junqi Zhao from comment #33)
> @Jeff @Peter
> Deployed the logging stack via ansible on a machine with 8 GB of RAM; the ES dc, rc,
> and pod logs are attached. I think the java.net.ConnectException does not have anything
> to do with memory size.
...
> Please check the attached logs; if you want me to provide more info, please let me know.

It seems like it really is the case that nobody is listening on port 9300 at address "localhost". Can you capture the /etc/elasticsearch/elasticsearch.yml file that the ES instance is using? And if possible, see what ports are being listened on in that container?

See the attached elasticsearch.yml. I searched with `netstat -anp | grep 9300`; there was no output, so port 9300 was not in use. I also compared with the 3.4.1 ES pod log, where it is ES_INTERNAL_IP:9300:

# oc logs logging-es-xluc1lps-1-cjsqk
[2017-02-24 07:24:01,337][INFO ][cluster.service] [Micromax] new_master {Micromax}{Dt5C0vZ2QgmcgtmK5cWX8g}{10.2.2.44}{10.2.2.44:9300}{master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
# oc get po -o wide
NAME                          READY     STATUS    RESTARTS   AGE       IP          NODE
logging-es-xluc1lps-1-cjsqk   1/1       Running   0          15m       10.2.2.44   ip-172-18-9-248.ec2.internal

But in the 3.5.0 log it is 127.0.0.1:9300, and I think this causes the ConnectException:

[2017-02-24 06:09:36,525][INFO ][transport] [Madame Masque] Using [com.floragunn.searchguard.ssl.transport.SearchGuardSSLNettyTransport] as transport, overridden by [search-guard-ssl]
[2017-02-24 06:09:36,700][INFO ][client.transport] [Madame Masque] failed to connect to node [{#transport#-1}{127.0.0.1}{127.0.0.1:9300}], removed from nodes list

Hopefully this info helps.

Created attachment 1257166 [details]
elasticsearch.yml
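As a quick way to gather the same information from outside the pod, here is a minimal sketch; the pod name is a placeholder, and it assumes netstat and grep are available in the image (ss can substitute for netstat if not).

# List listening sockets for the HTTP/transport ports and show whatever
# network/transport settings the loaded elasticsearch.yml contains.
oc exec logging-es-xxxxxxxx -n logging -- netstat -lntp
oc exec logging-es-xxxxxxxx -n logging -- grep -E '^(network|transport)' /etc/elasticsearch/elasticsearch.yml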
Created attachment 1257167 [details]
3.4.1_es_pod_info.txt
Created attachment 1257168 [details]
3.5.0_es_pod_info.txt
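To confirm which transport address the node publishes (the pod IP as in the 3.4.1 log above, or 127.0.0.1 as in 3.5.0), the ES nodes API can be queried from the pod. This is a sketch only: the certificate paths under /etc/elasticsearch/secret and the availability of curl in the image are assumptions.

# Ask the local node for its published addresses over the secured HTTP port.
oc exec logging-es-xxxxxxxx -n logging -- curl -s \
  --cacert /etc/elasticsearch/secret/admin-ca \
  --cert /etc/elasticsearch/secret/admin-cert \
  --key /etc/elasticsearch/secret/admin-key \
  'https://localhost:9200/_nodes/_local?pretty' | grep -E 'transport_address|publish_address'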
Agreed, the difference in the logs would appear to indicate the 3.5 instance is trying to connect to the other ES instance as 127.0.0.1 instead of its kube pod IP. I wonder if our ES-kube clustering plugin needs a tweak?

@Jeff, Tested again, and ES still throws the "java.net.ConnectException" error. From your fix, OSE_ES_VER should be 2.4.4.2, but OSE_ES_VER is still 2.4.4.1:

# oc rsh logging-es-r4dazx42-1-brfv7
sh-4.2$ env | grep VER
SG_VER=2.4.4.10
SG_SSL_VER=2.4.4.19
OSE_ES_VER=2.4.4.1
RECOVER_EXPECTED_NODES=1
ES_VER=2.4.4
RECOVER_AFTER_NODES=0
ES_CLOUD_K8S_VER=2.4.4
RECOVER_AFTER_TIME=5m
JAVA_VER=1.8.0

Do we need to rebuild the image?

# docker images | grep logging
openshift3/logging-curator         3.5.0   8cfcb23f26b6   2 days ago    211.1 MB
openshift3/logging-elasticsearch   3.5.0   d715f4d34ad4   3 weeks ago   399.2 MB
openshift3/logging-kibana          3.5.0   e0ab09c2cbeb   5 weeks ago   342.9 MB
openshift3/logging-fluentd         3.5.0   47057624ecab   5 weeks ago   233.1 MB
openshift3/logging-auth-proxy      3.5.0   139f7943475e   6 weeks ago   220 MB

I'm working to move it into the OCP repos. It is already merged in origin.

12706216 buildContainer (noarch) completed successfully

koji_builds:
https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=542604

repositories:
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:rhaos-3.5-rhel-7-docker-candidate-20170307134801
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:3.5.0-6
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:3.5.0
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:latest
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:v3.5

@Jeff, Tested with logging-elasticsearch:3.5.0-6; ES still throws java.net.ConnectException, see the attached ES log.

# oc rsh logging-es-8hkuwh00-1-372qg
sh-4.2$ env | grep VER
SG_VER=2.4.4.10
SG_SSL_VER=2.4.4.19
OSE_ES_VER=2.4.4.2
RECOVER_EXPECTED_NODES=1
ES_VER=2.4.4
RECOVER_AFTER_NODES=0
ES_CLOUD_K8S_VER=2.4.4
RECOVER_AFTER_TIME=5m
JAVA_VER=1.8.0

Created attachment 1261100 [details]
es pod log, still have java.net.ConnectException
The build in #47 did not include the plugin fix, which was not tagged into the correct channels. Awaiting the build service to become available.

Please try this one:

koji_builds:
https://brewweb.engineering.redhat.com/brew/buildinfo?buildID=543761

repositories:
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:rhaos-3.5-rhel-7-docker-candidate-20170313104001
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:3.5.0-9
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:3.5.0
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:latest
brew-pulp-docker01.web.prod.ext.phx2.redhat.com:8888/openshift3/logging-elasticsearch:v3.5

Blocked by https://bugzilla.redhat.com/show_bug.cgi?id=1431935; will verify this after BZ #1431935 gets fixed.

Verified with logging-elasticsearch:3.5.0-9; ES still throws java.net.ConnectException, see the attached ES log.

# oc rsh logging-es-h34lhvtg-1-0l536
sh-4.2$ env | grep VER
SG_VER=2.4.4.10
SG_SSL_VER=2.4.4.19
OSE_ES_VER=2.4.4.2
RECOVER_EXPECTED_NODES=1
ES_VER=2.4.4
RECOVER_AFTER_NODES=0
ES_CLOUD_K8S_VER=2.4.4
RECOVER_AFTER_TIME=5m
JAVA_VER=1.8.0

Created attachment 1264701 [details]
es pod log, java.net.ConnectException
The logging stacks are deployed in the environment mentioned in Comment 59. The ES pod logs are attached; you can also check directly on these machines under the logging project.

Created attachment 1266813 [details]
es pod log - 20170228
Yes, it is a transient exception and it does not affect the overall logging function, but from a user's perspective it is not friendly.

Lowering the priority as per #64; the stack is functional and this does not appear to block anything.

Junqi, can you edit the logging-es#logging.yml configmap to make the root logger 'DEBUG' and provide additional logs for further investigation?

The log level is DEBUG for both es and es-ops; see the attached log files.

Created attachment 1267685 [details]
es pod log - 20170331
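For completeness, a minimal sketch of the debug-logging change requested above; the configmap name, the logging.yml key, and the pod label selector are assumptions about this release's layout, so adjust them to what exists in the namespace.

# Edit the ES logging configmap so the root logger is DEBUG, then restart the
# ES pods so they pick up the updated logging.yml (assumed names/labels).
oc edit configmap/logging-elasticsearch -n logging   # set the root logger level to DEBUG under the logging.yml key
oc delete pod -l component=es -n logging             # the dc recreates the pods with the new config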
Created attachment 1267686 [details]
es-ops pod log - 20170331
Fixed upstream with: https://github.com/fabric8io/openshift-elasticsearch-plugin/pull/77

Commit pushed to master at https://github.com/openshift/origin-aggregated-logging
https://github.com/openshift/origin-aggregated-logging/commit/e77c62bb3c0bfb046db504753c6b4d4c02ffbe85
bug 1420217. Update ES plugin that squashes stack on start

It is fixed with the latest v3.6 image, v3.6.143-2. Uploaded the full_es_log_latest_3.6; no ConnectException or other exceptions were observed, so setting to verified.

Images tested with:
logging-elasticsearch   v3.6.143-2   ca1c9074bf99   6 hours ago   404.7 MB
logging-elasticsearch   v3.6         ca1c9074bf99   6 hours ago   404.7 MB

Created attachment 1297356 [details]
es_log_latest_3.6
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:1716 |