Bug 1416629
| Summary: | [IntService_public_324] Deploy logging with ansible, failed to create ES pod for invalid INSTANCE_RAM env value | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Junqi Zhao <juzhao> |
| Component: | Logging | Assignee: | ewolinet |
| Status: | CLOSED ERRATA | QA Contact: | Junqi Zhao <juzhao> |
| Severity: | medium | Docs Contact: | |
| Priority: | medium | | |
| Version: | 3.5.0 | CC: | aos-bugs, juzhao |
| Target Milestone: | --- | | |
| Target Release: | 3.5.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | No Doc Update |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2017-10-25 13:00:48 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Attachments: | ansible log (attachment 1244600) | | |
Description (Junqi Zhao, 2017-01-26 06:12:02 UTC)

Created attachment 1244600 [details]: ansible log
In https://github.com/openshift/openshift-ansible/tree/master/roles/openshift_logging, openshift_logging_es_memory_limit is documented as "The amount of RAM that should be assigned to ES. Defaults to '1024Mi'." Maybe this error is related to it.

Comment 3 (ewolinet)

Can you please attach the logs for the ES pod that is failing? It should be able to correctly use the default of '1024Mi'.

To answer your above questions:

1) We no longer use a jks generation pod due to issues with it needing to be scheduled on a specific node. A script is now executed on the control host.
2) Possibly; however, we are letting that be handled by the signing tools. It shouldn't impact this working or not working, though.
3) Is that the output from while it is retrying until it sees that Kibana has successfully started up? I'll check to see if its until statement is incorrect...

It looks like everything started up eventually (with the exception of ES)...

Comment (Junqi Zhao)

(In reply to ewolinet from comment #3)
> Can you please attach the logs for the ES pod that is failing? It should be
> able to correctly use the default of '1024Mi'.

Sorry for forgetting to attach the ES pod log when I submitted this defect:

```
# oc logs logging-es-127rf9yo-1-m07kk
INSTANCE_RAM env var is invalid: 1024Mi
```

Comment

Tested with the latest ES 3.5.0 image on the ops registry; the same error as https://bugzilla.redhat.com/show_bug.cgi?id=1419244 now occurs:

```
# oc get po
NAME                          READY     STATUS             RESTARTS   AGE
logging-curator-1-85sb4       1/1       Running            2          7m
logging-es-eft6uu2i-1-hqk3r   0/1       CrashLoopBackOff   6          7m
logging-fluentd-mvpb2         1/1       Running            0          8m
logging-fluentd-tprgq         1/1       Running            0          8m
logging-fluentd-vvvrh         1/1       Running            0          8m
logging-kibana-1-bt7tr        2/2       Running            0          7m

# oc logs logging-es-eft6uu2i-1-hqk3r
Comparing the specificed RAM to the maximum recommended for ElasticSearch...
Inspecting the maximum RAM available...
ES_JAVA_OPTS: '-Dmapper.allow_dots_in_name=true -Xms128M -Xmx512m'
/opt/app-root/src/run.sh: line 141: /usr/share/elasticsearch/bin/elasticsearch: No such file or directory
```

Images tested with:

```
openshift3/logging-elasticsearch   3.5.0   eed2ca51f2ba   9 hours ago   399.2 MB

# openshift version
openshift v3.5.0.17+c55cf2b
kubernetes v1.5.2+43a9be4
etcd 3.1.0
```

Comment

The error "INSTANCE_RAM env var is invalid: 1024Mi" no longer occurs, although the same error as https://bugzilla.redhat.com/show_bug.cgi?id=1419244 still happens. Setting this defect to VERIFIED and closing it.

Comment

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3049
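
The failure pattern above is consistent with a suffix-validation issue: the role's default memory limit '1024Mi' (a Kubernetes-style quantity) appears to be passed into the pod as the INSTANCE_RAM environment variable, and a run.sh check that accepts only plain 'M'/'G' suffixes would reject it. The snippet below is a minimal, hypothetical sketch of such a check and of deriving the JVM heap from it; the variable names, regex, and heap calculation are assumptions for illustration, not the shipped script.

```bash
#!/bin/bash
# Hypothetical sketch of an INSTANCE_RAM check like the one run.sh performs;
# the regex, names, and heap math here are assumptions, not the actual image code.
set -euo pipefail

INSTANCE_RAM="${INSTANCE_RAM:-1024Mi}"

# A pattern that accepted only plain "M"/"G" suffixes would reject the
# Kubernetes-style "1024Mi" produced by the role's default; the optional
# trailing "i" below is what lets "1024Mi" pass.
regex='^([0-9]+)([GgMm])i?$'

if [[ "${INSTANCE_RAM}" =~ $regex ]]; then
    num="${BASH_REMATCH[1]}"
    unit="${BASH_REMATCH[2]}"
    case "${unit,,}" in
        g) ram_mb=$(( num * 1024 )) ;;
        m) ram_mb=$(( num )) ;;
    esac
    # Give the JVM roughly half of the container RAM; "-Xmx512m" for a
    # 1024Mi limit, as seen in the pod log above, is consistent with this.
    heap_mb=$(( ram_mb / 2 ))
    export ES_JAVA_OPTS="-Dmapper.allow_dots_in_name=true -Xms128M -Xmx${heap_mb}m"
    echo "ES_JAVA_OPTS: '${ES_JAVA_OPTS}'"
else
    echo "INSTANCE_RAM env var is invalid: ${INSTANCE_RAM}" >&2
    exit 1
fi
```

Under the same assumption, overriding the limit with a plain suffix (e.g. openshift_logging_es_memory_limit=1024M in the ansible inventory) would only have been a temporary workaround; per the verification note above, the default '1024Mi' eventually worked unchanged.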