Created attachment 1304065 [details]
es pod, dc info

Description of problem:
The ES pod fails to start; "Could not resolve placeholder 'DC_NAME'" appears in the ES pod log.

Version-Release number of selected component (if applicable):
# openshift version
openshift v3.4.1.44.6
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

Images from the brew registry:
logging-kibana          3.4.1-24       11cf01510399   11 hours ago   338.6 MB
logging-fluentd         3.4.1-22       3e18e38e6c37   11 hours ago   232.7 MB
logging-elasticsearch   3.4.1-37       f456ed538ee6   11 hours ago   400.5 MB
logging-deployer        v3.4.1.44.6-2  681cc9fc6f62   12 hours ago   856.9 MB
logging-auth-proxy      3.4.1-26       8ebe1898f497   5 days ago     215.3 MB
logging-curator         3.4.1-20       fa1520d5c994   3 weeks ago    244.5 MB

# oc get po
NAME                              READY   STATUS             RESTARTS   AGE
logging-curator-1-j6o35           1/1     Running            5          22m
logging-curator-ops-1-pmed7       1/1     Running            5          22m
logging-deployer-rps67            0/1     Completed          0          23m
logging-es-lhgss3sq-1-b7zfh       0/1     CrashLoopBackOff   9          22m
logging-es-ops-3sc1j55c-1-bfngx   0/1     CrashLoopBackOff   9          22m
logging-fluentd-dxmul             1/1     Running            0          22m
logging-kibana-1-7e747            2/2     Running            0          22m
logging-kibana-ops-1-to9ch        2/2     Running            0          22m

# oc logs logging-es-lhgss3sq-1-b7zfh
[2017-07-25 06:28:11,918][INFO ][container.run ] Begin Elasticsearch startup script
[2017-07-25 06:28:11,948][INFO ][container.run ] Comparing the specified RAM to the maximum recommended for Elasticsearch...
[2017-07-25 06:28:11,949][INFO ][container.run ] Inspecting the maximum RAM available...
[2017-07-25 06:28:11,958][INFO ][container.run ] ES_HEAP_SIZE: '4096m'
[2017-07-25 06:28:11,962][INFO ][container.run ] Checking if Elasticsearch is ready on https://localhost:9200
Exception in thread "main" java.lang.IllegalArgumentException: Could not resolve placeholder 'DC_NAME'
    at org.elasticsearch.common.property.PropertyPlaceholder.parseStringValue(PropertyPlaceholder.java:128)
    at org.elasticsearch.common.property.PropertyPlaceholder.replacePlaceholders(PropertyPlaceholder.java:81)
    at org.elasticsearch.common.settings.Settings$Builder.replacePropertyPlaceholders(Settings.java:1179)
    at org.elasticsearch.node.internal.InternalSettingsPreparer.initializeSettings(InternalSettingsPreparer.java:131)
    at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:100)
    at org.elasticsearch.common.cli.CliTool.<init>(CliTool.java:107)
    at org.elasticsearch.common.cli.CliTool.<init>(CliTool.java:100)
    at org.elasticsearch.bootstrap.BootstrapCLIParser.<init>(BootstrapCLIParser.java:48)
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:242)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.

How reproducible:
Always

Steps to Reproduce:
1. Deploy logging 3.4.1

Actual results:
The ES pod fails to start; "Could not resolve placeholder 'DC_NAME'" appears in the ES pod log.

Expected results:
All pods should be healthy.

Additional info:
ES dc info is attached.
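For context: at startup, Elasticsearch resolves `${...}` placeholders in `elasticsearch.yml` against JVM system properties and environment variables, and aborts if one cannot be resolved. The stack trace suggests the config shipped in the logging-elasticsearch image references a `DC_NAME` placeholder that is expected to be injected as an environment variable by the DC. A hypothetical fragment (the exact setting key is an assumption, not taken from the image):

```yaml
# Hypothetical elasticsearch.yml fragment. If the DC_NAME environment
# variable is not set on the pod, placeholder resolution fails at startup
# with "Could not resolve placeholder 'DC_NAME'".
node:
  name: ${DC_NAME}
```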
Was this deployment installed or upgraded using the deployer from https://bugzilla.redhat.com/show_bug.cgi?id=1470368?
(In reply to Jeff Cantrill from comment #2)
> Was this deployment installed or upgraded using deployer from
> https://bugzilla.redhat.com/show_bug.cgi?id=1470368

No. The latest 3.4.1 deployer version was v3.4.1.44.6-2 when this defect was tested; the deployer version in BZ #1470368 is v3.4.1.44.4-2.
Please try with the latest deployer.
(In reply to Jeff Cantrill from comment #4)
> Please try with the latest deployer.

Isn't v3.4.1.44.6-2 newer than v3.4.1.44.4-2? The latest deployer version is v3.4.1.44.6-2, which was used when this defect was reported. v3.4.1.44.4-2 was built 12 days ago; v3.4.1.44.6-2 was built 30 hours ago.
@Junqi,

Can you please provide more information other than 'it is not working and is broken'? Can you attach:

* The logs from the deployer pod
* The DCs for the Elasticsearch nodes

Additionally, you can work around this issue by manually editing the DC to add the environment variable 'DC_NAME', set to the name of the DeploymentConfig.
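The workaround above can be applied from the CLI, for example (a sketch against this report's environment; the DC names come from the `oc get po` output in comment #0 and must be adjusted for other clusters):

```shell
# Add DC_NAME to each Elasticsearch DeploymentConfig, set to the DC's own name.
oc set env dc/logging-es-lhgss3sq DC_NAME=logging-es-lhgss3sq
oc set env dc/logging-es-ops-3sc1j55c DC_NAME=logging-es-ops-3sc1j55c
```

If the DCs carry the usual ConfigChange trigger, the env change rolls out new pods automatically; otherwise a manual redeploy of each DC is needed.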
Also, was this a new deployment or an upgrade? Comment #3 does not answer the question definitively.
(In reply to Jeff Cantrill from comment #7)
> @Junqi,
>
> Can you please provide more information other then 'it is not working and is
> broken'? Can you attach:
>
> * The logs from the deployer pod
> * The DC's from the Elasticnodes
>
> Additionally, you can work around this issue by manually editing the DC to
> add the environment variable 'DC_NAME' and setting it to the name of the
> DeploymentConfig

The dc output and deployer pod log are in the attached file. Although we can apply the workaround, we still need to test with a healthy deployer before it is released.
Created attachment 1305106 [details] es dc info and deployer pod log, used deployer version v3.4.1.44.6-2
(In reply to Jeff Cantrill from comment #8)
> Also, was this a new deployment or an upgrade? Comment #3 does not answer
> the question definitively.

It is a new deployment; I already mentioned it in comment #0:

Steps to Reproduce:
1. Deploy logging 3.4.1
Tested with logging-deployer:v3.4.1.44.6-3; the ES pods start up now.

Testing environment:
# openshift version
openshift v3.4.1.44.6
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

Images from the brew registry:
logging-deployer v3.4.1.44.6-3
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:3049