Bug 1474689 - Failed to start ES pod, "Could not resolve placeholder 'DC_NAME'" in ES pod log
Summary: Failed to start ES pod, "Could not resolve placeholder 'DC_NAME'" in ES pod log
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Logging
Version: 3.4.1
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: ---
Target Release: 3.4.z
Assignee: Jeff Cantrill
QA Contact: Junqi Zhao
URL:
Whiteboard:
Depends On:
Blocks: 1449378 1465464 1468987 1470368
 
Reported: 2017-07-25 07:24 UTC by Junqi Zhao
Modified: 2017-10-25 13:04 UTC (History)
4 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: Modified the Elasticsearch configuration to persist the ACL documents to an index based upon the DeploymentConfig name.
Reason: Initial ACL seeding is really only needed once. When the seeding was keyed on the hostname (i.e. the pod name), it had to be performed every time a pod was redeployed. Users would sometimes be left with an unusable logging cluster because ES was trying to rebalance its indices and the reseeding operation responded slowly.
Result: More consistent access to the ES cluster.
Clone Of:
Environment:
Last Closed: 2017-10-25 13:04:36 UTC
Target Upstream Version:
Embargoed:


Attachments (Terms of Use)
es pod, dc info (12.33 KB, text/plain)
2017-07-25 07:24 UTC, Junqi Zhao
es dc info and deployer pod log, used deployer version v3.4.1.44.6-2 (62.29 KB, text/plain)
2017-07-27 00:24 UTC, Junqi Zhao


Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2017:3049 0 normal SHIPPED_LIVE OpenShift Container Platform 3.6, 3.5, and 3.4 bug fix and enhancement update 2017-10-25 15:57:15 UTC

Description Junqi Zhao 2017-07-25 07:24:31 UTC
Created attachment 1304065 [details]
es pod, dc info

Description of problem:
Failed to start ES pod, "Could not resolve placeholder 'DC_NAME'" in ES pod log

Version-Release number of selected component (if applicable):
# openshift version
openshift v3.4.1.44.6
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

Images from brew registry
logging-kibana          3.4.1-24            11cf01510399        11 hours ago        338.6 MB
logging-fluentd         3.4.1-22            3e18e38e6c37        11 hours ago        232.7 MB
logging-elasticsearch   3.4.1-37            f456ed538ee6        11 hours ago        400.5 MB
logging-deployer        v3.4.1.44.6-2       681cc9fc6f62        12 hours ago        856.9 MB
logging-auth-proxy      3.4.1-26            8ebe1898f497        5 days ago          215.3 MB
logging-curator         3.4.1-20            fa1520d5c994        3 weeks ago         244.5 MB

# oc get po
NAME                              READY     STATUS             RESTARTS   AGE
logging-curator-1-j6o35           1/1       Running            5          22m
logging-curator-ops-1-pmed7       1/1       Running            5          22m
logging-deployer-rps67            0/1       Completed          0          23m
logging-es-lhgss3sq-1-b7zfh       0/1       CrashLoopBackOff   9          22m
logging-es-ops-3sc1j55c-1-bfngx   0/1       CrashLoopBackOff   9          22m
logging-fluentd-dxmul             1/1       Running            0          22m
logging-kibana-1-7e747            2/2       Running            0          22m
logging-kibana-ops-1-to9ch        2/2       Running            0          22m

# oc logs logging-es-lhgss3sq-1-b7zfh
[2017-07-25 06:28:11,918][INFO ][container.run            ] Begin Elasticsearch startup script
[2017-07-25 06:28:11,948][INFO ][container.run            ] Comparing the specified RAM to the maximum recommended for Elasticsearch...
[2017-07-25 06:28:11,949][INFO ][container.run            ] Inspecting the maximum RAM available...
[2017-07-25 06:28:11,958][INFO ][container.run            ] ES_HEAP_SIZE: '4096m'
[2017-07-25 06:28:11,962][INFO ][container.run            ] Checking if Elasticsearch is ready on https://localhost:9200
Exception in thread "main" java.lang.IllegalArgumentException: Could not resolve placeholder 'DC_NAME'
    at org.elasticsearch.common.property.PropertyPlaceholder.parseStringValue(PropertyPlaceholder.java:128)
    at org.elasticsearch.common.property.PropertyPlaceholder.replacePlaceholders(PropertyPlaceholder.java:81)
    at org.elasticsearch.common.settings.Settings$Builder.replacePropertyPlaceholders(Settings.java:1179)
    at org.elasticsearch.node.internal.InternalSettingsPreparer.initializeSettings(InternalSettingsPreparer.java:131)
    at org.elasticsearch.node.internal.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:100)
    at org.elasticsearch.common.cli.CliTool.<init>(CliTool.java:107)
    at org.elasticsearch.common.cli.CliTool.<init>(CliTool.java:100)
    at org.elasticsearch.bootstrap.BootstrapCLIParser.<init>(BootstrapCLIParser.java:48)
    at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:242)
    at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:35)
Refer to the log for complete error details.
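For context, the failure is Elasticsearch's startup-time property substitution: `${...}` placeholders in elasticsearch.yml are resolved from system properties and environment variables, and an unresolvable placeholder aborts startup, as in the stack trace above. The following is a hypothetical fragment of the kind of setting involved; the `searchguard.config_index_name` key and its value are assumptions inferred from the Doc Text, not taken from the actual image:

```yaml
# Hypothetical elasticsearch.yml fragment (illustrative only).
# ${DC_NAME} is substituted at startup; if the DC_NAME environment
# variable is not set on the pod, Elasticsearch fails with
# "Could not resolve placeholder 'DC_NAME'".
searchguard:
  config_index_name: ".searchguard.${DC_NAME}"
```

This matches the Doc Text's description of keying the ACL index on the DeploymentConfig name rather than the pod hostname.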

How reproducible:
Always

Steps to Reproduce:
1. Deploy logging 3.4.1
2.
3.

Actual results:
The ES pod fails to start; "Could not resolve placeholder 'DC_NAME'" appears in the ES pod log.

Expected results:
All pods should be healthy

Additional info:
Attached ES dc info

Comment 2 Jeff Cantrill 2017-07-25 13:44:45 UTC
Was this deployment installed or upgraded using deployer from https://bugzilla.redhat.com/show_bug.cgi?id=1470368

Comment 3 Junqi Zhao 2017-07-25 14:48:43 UTC
(In reply to Jeff Cantrill from comment #2)
> Was this deployment installed or upgraded using deployer from
> https://bugzilla.redhat.com/show_bug.cgi?id=1470368

No. The latest 3.4.1 deployer version was v3.4.1.44.6-2 when this defect was tested; the BZ #1470368 deployer version is v3.4.1.44.4-2.

Comment 4 Jeff Cantrill 2017-07-25 18:20:01 UTC
Please try with the latest deployer.

Comment 5 Junqi Zhao 2017-07-26 00:35:17 UTC
(In reply to Jeff Cantrill from comment #4)
> Please try with the latest deployer.

Isn't v3.4.1.44.6-2 newer than v3.4.1.44.4-2? The latest deployer version is v3.4.1.44.6-2, which was used when this defect was reported.

v3.4.1.44.4-2 was built 12 days ago; v3.4.1.44.6-2 was built 30 hours ago.

Comment 7 Jeff Cantrill 2017-07-26 12:42:04 UTC
@Junqi,

Can you please provide more information other than 'it is not working and is broken'?  Can you attach:

* The logs from the deployer pod
* The DCs from the Elasticsearch nodes

Additionally, you can work around this issue by manually editing the DC to add the environment variable 'DC_NAME', setting it to the name of the DeploymentConfig.
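The workaround above can be sketched as a patch to the DeploymentConfig's container environment. This is a minimal sketch under assumptions: the DC name `logging-es-lhgss3sq` is taken from the pod listing in the description, and the container name `elasticsearch` is a guess, not confirmed by the attachments:

```yaml
# Hypothetical DC patch: add DC_NAME so the ${DC_NAME} placeholder in
# elasticsearch.yml resolves at pod startup (apply via `oc edit dc/...`
# or `oc patch`; repeat for logging-es-ops-3sc1j55c with its own name).
spec:
  template:
    spec:
      containers:
      - name: elasticsearch
        env:
        - name: DC_NAME
          value: logging-es-lhgss3sq
```

Equivalently, `oc set env dc/logging-es-lhgss3sq DC_NAME=logging-es-lhgss3sq` would add the same variable and trigger a redeployment with it set.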

Comment 8 Jeff Cantrill 2017-07-26 12:44:17 UTC
Also, was this a new deployment or an upgrade? Comment #3 does not answer the question definitively.

Comment 10 Junqi Zhao 2017-07-27 00:21:43 UTC
(In reply to Jeff Cantrill from comment #7)
> @Junqi,
> 
> Can you please provide more information other than 'it is not working and is
> broken'?  Can you attach:
> 
> * The logs from the deployer pod
> * The DCs from the Elasticsearch nodes
> 
> Additionally, you can work around this issue by manually editing the DC to
> add the environment variable 'DC_NAME' and setting it to the name of the
> DeploymentConfig

The dc output and deployer pod log are in the attached file.

Although we can apply the workaround, we still need to test with a healthy deployer pod before it is released.

Comment 11 Junqi Zhao 2017-07-27 00:24:17 UTC
Created attachment 1305106 [details]
es dc info and deployer pod log, used deployer version v3.4.1.44.6-2

Comment 12 Junqi Zhao 2017-07-27 00:26:46 UTC
(In reply to Jeff Cantrill from comment #8)
> Also, was this a new deployment or an upgrade? Comment #3 does not answer
> the question definitively.

It is a new deployment; I already mentioned it in Comment 0:

Steps to Reproduce:
1. Deploy logging 3.4.1

Comment 13 Junqi Zhao 2017-07-27 00:43:31 UTC
Tested with logging-deployer:v3.4.1.44.6-3; the ES pods start up now.

Testing environment:
# openshift version
openshift v3.4.1.44.6
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

Images from brew registry
logging-deployer        v3.4.1.44.6-3

Comment 15 errata-xmlrpc 2017-10-25 13:04:36 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3049

