Bug 1474689
Summary: Failed to start ES pod, "Could not resolve placeholder 'DC_NAME'" in ES pod log

Product: OpenShift Container Platform
Component: Logging
Status: CLOSED ERRATA
Severity: urgent
Priority: urgent
Version: 3.4.1
Target Milestone: ---
Target Release: 3.4.z
Hardware: Unspecified
OS: Unspecified
Reporter: Junqi Zhao <juzhao>
Assignee: Jeff Cantrill <jcantril>
QA Contact: Junqi Zhao <juzhao>
CC: aos-bugs, juzhao, rmeggins, wsun
Keywords: Regression, TestBlocker
Doc Type: Enhancement
Doc Text:
Feature: Modified the Elasticsearch configuration to persist the ACL documents to an index based upon the DeploymentConfig name.
Reason: Initial ACL seeding is needed only once. When the seeding was keyed on the hostname (i.e. the pod name), it had to be performed every time a pod was redeployed. Users would sometimes be left with an unusable logging cluster because ES was trying to rebalance its indices and the reseeding operation was slow to respond.
Result: More consistent access to the ES cluster.
Story Points: ---
Last Closed: 2017-10-25 13:04:36 UTC
Type: Bug
Regression: ---
Bug Blocks: 1449378, 1465464, 1468987, 1470368
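The Doc Text above says the fix persists the ACL documents to an index named after the DeploymentConfig rather than the pod hostname, and the error message shows Elasticsearch failing on an unresolved `${DC_NAME}` placeholder. A minimal sketch of what such an `elasticsearch.yml` fragment might look like follows; the exact setting key and index pattern are assumptions for illustration, not taken from this report:

```yaml
# Hypothetical elasticsearch.yml fragment. The Searchguard ACL config index
# is keyed by the DeploymentConfig name (via the DC_NAME environment
# variable) instead of the pod hostname, so the seeded ACLs survive pod
# redeployments. Key name and index pattern are illustrative assumptions.
searchguard:
  config_index_name: ".searchguard.${DC_NAME}"
```

Under this reading, the reported failure is mechanical: if `DC_NAME` is not present in the pod's environment, the `${DC_NAME}` placeholder cannot be resolved and Elasticsearch refuses to start, producing exactly the "Could not resolve placeholder 'DC_NAME'" error in the pod log.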
Description (Junqi Zhao, 2017-07-25 07:24:31 UTC)
(Comment, Jeff Cantrill) Was this deployment installed or upgraded using the deployer from https://bugzilla.redhat.com/show_bug.cgi?id=1470368?

(Comment, Junqi Zhao) (In reply to Jeff Cantrill from comment #2) No. The latest 3.4.1 deployer version, v3.4.1.44.6-2, was used when this defect was tested; the BZ #1470368 deployer version is v3.4.1.44.4-2.

(Comment, Jeff Cantrill) Please try with the latest deployer.

(Comment, Junqi Zhao) (In reply to Jeff Cantrill from comment #4) Isn't v3.4.1.44.6-2 newer than v3.4.1.44.4-2? The latest deployer version is v3.4.1.44.6-2, which was used when this defect was reported. v3.4.1.44.4-2 was built 12 days ago; v3.4.1.44.6-2 was built 30 hours ago.

(Comment, Jeff Cantrill) @Junqi, can you please provide more information other than "it is not working and is broken"? Can you attach:

* The logs from the deployer pod
* The DCs from the Elasticsearch nodes

Additionally, you can work around this issue by manually editing the DC to add the environment variable DC_NAME, setting it to the name of the DeploymentConfig.

(Comment, Jeff Cantrill) Also, was this a new deployment or an upgrade? Comment #3 does not answer the question definitively.

(Comment, Junqi Zhao) (In reply to Jeff Cantrill from comment #7) The DC output and the deployer pod log are in the attached file. Although we can apply the workaround, we also need to test with a healthy deployer pod before this is released.

Created attachment 1305106 [details]
ES DC info and deployer pod log, from deployer version v3.4.1.44.6-2

(Comment, Junqi Zhao) (In reply to Jeff Cantrill from comment #8) It is a new deployment; I already mentioned that in comment 0.

Steps to Reproduce:
1. Deploy logging 3.4.1

(Comment, Junqi Zhao) Tested with logging-deployer:v3.4.1.44.6-3; ES pods can now be started. Testing environment:

# openshift version
openshift v3.4.1.44.6
kubernetes v1.4.0+776c994
etcd 3.1.0-rc.0

Images from the brew registry: logging-deployer v3.4.1.44.6-3

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:3049
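The workaround suggested in the thread is to add the DC_NAME environment variable to the Elasticsearch DeploymentConfig by hand. A sketch of the relevant DC fragment is below; the DC and container names (`logging-es`, `elasticsearch`) are illustrative placeholders, so substitute the actual names from `oc get dc` in the logging project:

```yaml
# Hypothetical DeploymentConfig fragment for the manual workaround:
# inject DC_NAME into the ES container's environment. The names
# "logging-es" and "elasticsearch" are examples, not taken from this report.
spec:
  template:
    spec:
      containers:
      - name: elasticsearch
        env:
        - name: DC_NAME
          value: logging-es   # must match this DeploymentConfig's own name
```

Per the thread, this only unblocks a broken deployment; the fix that shipped in logging-deployer v3.4.1.44.6-3 makes the deployer set this up itself, which is why verification required a healthy deployer rather than the manual edit.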