Bug 1291866
| Summary: | Create a better deployment strategy for fluentd than simply scaling the pod. | | |
| --- | --- | --- | --- |
| Product: | OpenShift Container Platform | Reporter: | Eric Rich <erich> |
| Component: | Logging | Assignee: | ewolinet |
| Status: | CLOSED ERRATA | QA Contact: | chunchen <chunchen> |
| Severity: | medium | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.1.0 | CC: | aos-bugs, jialiu, jokerman, lmeyer, mbarrett, mmccomas, pruan, tdawson, wsun, xiazhao |
| Target Milestone: | --- | | |
| Target Release: | --- | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Bug Fix |
| Doc Text: | | Story Points: | --- |
| Clone Of: | 1291786 | Environment: | |
| Last Closed: | 2016-09-27 09:34:38 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
| Bug Depends On: | 1291786, 1337329 | | |
| Bug Blocks: | | | |
Description
Eric Rich
2015-12-15 18:37:43 UTC
Another use case for this is the Docker Registry. Another use case for this is the Router.

The current solution, or "hack", for ensuring that fluentd is deployed to all nodes in the environment ([scale the pod to the number of nodes](https://docs.openshift.com/enterprise/3.1/install_config/aggregate_logging.html#fluentd)) does not *ensure* that this component is on all nodes in the infrastructure. As a result it is possible that new nodes, or nodes that were stopped and restarted (for example), do not get a fluentd pod deployed to them (or have their pods re-scheduled to nodes that already have fluentd running on them), and thus logs are not aggregated from these nodes.

The use of DaemonSets (Bug #1291786) for logging and metrics collection is a logical fit, as the DaemonSet functionality provided in Kubernetes was designed to fulfill exactly this purpose.

(In reply to Eric Rich from comment #3)

> As a result it is possible that new nodes or nodes that were stopped and
> restarted (for example) do not get a fluentd pod deployed

As long as fluentd is scaled to match the number of nodes, this *shouldn't* happen - have you seen a node restart come back with no fluentd on it for more than a few minutes?

> (or have their pods re-scheduled to nodes that already have fluentd running
> on them)

They might be scheduled, but they'll fail to run due to the hack (port conflict), so the situation should resolve itself in time. All of this is not how we would like things to stay, of course; I'm just saying I don't think it's as bad as you describe.

> The use of DaemonSets (Bug #1291786) for logging and metrics collection is
> a logical fit, as the DaemonSet functionality provided in Kubernetes was
> designed to fulfill exactly this purpose.

Absolutely. We're waiting for this to be enabled in the product and considered stable enough for production.
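For context, a DaemonSet makes the scheduler place exactly one copy of the pod on every matching node, including nodes added or restarted later, which is what the replica-count hack cannot guarantee. A minimal sketch of such a manifest follows; the names, labels, and image reference here are illustrative assumptions, not the manifest actually shipped with OpenShift logging:

```yaml
# Illustrative DaemonSet for a node-level log collector.
# API group/version reflects the Kubernetes releases current at the
# time of this bug (OpenShift 3.x era); names and image are assumptions.
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: logging-fluentd
spec:
  template:
    metadata:
      labels:
        component: fluentd          # hypothetical label
    spec:
      containers:
      - name: fluentd
        image: example.com/logging-fluentd:latest   # illustrative image
        volumeMounts:
        - name: varlog
          mountPath: /var/log       # read node logs from the host
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
```

With a manifest like this, no `oc scale` step is needed: when a node joins the cluster the DaemonSet controller schedules a fluentd pod onto it automatically, and the port-conflict workaround described above becomes unnecessary.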
With the advent of DaemonSets (https://github.com/openshift/origin/pull/6854), this is unblocked and being worked on with https://trello.com/c/jjIFKzNU

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2016:1933