Bug 1335951
Summary: | heavy logging leads to Docker daemon OOM-ing | |
---|---|---|---
Product: | Red Hat Enterprise Linux 7 | Reporter: | Jay Vyas <jvyas>
Component: | docker | Assignee: | Nalin Dahyabhai <nalin>
Status: | CLOSED ERRATA | QA Contact: | Luwen Su <lsu>
Severity: | medium | Docs Contact: |
Priority: | unspecified | |
Version: | 7.2 | CC: | amurdaca, aos-bugs, dwalsh, gouyang, jokerman, lsm5, lsu, mmccomas, rmeggins, tstclair
Target Milestone: | rc | Keywords: | Extras
Target Release: | 7.3 | |
Hardware: | All | |
OS: | Linux | |
Whiteboard: | | |
Fixed In Version: | docker-1.10.3-46.el7.4 | Doc Type: | Bug Fix
Doc Text: | | Story Points: | ---
Clone Of: | | Environment: |
Last Closed: | 2016-11-04 09:08:30 UTC | Type: | Bug
Regression: | --- | Mount Type: | ---
Documentation: | --- | CRM: |
Verified Versions: | | Category: | ---
oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: |
Cloudforms Team: | --- | Target Upstream Version: |
Embargoed: | | |
Description
Jay Vyas
2016-05-13 15:22:13 UTC
More details for folks doing scale testing of logging resilience: to generate the cluster-wide soak test for this feature (basically, it's a way to rapidly destroy a cluster if logging isn't tuned correctly, or alternatively, to fill up ELK with logging if you are evaluating production-quality performance of the ELK logging stack for OpenShift), you can run https://github.com/kubernetes/kubernetes/pull/24536 with --ginkgo.focus="Logging soak" --scale=5 to generate 5 noisy pods per node. (A sketch of the full invocation appears below.)

I think a combination of 1.10's changes to limit how much of the container's data it buffers, along with https://github.com/docker/docker/pull/22982 (currently in design review), should keep us from hitting OOM.

So does that mean that with docker 1.10 we have at least two options: 1) use the docker journald log driver and avoid json-file logs completely, or 2) use the docker json-file log driver and configure it not to fill up the container file system? (Both options are sketched below.)

@Nalin have you verified the memory limit reached with your PR against the repro example listed above?

(In reply to Timothy St. Clair from comment #5)
> @Nalin have you verified the memory limit reached with your PR against the
> repro example listed above?

The original description's case has its memory usage limited by 1.10: starting with that version, the container blocks trying to write to stdout/stderr until the daemon is ready to read that data, presumably after flushing whatever it has already read to disk. The upstream issue 18057 also mentions OOMing when echo -n is used, so that the output doesn't include newlines. That's something PR 22982 aims to fix, and it does when I run it on my system. In both cases we grow until we hit a plateau. Reading the logs seems to require allocating more memory than just writing them, though that now plateaus, too.

This should be fixed by a backport that landed in docker-1.10.3-46.el7.4 and later.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://rhn.redhat.com/errata/RHSA-2016-2634.html
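For reference, a rough sketch of driving that soak test from a Kubernetes source checkout against a running cluster. The hack/e2e.go wrapper and its flags are assumptions (the e2e entry point varied across releases); only the --ginkgo.focus and --scale arguments come from the comment above:

```sh
# Assumed harness invocation; only the focus and scale flags below are
# taken from this report.
go run hack/e2e.go -v --test \
    --test_args='--ginkgo.focus="Logging soak" --scale=5'
```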
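The two mitigation options discussed above map onto docker daemon flags roughly as follows. This is a sketch using the docker 1.10-era `docker daemon` invocation; the rotation values are illustrative, and on RHEL these flags would normally be set via OPTIONS in /etc/sysconfig/docker rather than run by hand:

```sh
# Option 1: route container output to journald and skip json-file logs.
docker daemon --log-driver=journald

# Option 2: keep json-file, but cap each container's log size and file
# count so logs cannot fill the filesystem (values are illustrative).
docker daemon --log-driver=json-file \
    --log-opt max-size=10m \
    --log-opt max-file=3
```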
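And a minimal repro sketch for the newline-free case from upstream issue 18057. The image, the loop, and the cgroup path are illustrative assumptions (the path assumes the daemon runs under docker.service on a systemd host such as RHEL 7):

```sh
# Stream bytes with no newlines; before the PR 22982 fix the daemon
# buffered a partial log line like this without bound.
docker run -d --name noisy busybox \
    sh -c 'while true; do echo -n xxxxxxxxxxxxxxxx; done'

# Watch the daemon's memory usage; with the fix it should plateau
# instead of growing until the OOM killer fires.
watch -n5 cat /sys/fs/cgroup/memory/system.slice/docker.service/memory.usage_in_bytes
```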