Bug 1328913
Summary: Long running reliability tests show network errors on nodes

Product: OpenShift Container Platform
Component: Node
Version: 3.2.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: urgent
Priority: urgent
Target Milestone: ---
Target Release: 3.7.0

Reporter: Vikas Laad <vlaad>
Assignee: Seth Jennings <sjenning>
QA Contact: Vikas Laad <vlaad>
CC: agoldste, aos-bugs, decarr, jkaur, jokerman, mmccomas, nbhatt, sjenning, sten, vlaad, xtian

Type: Bug
Doc Type: Bug Fix
Clones: 1496245
Last Closed: 2017-11-28 21:51:43 UTC
Bug Depends On: 1367141
Bug Blocks: 1303130, 1496245
Attachments: dm-failure.log (attachment 1180293)
Description (Vikas Laad, 2016-04-20 14:33:11 UTC)
Description of problem:
Long running reliability tests have run into this problem many times. These tests create a few sample applications (ruby, cakephp, dancer, and eap) at the beginning and then keep rebuilding and accessing those apps for a few days. I have the output of the network debug script but could not attach it to the bug due to its size. Please ping me on IRC; vlaad is my nick.

Version-Release number of selected component (if applicable):
openshift v3.2.0.16

How reproducible:

Steps to Reproduce:
1. Please see the description

Actual results:
Logs are full of the following errors:

Apr 20 10:42:31 ip-172-31-7-135 atomic-openshift-node: I0420 10:42:31.779154 63549 helpers.go:101] Unable to get network stats from pid 128265: couldn't read network stats: failure opening /proc/128265/net/dev: open /proc/128265/net/dev: no such file or directory

Expected results:

Additional info:
I still have the environment running in case someone wants to look at it.

These messages are usually from cadvisor/heapster trying to get the stats from a pod that has just gone away. Are you getting messages repeated for the same pid for a long time, or are the pids changing? If they are changing, I'm told that it's somewhat expected (but annoying) behavior.

pids are changing.

Could you please try to reproduce the issue again with a higher log level (5 could be fine) and send me the logs?

I am running the tests; I will update the bug with the information when I have it reproduced.
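As a quick way to check whether the pids in these messages stay the same or keep changing, something like the following could be run on an affected node (a sketch, not part of the original report; it assumes the node logs to the journal under the atomic-openshift-node unit, which matches the log line above):

# journalctl -u atomic-openshift-node --since "1 hour ago" | grep 'Unable to get network stats from pid' | grep -o 'pid [0-9]*' | sort | uniq -c | sort -rn | head

A short list of pids, each appearing many times, points at stuck entries; a long list of pids, each appearing only a few times, matches the "pod just went away" explanation.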
*** Bug 1357052 has been marked as a duplicate of this bug. ***

The direct cause is that the container process has exited but cadvisor continues to try to monitor it. This is because docker containers are removed from cadvisor monitoring implicitly by the removal of their cgroup, which cadvisor watches for with inotify. A disconnect is occurring where pid 1 of the docker container exits but the cgroup is not removed, leaving a systemd docker container scope with no tasks:

# find /sys/fs/cgroup/systemd/system.slice/docker* -name tasks | xargs wc -l | sort -rn
...
0 /sys/fs/cgroup/systemd/system.slice/docker-4af4dc9c32a97a7a0bf0f26464898426389f18228dedf835e1ec8bad61d4c623.scope/tasks

Created attachment 1180293 [details]
dm-failure.log
Attaching a log with the selected section for a container that resulted in a dead cgroup. It shows a device mapper removal failure that might be causing docker to bail out before the cgroup teardown.
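One hedged way to dig further on a node showing the leaked scope above (a sketch, not part of the original report; it assumes the devicemapper storage driver and takes the container ID prefix from the scope name):

# docker inspect --format '{{ .State.Running }} {{ .State.Pid }}' 4af4dc9c32a9
# dmsetup ls | grep docker

The first command shows whether docker still considers the container running and what pid it last recorded; the second lists the device mapper devices docker has created, so a device left behind for an exited container stands out. Exact device naming depends on the storage driver configuration, so treat this as a starting point rather than a definitive check.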
Upstream issue: https://github.com/kubernetes/kubernetes/issues/30171

Opened a PR against cadvisor to reduce the log spam: https://github.com/google/cadvisor/pull/1700

Origin PR: https://github.com/openshift/origin/pull/16189

This will reduce the spam, but good log rotation practices must still be used to avoid filling the disk.

Please let me know if this bug is backported to 3.2.1.

I do not see these errors at loglevel=2 in the following version and later:

openshift v3.7.0-0.143.2
kubernetes v1.7.0+80709908fd
etcd 3.2.1

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2017:3188
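For reference on the log rotation note above: on nodes that log through journald, the journal's disk usage can be checked and trimmed with standard systemd tooling (a sketch, not part of the original report; adjust the size to local policy):

# journalctl --disk-usage
# journalctl --vacuum-size=1G

A persistent cap can be set with SystemMaxUse= in /etc/systemd/journald.conf, followed by a restart of systemd-journald.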