Bug 1459265 - journalctl on node repeats: du and find on following dirs took
Status: CLOSED NOTABUG
Product: OpenShift Origin
Classification: Red Hat
Component: Pod
Version: 3.x
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assigned To: Solly Ross
QA Contact: DeShuai Ma
Depends On:
Blocks:
 
Reported: 2017-06-06 12:47 EDT by Phil Cameron
Modified: 2017-06-06 15:56 EDT
CC List: 6 users

See Also:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-06-06 15:56:12 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Phil Cameron 2017-06-06 12:47:15 EDT
Description of problem:
The node log repeats the following messages:
Jun 06 11:21:50 netdev28 atomic-openshift-node[24976]: I0606 11:21:50.696665   24976 fsHandler.go:131] du and find on following dirs took 1.898312093s: [ /var/lib/docker/containers/f01a2988c639323417c2acdf7c07511cfde49241ae52935c159c0542c404c916]
Jun 06 11:22:29 netdev28 atomic-openshift-node[24976]: I0606 11:22:29.166540   24976 fsHandler.go:131] du and find on following dirs took 2.525366746s: [ /var/lib/docker/containers/f58933fe3229c70ae51f4600b031cf7ad2c951210f4dc961f440b39fa464d970]
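
A minimal way to watch for these messages on the node (assuming the systemd unit is named after the atomic-openshift-node process shown in the log prefix) is:

# journalctl -u atomic-openshift-node -f | grep 'du and find'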


Version-Release number of selected component (if applicable):
Development build from latest origin 

How reproducible:
won't stop

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:
Comment 1 Bradley Childs 2017-06-06 13:01:15 EDT
Can you provide a dump of the PV, PVC, and pods in use?
Comment 2 Phil Cameron 2017-06-06 13:09:10 EDT
No PV/PVC configured.

# oc get po
NAME                      READY     STATUS        RESTARTS   AGE
docker-registry-4-2hdmp   1/1       Running       0          1h
hello-rc-c9m05            1/1       Running       0          4d


hello-rc is "hello openshift!"
Comment 3 Matthew Wong 2017-06-06 13:47:57 EDT
This occurs because the node is running low on resources (https://github.com/kubernetes/kubernetes/issues/42164), which can easily happen because of https://bugzilla.redhat.com/show_bug.cgi?id=1459252. So I would say https://bugzilla.redhat.com/show_bug.cgi?id=1459252 is the root cause and this is just a symptom.
Comment 4 Phil Cameron 2017-06-06 14:22:40 EDT
This is a Dell R730 with 24 physical CPUs, 256 GB of memory, and 10GbE networking. Which resource is running short?

top:
Tasks: 505 total,   2 running, 503 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.8 us,  1.3 sy,  0.0 ni, 95.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 26386145+total, 20777251+free, 37240100 used, 18848852 buff/cache
KiB Swap:        0 total,        0 free,        0 used. 21836817+avail Mem
Comment 5 Phil Cameron 2017-06-06 14:30:28 EDT
Is this trying to delete something on disk? If so where/what is it?
Comment 6 Matthew Wong 2017-06-06 14:56:40 EDT
No, it's cAdvisor keeping track of filesystem stats, and for some reason it's taking too long. It's outside the scope of storage; I think this is a metrics issue.
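
As a rough manual check of what cAdvisor is measuring, one could time the same kind of operations against one of the directories from the log above (this is only an approximation of cAdvisor's stat collection, not its exact calls):

# time du -s /var/lib/docker/containers/f01a2988c639323417c2acdf7c07511cfde49241ae52935c159c0542c404c916
# time find /var/lib/docker/containers/f01a2988c639323417c2acdf7c07511cfde49241ae52935c159c0542c404c916 | wc -l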
Comment 7 Eric Paris 2017-06-06 15:15:53 EDT
Bouncing this to Solly on the kube team to debug further.
Comment 8 Phil Cameron 2017-06-06 15:55:31 EDT
Spoke with Solly Ross. The problem was caused by the go v1.8.1 build: files/directories were created and not cleaned up. Given that, the message is correct, so this is not a bug.
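
For anyone seeing similar messages, a rough way to check whether the container directories have accumulated a lot of data (assuming the cause is leftover files, as described above) is to compare their sizes:

# du -s /var/lib/docker/containers/* | sort -n | tail -5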
