atomic-openshift-node-3.7.4-1.git.0.472090f.el7.x86_64

I see numerous of these back to back, and then every 60 seconds or so another batch of log spam like:

Nov 13 22:36:21 ip-172-31-71-195.us-east-2.compute.internal atomic-openshift-node[106479]: E1113 22:36:21.428420 106479 fs.go:382] Stat fs failed. Error: no such file or directory

I have no idea what it means or what to do about it. What does it mean? What should I do about it? I'm sure I can find the message on more nodes than the one described above, if needed.
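For context, here is a minimal sketch of my reading of the failure mode (not cadvisor's actual code, and the path below is made up for illustration): cadvisor periodically calls statfs(2) on each filesystem/mount path it has cached, and if a path has disappeared in the meantime (e.g. a container's mount was torn down), the syscall returns ENOENT, which Go renders as "no such file or directory":

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// A mount path cadvisor might have cached but which has since been
	// removed. Hypothetical path, for illustration only.
	path := "/var/lib/docker/overlay2/deleted-container/merged"

	var buf syscall.Statfs_t
	if err := syscall.Statfs(path, &buf); err != nil {
		// Prints: Stat fs failed. Error: no such file or directory
		fmt.Printf("Stat fs failed. Error: %v\n", err)
	}
}

Note that the error carries no path at all, which matches how uninformative the journal line is.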
The only component that can emit this error is cadvisor. I wish it would at least say which "file or directory" it's missing...
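A hypothetical patch sketch of what that could look like: wrap the raw statfs(2) error with the path being stat'ed before it reaches the log. The helper name statFS and the package are illustrative, not cadvisor's actual code:

package fsinfo

import (
	"fmt"
	"syscall"
)

// statFS wraps the raw syscall error with the offending path, so the
// eventual log line would read something like
//   Stat fs failed. Error: statfs "/var/lib/docker/...": no such file or directory
// instead of leaving the missing path a mystery.
func statFS(path string) (*syscall.Statfs_t, error) {
	var buf syscall.Statfs_t
	if err := syscall.Statfs(path, &buf); err != nil {
		return nil, fmt.Errorf("statfs %q: %v", path, err)
	}
	return &buf, nil
}

With that wrapping, the journal line would point directly at the vanished mount.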
There appears to be a similar issue discussed in Kubernetes upstream: https://github.com/kubernetes/kubernetes/issues/35062
Are there some nodes in the cluster where you *don't* see those log messages? We might find what is different there. It looks like a kubelet change/restart might make cadvisor start logging those errors...
This needs someone who knows cadvisor...

The cadvisor problem was fixed with https://github.com/kubernetes/kubernetes/pull/17883; closing.

*** This bug has been marked as a duplicate of bug 1511576 ***