Bug 1292964 - OpenShift doesn't notice that Docker Storage is, or is reaching that state of being, full
Summary: OpenShift doesn't notice that Docker Storage is, or is reaching that state of being, full
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 3.1.0
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: medium
Target Milestone: ---
Assignee: Derek Carr
QA Contact: DeShuai Ma
URL:
Whiteboard:
Depends On:
Blocks: 1267746 1292845
 
Reported: 2015-12-18 21:23 UTC by Eric Jones
Modified: 2020-02-14 17:37 UTC
CC List: 12 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
OpenShift Enterprise 3.1. Current issue found on a node hosted in OpenStack.
Last Closed: 2017-01-18 12:38:53 UTC
Target Upstream Version:
Embargoed:




Links
System ID: Red Hat Product Errata RHBA-2017:0066
Priority: normal
Status: SHIPPED_LIVE
Summary: Red Hat OpenShift Container Platform 3.4 RPM Release Advisory
Last Updated: 2017-01-18 17:23:26 UTC

Description Eric Jones 2015-12-18 21:23:38 UTC
Description of problem:
OpenShift doesn't notice that Docker storage is full or is approaching full. This allows the storage to fill up without anyone being the wiser, and then the docker service can, and likely will, fail, preventing further use of the node.

Version-Release number of selected component (if applicable):
Found on OSE 3.0; no evidence found that it is fixed in 3.1.

How reproducible:
100%

Steps to Reproduce:
1. Spin up a normal OSE 3.0 environment (the infrastructure of the env does not appear to affect this)
2. Use it as normal: pull images and fill up storage

Actual results:
Either the docker service fails completely, or it appears to work but cannot properly pull images into pods, leaving pods stuck in a Pending state.

Expected results:
As you pull images, you get an update on storage levels.

Additional info:
This is only so useful if you can also clear up the docker storage; hence this blocks on bug 1292845.
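
(Illustrative aside: until OpenShift surfaces this itself, storage levels can be checked manually on the node. A minimal sketch, assuming the default devicemapper storage driver and the default graph location:)

  # Thin-pool usage as reported by docker (devicemapper driver)
  docker info | grep -i 'data space'
  # Free space on the filesystem backing docker's graph storage
  df -h /var/lib/docker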

Comment 1 Paul Weil 2015-12-23 15:23:40 UTC
GH issue: https://github.com/openshift/origin/issues/6350

Comment 2 Boris Kurktchiev 2016-01-04 21:29:59 UTC
So I reported this originally, and as pointed out it blocks on the image cleanup. I had to manually run the cleanup steps (specifically the image ones) from the OSE 3 docs to get it to actually clean up after itself. I have not tried running the ansible playbooks, but overall it seems like the cleanup process should happen automatically, or at least on a semi-regular schedule, which in my case it did not.
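
(An illustrative sketch of that kind of manual cleanup, based on the image-pruning commands documented for OSE 3; the flag values here are examples, not recommendations:)

  # Prune images no longer referenced, keeping recent tag revisions
  oadm prune images --keep-younger-than=60m --keep-tag-revisions=3 --confirm
  # Remove dangling images directly on the node
  docker rmi $(docker images -q -f dangling=true)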

Comment 3 Ryan Howe 2016-03-29 21:35:35 UTC
Has this been fixed in 3.1 with this PR? 

https://github.com/openshift/origin/pull/5599

Comment 4 Michal Minar 2016-03-30 06:37:03 UTC
No, we have no fix yet.

Comment 7 Michal Minar 2016-05-12 07:25:59 UTC
This work is being done upstream. According to a proposal [1], everything we need (volume accounting) should be covered. As of now, only the volume interface [2] is in place. Unfortunately, accounting for host_path volumes was recently disabled [3] due to high CPU load. Neither NFS nor AWS nor GCE is supported yet.

[1] https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/disk-accounting.md#introduction
[2] https://github.com/kubernetes/kubernetes/pull/18232
[3] https://github.com/kubernetes/kubernetes/pull/23446

Comment 8 Derek Carr 2016-08-12 15:17:52 UTC
This is a new feature in Kubernetes 1.4 that just got merged.

You can read the feature description here:

https://github.com/kubernetes/kubernetes/blob/master/docs/proposals/kubelet-eviction.md

Users will be able to set thresholds for both the rootfs (i.e. nodefs) and the imagefs (i.e. docker storage).  If those thresholds are met, the node will report disk pressure, perform image gc, and evict pods on the node to reduce disk pressure to a stable state.  While the node reports disk pressure, no additional pods are admitted to the node for execution.
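
For illustration, the upstream kubelet exposes these thresholds as flags; the values below are example settings, not recommendations:

  # Illustrative kubelet invocation (values are examples only)
  kubelet \
    --eviction-hard="memory.available<100Mi,nodefs.available<10%,imagefs.available<15%" \
    --eviction-soft="imagefs.available<20%" \
    --eviction-soft-grace-period="imagefs.available=1m"

Hard thresholds trigger reclaim immediately; soft thresholds take effect only after their grace period elapses.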

Marking this upcoming release.

Comment 9 Boris Kurktchiev 2016-08-12 15:40:44 UTC
I am assuming these are going to be exposed in some way in the node configs?

Comment 10 Derek Carr 2016-09-30 14:22:35 UTC
Boris - correct, in 3.4, users will be able to configure the values in node-config.
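
A sketch of what that could look like in node-config.yaml (key names follow the upstream kubelet flag names; the values are examples, check the 3.4 docs for the exact settings):

  kubeletArguments:
    eviction-hard:
    - "memory.available<100Mi,nodefs.available<10%,imagefs.available<15%"
    eviction-soft:
    - "imagefs.available<20%"
    eviction-soft-grace-period:
    - "imagefs.available=1m"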

Comment 11 Derek Carr 2016-10-25 16:38:22 UTC
OCP 3.4 has added support to handle disk pressure based on the work we did in upstream Kubernetes 1.4.

For details:
http://kubernetes.io/docs/admin/out-of-resource/

I am moving this to ON_QA as a result.
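
For reference, one quick way to observe the condition on a node (a sketch; <node-name> is a placeholder):

  # The node's Conditions include DiskPressure once a threshold is crossed
  oc describe node <node-name> | grep DiskPressure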

Comment 12 DeShuai Ma 2016-10-26 05:53:10 UTC
Tested on openshift v3.4.0.15+9c963ec; disk pressure works as expected.
Details in the card: https://trello.com/c/3LvGAHr3/371-5-kubelet-evicts-pods-when-low-on-disk-node-reliability

Verifying this bug.

Comment 14 errata-xmlrpc 2017-01-18 12:38:53 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0066

