Bug 1572285 - XFS quota not enforced on emptydirs
Summary: XFS quota not enforced on emptydirs
Status: CLOSED DUPLICATE of bug 1579305
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 3.10.0
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: 3.10.0
Assignee: Seth Jennings
QA Contact: Chao Yang
Depends On:
Reported: 2018-04-26 15:49 UTC by Seth Jennings
Modified: 2018-07-23 13:10 UTC
CC: 7 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Last Closed: 2018-05-18 21:46:59 UTC
Target Upstream Version:

Attachments

Description Seth Jennings 2018-04-26 15:49:55 UTC
Due to the restructuring of OCP for 3.10, the patch that enabled XFS quota to limit the size of emptydirs and emptydir-based volumes like secrets and configmaps is no longer in effect.


We need to find a way to apply this to openshift/kubernetes so that the kubelet built from that source has equivalent functionality.
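For context, the mechanism being restored works roughly as follows (a sketch of the general approach, not the exact kubelet code): the emptyDir directory is owned by the pod's FSGroup GID, and an XFS group quota is set for that GID on the filesystem backing the kubelet's volume directory. The GID, limit, and mount point below are illustrative, and the commands require root on a node whose volume filesystem is XFS mounted with group quotas (grpquota) enabled:

```shell
# Set a 1 GiB soft and hard block limit for group 123456 on the
# filesystem backing the kubelet volume directory (illustrative GID):
xfs_quota -x -c 'limit -g bsoft=1g bhard=1g 123456' \
    /var/lib/origin/openshift.local.volumes

# Any directory owned by that group on this filesystem now counts
# against the limit; writes beyond it fail with "Disk quota exceeded".
xfs_quota -x -c 'report -g' /var/lib/origin/openshift.local.volumes
```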

Comment 1 Seth Jennings 2018-04-26 21:48:32 UTC
origin PR:

Comment 2 Seth Jennings 2018-05-14 19:11:16 UTC
We are going to need documentation for this.

Comment 4 Chao Yang 2018-05-17 10:29:54 UTC
1. The new config file is /var/lib/origin/openshift.local.volumes/volume-config.yaml, with contents:

apiVersion: kubelet.config.openshift.io/v1
kind: VolumeConfig
perFSGroupInGiB: 1

2. systemctl restart atomic-openshift-node.service
3. Create a pod
4. xfs_quota -x -c 'report -n -L 123450 -U 123460' /var/lib/origin/openshift.local.volumes/
Group quota on /var/lib/origin/openshift.local.volumes (/dev/xvdf)
Group ID         Used       Soft       Hard    Warn/Grace     
---------- -------------------------------------------------- 
#123456              0    1048576    1048576     00 [--------]
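The pod in step 3 presumably mounts an emptyDir and carries an fsGroup, since the quota is keyed on the FSGroup GID; a minimal sketch (the pod name, image, and GID are all illustrative, not taken from the actual test):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-test                 # illustrative name
spec:
  securityContext:
    fsGroup: 123456                # GID the XFS group quota is applied to
  containers:
  - name: writer
    image: registry.access.redhat.com/rhel7   # illustrative image
    command: ["sleep", "3600"]
    volumeMounts:
    - name: scratch
      mountPath: /scratch
  volumes:
  - name: scratch
    emptyDir: {}
```

With the 1 GiB quota in effect, writing more than 1 GiB into /scratch from inside this pod (e.g. with dd) should fail with "Disk quota exceeded".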

Comment 5 Matt Woodson 2018-05-18 14:28:06 UTC
If I'm reading this correctly, the configuration has moved from a section in /etc/origin/node-config.yaml to /var/lib/origin/openshift.local.volumes/volume-config.yaml.

In 3.10, the config files for the node are now being set in the master as a config map, then sync'd to nodes.

I would like to understand the reasoning for this change.  

As a sysadmin, I have issues with this approach.

1) We were used to finding this setting in one location, the node-config.yaml.  Now it isn't set there.

2) The new paradigm appears to be setting configuration for the node on the master, and then having it be sync'd down to the node.  This new configuration change does not follow this paradigm.

There may be technical reasons this change was made, and I may not understand them.  But as a sysadmin, this makes it extremely confusing to know which node settings go in the config map on the master and which need to be placed in config files on the node.

This is also hard when using immutable infrastructure.  Having to build different images depending on what I want this setting to be, or devise a way to update it when a node starts outside of the standard OpenShift (config map) mechanism, isn't optimal either.

Comment 6 Matt Woodson 2018-05-18 14:30:40 UTC
One other comment.  The config living in /var/lib/origin also seems like bad practice.  I would never look for a config file in /var.

Comment 7 Matt Woodson 2018-05-18 15:07:43 UTC
Another question about this change: we were using 512Mi as our default.  Do decimals work in the new format?  For example:

apiVersion: kubelet.config.openshift.io/v1
kind: VolumeConfig
perFSGroupInGiB: .5

Is this valid?

Comment 8 Derek Carr 2018-05-18 15:33:09 UTC
I think we may want to re-evaluate how this was configured in favor of a configmap-based solution, so that all node config is delivered dynamically out of the openshift-node project.

Comment 9 Seth Jennings 2018-05-18 21:46:59 UTC
New method being tracked here

QE can drop this.  Duping to 1579305.

*** This bug has been marked as a duplicate of bug 1579305 ***
