Bug 1198777 - Default resource_limits.conf files all have the same quota_files limit
Summary: Default resource_limits.conf files all have the same quota_files limit
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Containers
Version: 2.2.0
Hardware: x86_64
OS: Linux
Priority: unspecified
Severity: low
Target Milestone: ---
Assignee: Scott Dodson
QA Contact: libra bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2015-03-04 19:33 UTC by Timothy Williams
Modified: 2019-04-16 14:40 UTC
CC List: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Clone Of:
Environment:
Last Closed: 2015-09-23 18:34:02 UTC
Target Upstream Version:
Embargoed:



Description Timothy Williams 2015-03-04 19:33:52 UTC
Description of problem:
In each of the provided resource_limits.conf files, the default `quota_files` directive is always set to 80000 inodes. The number of inodes allowed does not scale with the amount of disk space allowed.

  resource_limits.conf:                   quota_files=80000
  resource_limits.conf.large.m3.xlarge:   quota_files=80000
  resource_limits.conf.medium.m3.xlarge:  quota_files=80000
  resource_limits.conf.small.m3.xlarge:   quota_files=80000
  resource_limits.conf.xpaas.m3.xlarge:   quota_files=80000

For a "large" node profile that allocates 4Gb of space to each gear, why does it not also allocate 320,000 inodes (80k inodes per 1Gb)?

Version-Release number of selected component (if applicable):
2.2

Comment 1 Scott Dodson 2015-03-04 19:50:51 UTC
Those values should be updated in our example config files. I don't see anything in the pull request that raised the limit to 80000 per GB that actually makes quota_files scale with quota_blocks.

References:
https://github.com/openshift/origin-server/pull/4279/files
https://bugzilla.redhat.com/show_bug.cgi?id=1031112
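
For reference, a minimal sketch in Python (hypothetical, not origin-server code, which is Ruby) of the scaling rule being discussed:

  INODES_PER_GB = 80000
  BLOCKS_PER_GB = 1024 * 1024  # quota_blocks counts 1 KB blocks

  def scaled_quota_files(quota_blocks):
      # 80000 inodes for every GB of block quota, as proposed in this bug
      return quota_blocks * INODES_PER_GB // BLOCKS_PER_GB

  # e.g. a 4 GB gear (quota_blocks=4194304) -> 320000 inodes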

Comment 2 Scott Dodson 2015-03-04 19:59:10 UTC
PR to update the example configs:

https://github.com/openshift/origin-server/pull/6091

Comment 3 Scott Dodson 2015-03-04 20:16:16 UTC
80000 per GB is probably safe in general. The default filesystem parameters for ext4 allocate 65536 inodes per GB, but inodes are 256 bytes in size, so you'll end up using far fewer than that unless all of your files are < 256 bytes in size.
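
A quick check of that arithmetic, assuming the stock ext4 mkfs defaults (16384-byte bytes-per-inode ratio, 256-byte inodes):

  bytes_per_inode = 16384  # ext4 default bytes-per-inode ratio (assumed)
  inode_size = 256         # ext4 default inode size in bytes

  inodes_per_gb = (1024 ** 3) // bytes_per_inode  # 65536 inodes per GB
  table_overhead = inode_size / bytes_per_inode   # 0.015625 -> ~1.6% of the disk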

Comment 4 Miciah Dashiel Butler Masters 2015-03-04 20:23:56 UTC
Shouldn't the quota be lower than the filesystem limit to prevent DoSing? Or is the idea that very few users will even approach the quota, which gives us slack to overcommit and allow more inodes to other users who might otherwise hit the quota, in case those users have legitimate reasons for creating many thousands of <256-byte files?

Comment 5 Scott Dodson 2015-03-04 21:27:13 UTC
Miciah,

Yeah, the online teams suggest that most users hit quota_blocks rather than quota_files, so I think ~20% oversubscription on inodes is probably safe.

I'm fine with also changing the scaling in the examples to 65536, though I guess we should leave the default at 80000.

--
Scott
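
(As a quick check of the ~20% figure: 80000 / 65536 ≈ 1.22, i.e. roughly 22% more inodes per GB than the ext4 default allocates.)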

Comment 7 Miciah Dashiel Butler Masters 2015-09-23 18:34:02 UTC
The issue was resolved with <https://github.com/openshift/origin-server/commit/9b3149223b6e6c564e1259ecc5dce9ebceb88369>, which shipped in rubygem-openshift-origin-node-1.35.4.2-1.el6op included in RHBA-2015:0779 "Red Hat OpenShift Enterprise 2.2.5 bug fix and enhancement update".

