Description of problem:
In each of the provided resource_limits.conf files, the default `quota_files` directive is always set to 80000 inodes. The number of inodes allowed does not scale with the amount of disk space allowed.
For a "large" node profile that allocates 4 GB of space to each gear, why does it not also allocate 320,000 inodes (80,000 inodes per 1 GB)?
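The proportional scaling being requested can be sketched as follows. This is an illustration of the arithmetic only, not code from origin-server; the helper name and the per-GB constant of 80,000 come from the report above.

```python
# 80,000 inodes per 1 GB of gear disk quota, as proposed in this report.
INODES_PER_GB = 80_000

def scaled_quota_files(gear_size_gb):
    """Return a quota_files value that scales with the gear's disk quota
    instead of the flat 80000 currently in the example configs."""
    return gear_size_gb * INODES_PER_GB

print(scaled_quota_files(1))  # small gear, 1 GB -> 80000
print(scaled_quota_files(4))  # large gear, 4 GB -> 320000
```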
These values should be updated in our example config files. I don't see anything in the pull request that raised the limit to 80000 per GB that actually makes it scale based on `quota_blocks`.
PR to update example configs:
80000 per GB is probably safe in general. The default filesystem parameters for ext4 give 65536 inodes per GB, but inodes are 256 bytes in size, so you'll end up using far less than that unless all your files are smaller than 256 bytes.
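As a back-of-envelope check of the ext4 defaults quoted above (assuming the stock mke2fs defaults of one inode per 16 KiB and a 256-byte inode size):

```python
# Verify the "65536 inodes per GB" figure and the cost of the inode table.
GIB = 1024 ** 3
bytes_per_inode = 16 * 1024           # mke2fs bytes-per-inode default
inodes_per_gib = GIB // bytes_per_inode
print(inodes_per_gib)                 # -> 65536

inode_size = 256                      # mke2fs inode-size default, in bytes
table_overhead = inodes_per_gib * inode_size / GIB
print(f"{table_overhead:.1%}")        # -> 1.6% of each GB spent on the inode table
```

So even if a gear exhausted its inode quota, the inode table itself consumes only a small fraction of the disk; files larger than 256 bytes will hit the block quota long before the inode supply matters.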
Shouldn't the quota be lower than the filesystem limit to prevent DoSing? Or is the idea that very few users will even approach the inode quota, which gives us enough slack to overcommit and allow more inodes to the occasional user who has a legitimate reason for creating many thousands of <256-byte files?
Yeah, the Online teams suggest that most users hit quota_blocks rather than quota_files, so I think ~20% oversubscription on inodes is probably safe.
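For reference, the oversubscription implied by an 80000-inode quota against the ext4 default supply works out to roughly the figure mentioned above:

```python
# Per-gear inode quota vs. the filesystem's default inode supply per GB.
quota_per_gb = 80_000
fs_inodes_per_gb = 65_536   # ext4 default (one inode per 16 KiB)

oversubscription = quota_per_gb / fs_inodes_per_gb - 1
print(f"{oversubscription:.0%}")  # -> 22%
```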
I'm fine with also changing the scaling in the examples to 65536, though I guess we should leave the default at 80000.
The issue was resolved with <https://github.com/openshift/origin-server/commit/9b3149223b6e6c564e1259ecc5dce9ebceb88369>, which shipped in rubygem-openshift-origin-node-18.104.22.168-1.el6op included in RHBA-2015:0779 "Red Hat OpenShift Enterprise 2.2.5 bug fix and enhancement update".