Bug 1031112 - users are hitting their quota limit very quickly due to the new deployments feature
Product: OpenShift Online
Classification: Red Hat
Component: Containers
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Assigned To: Andy Goldstein
QA Contact: libra bugs
Target Release: UpcomingRelease
Depends On:
Reported: 2013-11-15 11:09 EST by Corey Daley
Modified: 2015-05-14 19:32 EDT
CC: 10 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Last Closed: 2014-01-29 19:50:14 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---

Attachments

External Trackers
Tracker ID Priority Status Summary Last Updated
Red Hat Knowledge Base (Article) 549423 None None None Never

Description Corey Daley 2013-11-15 11:09:04 EST
Description of problem:
Users are hitting their quota limits (either disk space or file counts) very quickly because of the new deployments feature, which defaults to keeping one previous deployment. There needs to be an option to turn this off, or everyone's disk space should be doubled (at least).
We have also received a few emails to openshift@redhat.com about the same issue.

Version-Release number of selected component (if applicable):

How reproducible:
Depends on what the user is pushing as an app, but I think it will only get worse.

Steps to Reproduce:
1. create an application
2. add some Node.js modules or Ruby gems
3. git push; the retained deployment copy fills up the disk and inode quotas more quickly

Actual results:

Expected results:
There should be a way to turn off deployments, or free users' disk space should be increased (probably silver users' too)

Additional info:
Comment 1 sebastian.kopf 2013-11-17 17:18:44 EST
I can reproduce this error. It is not possible to deploy a Rails app. I tried to deploy Redmine + 11 plugins (approx. 3,700 files). After only one commit, the first deployment already fails with this error:

remote: sent 198864294 bytes  received 324389 bytes  4106983.15 bytes/sec
remote: total size is 203670962  speedup is 1.02
remote: stderr: rsync: mkstemp "/var/lib/openshift/5289204750044632df00025f/app-deployments/2013-11-17_15-57-33.214/repo/vendor/bundle/ruby/1.9.1/gems/ruby-ole-" failed: Disk quota exceeded (122)

Counting the files (for i in *; do echo -n "$i: "; find $i | sort -u | wc -l; done | sort -rn | head -6) gives me this result:
ruby: 177
postgresql: 1074
git: 36
app-root: 20282
app-deployments: 18442

what is causing so many files to be created in deployments?
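The inline counting one-liner above can be written more readably; a minimal sketch with the same logic (count every path under each top-level directory of the gear home, largest first):

```shell
# Count files (inodes) per top-level directory, largest first.
# Each count includes the directory entry itself.
for dir in */; do
    printf '%s\t%s\n' "$(find "$dir" 2>/dev/null | wc -l)" "${dir%/}"
done | sort -rn | head -6
```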
Comment 2 ddl449 2013-11-18 13:36:18 EST
Same with Python and Django.

Using sebastian's code above:
python: 80
git: 36
app-root: 30980
app-deployments: 8786

The strange thing is when I am in /app-root/runtime/dependencies/python/virtenv/build:
six: 27
requests-oauthlib: 49
requests: 217
PyYAML: 699
pytz: 1267
python-social-auth: 309

And when I cd into pytz I have:
setup.py: 1
setup.cfg: 1
README.txt: 1
pytz.egg-info: 6
pytz: 626
Comment 3 Xybrek 2013-11-20 11:08:12 EST
Also with CapeDwarf:

At first my app was working fine; now I cannot 'git push' or even run 'rhc app-tidy' to clean up disk space, because it says:

'Warning gear <> is using 100.0% of inodes allowed'
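When 'rhc app-tidy' itself fails because the gear is out of inodes, one workaround is to SSH into the gear (e.g. 'rhc ssh <app>') and inspect usage by hand; a minimal sketch, run from the gear's home directory:

```shell
# 'quota -s' shows current usage; the "files" column is the inode
# count against its quota/limit. Guarded so it is a no-op where the
# quota tool is unavailable.
quota -s 2>/dev/null || true

# app-deployments is where the retained deployment copies live
# (directory name pattern as in comment 1); list them:
find app-deployments -maxdepth 1 -type d 2>/dev/null || true
```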
Comment 4 Alex Knol 2013-11-25 04:57:35 EST
We have a Node.js app that deployed fine until recently.
I removed the app, recreated it, and the first deployment succeeded without errors.
The second time:

remote: error: unable to create temporary file: Disk quota exceeded        
remote: fatal: failed to write object        
error: unpack failed: unpack-objects abnormal exit
To ssh://xxxxx@stage-risingtool.rhcloud.com/~/git/stage.git/
 ! [remote rejected] master -> master (unpacker error)
error: failed to push some refs to 'ssh://xxxx@stage-risingtool.rhcloud.com/~/git/stage.git/'

Will upgrading to Silver solve this? Guaranteed?
Comment 5 Andy Grimm 2013-11-25 12:59:10 EST
I have tried to break this down a little from an example Ruby app; here are some of the largest gems in terms of file count. Many of these are pretty standard for any Ruby app:

    103 vendor/bundle/ruby/1.9.1/gems/thor-0.18.1
    107 vendor/bundle/ruby/1.9.1/gems/cucumber-rails-1.3.1
    120 vendor/bundle/ruby/1.9.1/gems/i18n-0.6.1
    123 vendor/bundle/ruby/1.9.1/gems/simple_form-2.1.0
    132 vendor/bundle/ruby/1.9.1/gems/therubyracer-0.11.4
    134 vendor/bundle/ruby/1.9.1/gems/arel-3.0.2
    134 vendor/bundle/ruby/1.9.1/gems/google_visualr-2.1.7
    134 vendor/bundle/ruby/1.9.1/gems/treetop-1.4.15
    136 vendor/bundle/ruby/1.9.1/gems/json-1.8.0
    138 vendor/bundle/ruby/1.9.1/gems/capistrano-2.11.2
    141 vendor/bundle/ruby/1.9.1/gems/capybara-2.1.0
    147 vendor/bundle/ruby/1.9.1/gems/database_cleaner-1.1.1
    160 vendor/bundle/ruby/1.9.1/gems/mail-2.5.4
    163 vendor/bundle/ruby/1.9.1/gems/jquery-ui-rails-4.0.4
    165 vendor/bundle/ruby/1.9.1/gems/best_in_place-2.1.0
    167 vendor/bundle/ruby/1.9.1/gems/rake-10.1.0
    173 vendor/bundle/ruby/1.9.1/gems/gmaps4rails-1.5.6
    174 vendor/bundle/ruby/1.9.1/gems/net-ssh-2.6.8
    181 vendor/bundle/ruby/1.9.1/gems/activerecord-3.2.13
    207 vendor/bundle/ruby/1.9.1/gems/rack-1.4.5
    223 vendor/bundle/ruby/1.9.1/gems/actionpack-3.2.13
    250 vendor/bundle/ruby/1.9.1/gems/rdoc-3.12.2
    251 vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.13
    255 vendor/bundle/ruby/1.9.1/gems/twitter-4.8.1
    293 vendor/bundle/ruby/1.9.1/gems/less-2.3.2
    299 vendor/bundle/ruby/1.9.1/gems/devise-3.0.3
    318 vendor/bundle/ruby/1.9.1/gems/sass-3.2.10
    347 vendor/bundle/ruby/1.9.1/gems/erubis-2.7.0
    381 vendor/bundle/ruby/1.9.1/gems/sass-rails-3.2.6
    635 vendor/bundle/ruby/1.9.1/gems/railties-3.2.13
    665 vendor/bundle/ruby/1.9.1/gems/tzinfo-0.3.37
    669 vendor/bundle/ruby/1.9.1/gems/gherkin-2.12.1
    857 vendor/bundle/ruby/1.9.1/gems/cucumber-1.3.6
   3291 vendor/bundle/ruby/1.9.1/gems/passenger-4.0.13
   7491 vendor/bundle/ruby/1.9.1/gems/nokogiri-1.6.0

In total, the app I looked at was using nearly 22000 files in its gems directory. Looking closer, the biggest issue here is nokogiri, and drilling down further, the problem appears to be that nokogiri pulls in source trees for libxml2 and libxslt, even though these libraries and their -devel subpackages are already present on the system. These files and their duplication under app-deployments account for over 30% of the gear's quota and should not be necessary. Even if they are temporarily necessary during the build, it should be possible to clean them up when the build completes.

In the case of passenger, the largest directory by far is the boost extension, which contains over 2000 files; it's not clear to me how much of that is temporary versus permanently required.
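The per-gem counts above can be reproduced with a sketch along these lines (bundle path taken from the listing; counts include each gem's directory entry):

```shell
# Count files per installed gem, smallest first, matching the
# format of the listing above.
for gem in vendor/bundle/ruby/1.9.1/gems/*/; do
    printf '%7d %s\n' "$(find "$gem" 2>/dev/null | wc -l)" "${gem%/}"
done | sort -n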
Comment 6 Andy Grimm 2013-11-25 13:05:34 EST
looks like setting NOKOGIRI_USE_SYSTEM_LIBRARIES might help a lot here.  I'm going to test that.
Comment 7 Andy Grimm 2013-11-25 13:19:45 EST
Related:  https://github.com/sparklemotion/nokogiri/issues/936
Comment 8 Andy Grimm 2013-11-25 14:23:56 EST
I have confirmed that for this one particular case, setting NOKOGIRI_USE_SYSTEM_LIBRARIES=1 at app create time made a substantial difference in both the build time and the number of inodes required to deploy the application.

Unfortunately, there may be many other issues, like this one, and we'll have to look at them on a case-by-case basis.
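A sketch of how the variable can be set; 'myapp' is a placeholder, and the rhc invocations are assumptions about clients of this era, so check 'rhc help' for your version:

```shell
# Locally, it is an ordinary environment variable read by nokogiri's
# extension build at gem-install time:
export NOKOGIRI_USE_SYSTEM_LIBRARIES=1

# On OpenShift (assumed rhc syntax; verify against your client version):
#   rhc app create myapp ruby-1.9 --env NOKOGIRI_USE_SYSTEM_LIBRARIES=1
#   rhc env set NOKOGIRI_USE_SYSTEM_LIBRARIES=1 --app myapp   # then git push
```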
Comment 9 Corey Daley 2013-11-26 15:47:02 EST
Anyone who is getting the quota error for inodes, please email openshift@redhat.com with the following information:

UUID of your gear (if you just provide the ENTIRE git url that would be fine)
The url to your application (app-domain.rhcloud.com)

Subject of the email: [inode limit fix]

And we will put a fix in place until it is fixed globally.

Comment 10 Andy Goldstein 2013-12-04 11:34:09 EST


With the 2 pull requests above, the default inode quotas for new gears should be 80000 instead of 40000. Additionally, when migrating to the next version, all gears whose inode quotas are now too low will have their inode quotas increased.

Upgrade steps:
1. Create apps and verify their inode quota is 40000 (or manually set the quota to 40000)
2. Update to the latest RPMs
3. Make sure /etc/openshift/resource_limits.conf has been updated so quota_files is 80000, or make this change manually.
4. Restart mcollective
5. Upgrade a gear or the entire node
6. Check the quota of the upgraded gears; it should now be 80000 inodes per GB of storage quota (e.g. 80000 for 1 GB, 160000 for 2 GB, etc.)
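Step 3 can be checked with a small sketch; the config path and quota_files value are from the steps above, and the CONF override variable is a hypothetical convenience so the check can be exercised against a copied file:

```shell
# Verify quota_files in resource_limits.conf (step 3). CONF is an
# assumed override for testing; it defaults to the real path.
CONF="${CONF:-/etc/openshift/resource_limits.conf}"
if grep -q '^quota_files=80000' "$CONF" 2>/dev/null; then
    echo "quota_files is 80000"
else
    echo "quota_files needs updating in $CONF"
fi
```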
Comment 11 Alex Knol 2013-12-05 05:21:43 EST
Raising the limit to 80000 will only help for certain apps. Our simple Node.js app already uses 33373 files on OpenShift:

Disk quotas for user 529f8c225973ca844d000080 (uid 6655): 
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
                   459M       0   1024M           33373       0   40000

In ~, running "for i in *; do echo -e "$(find $i | wc -l)\t$i"; done | sort -n" outputs:
4	gear-registry
39	haproxy
51	nodejs
126	git
16363	app-deployments
16516	app-root
I'm not using deployments (or at least haven't touched my deployments configuration), yet a copy is still kept.

My app is 122 MB, with 112 MB in node_modules. Where have the other 337 MB gone?

node_modules = 8195 files; a few thousand files come from the dev dependencies, so running npm install --production would help a bit, but keeping more than 2 deployments won't work either.
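A hedged sketch of trimming dev dependencies before pushing (npm flags as documented for npm clients of this era; the commented commands are left to run in a real app repo, and the count confirms the effect):

```shell
# In the app's repo: install or prune without devDependencies.
#   npm install --production   # skips devDependencies on a fresh install
#   npm prune --production     # removes devDependencies already installed
# Then re-count the inodes node_modules consumes:
find node_modules -mindepth 1 2>/dev/null | wc -l
```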
Comment 12 Meng Bo 2013-12-05 05:25:05 EST
Checked on devenv_4098; it fails during the gear upgrade.

Error in the migrate log:

            "pre_upgrade": {
                "context": {}, 
                "errors": [
                    "Unhandled exception performing step: undefined method `exists' for File:Class\n/usr/libexec/openshift/lib/gear_upgrade_extension.rb:87:in `pre_upgrade'\n/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.18.1/lib/openshift-origin-node/model/upgrade.rb:249:in `block in gear_pre_upgrade'\n/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.18.1/lib/openshift-origin-node/utils/upgrade_progress.rb:32:in `step'\n/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.18.1/lib/openshift-origin-node/model/upgrade.rb:248:in `gear_pre_upgrade'\n/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.18.1/lib/openshift-origin-node/model/upgrade.rb:154:in `execute'\n/opt/rh/ruby193/root/usr/libexec/mcollective/mcollective/agent/openshift.rb:241:in `upgrade_action'\n/opt/rh/ruby193/root/usr/share/ruby/mcollective/rpc/agent.rb:86:in `handlemsg'\n/opt/rh/ruby193/root/usr/share/ruby/mcollective/agents.rb:126:in `block (2 levels) in dispatch'\n/opt/rh/ruby193/root/usr/share/ruby/timeout.rb:69:in `timeout'\n/opt/rh/ruby193/root/usr/share/ruby/mcollective/agents.rb:125:in `block in dispatch'"
                ],
                "status": "incomplete"
Comment 13 Andy Goldstein 2013-12-05 09:15:54 EST
https://github.com/openshift/li/pull/2211 should fix the exception above. Please retest once this PR merges.
Comment 14 Meng Bo 2013-12-09 03:55:59 EST
Upgrade from devenv-stage_601 to latest stage_branch (devenv-stage_604).

The migration script fixes the inode limits for existing gears.

            "minimum_inodes_limit = 80000", 
            "gear_inodes_limit = 40000", 
            "Setting quota to 1048576 blocks, 80000 inodes", 
            "Resetting quota blocks: 1048576  inodes: 80000", 

Checking the quota for the existing app after upgrade, the inode limit has been fixed:

[springs-bmengmdev.dev.rhcloud.com 52a564bf14a5f353890000e9]\> quota -s
Disk quotas for user 52a564bf14a5f353890000e9 (uid 1004): 
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
     /dev/xvde2   85044       0   1024M            3229       0   80000
Comment 15 Corey Daley 2013-12-13 10:29:12 EST
We also need to double the disk space allowed for free accounts; users can no longer install apps they used to be able to install, like OpenCV. It takes 600 MB, which would be fine if there were not two copies of it stored on the gear.
