Red Hat Bugzilla – Bug 1031112
users are hitting their quota limit very quickly due to the new deployments feature
Last modified: 2015-05-14 19:32:53 EDT
Description of problem:
Users are hitting their quota limits (either disk space or file/inode limits) very quickly because of the new deployments feature, which by default keeps 1 extra deployment copy. There needs to be an option to turn this off, or everyone's disk space should be doubled (at least).
We have also received a few emails to firstname.lastname@example.org about the same issue.
Version-Release number of selected component (if applicable):
Depends on what they are pushing up as an app, but I think it will only get worse.
Steps to Reproduce:
1. create an application
2. add some node.js modules or ruby gems
3. git push; the disk fills up much more quickly, hitting both the file (inode) limit and the disk-space limit
There should be a way to turn off deployments, or free users' disk space should be increased (probably silver users' too).
I can reproduce this error. It is not possible to deploy a Rails app. I tried to deploy Redmine + 11 plugins (approx. 3700 files). After only one commit, the first deployment already fails with this error:
remote: sent 198864294 bytes received 324389 bytes 4106983.15 bytes/sec
remote: total size is 203670962 speedup is 1.02
remote: stderr: rsync: mkstemp "/var/lib/openshift/5289204750044632df00025f/app-deployments/2013-11-17_15-57-33.214/repo/vendor/bundle/ruby/1.9.1/gems/ruby-ole-22.214.171.124/test/.test_meta_data.rb.IEcD1K" failed: Disk quota exceeded (122)
Counting the files per directory (for i in *; do echo -n "$i: "; find $i | sort -u | wc -l ; done | sort -rn | head -6) gives me this result:
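The one-liner above can be written as a slightly more robust sketch that sorts the biggest inode consumers first; it assumes you run it from the gear's home directory, and the directory names are whatever your app contains.

```shell
# Count entries (files + directories, i.e. inodes) under each
# top-level directory and list the six biggest consumers first.
for dir in */; do
    printf '%s\t%s\n' "$(find "$dir" | wc -l | tr -d ' ')" "$dir"
done | sort -rn | head -6
```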
what is causing so many files to be created in deployments?
Same with Python and Django.
Using Sebastian's code above:
The strange thing is, when I am in /app-root/runtime/dependencies/python/virtenv/build
and then cd into pytz, I have:
Also with Capedwarf:
At first my app was working fine; now I cannot 'git push' or even run 'rhc app-tidy' to clean up disk space, because it says:
'Warning gear <> is using 100.0% of inodes allowed'
We have a Node.js app that deployed fine until recently.
I removed the app, recreated it, and the first deployment succeeded without errors.
The second time:
remote: error: unable to create temporary file: Disk quota exceeded
remote: fatal: failed to write object
error: unpack failed: unpack-objects abnormal exit
! [remote rejected] master -> master (unpacker error)
error: failed to push some refs to 'ssh://email@example.com/~/git/stage.git/'
Will upgrading to Silver solve this? Guaranteed?
I have tried to break this down a little with an example Ruby app; here are some of the largest gems in terms of file count. Many of these are pretty standard for any Ruby app:
In total, the app I looked at was using nearly 22000 files in its gems directory. Looking closer, the biggest issue here is nokogiri: drilling down further, the problem appears to be that nokogiri pulls in full source trees for libxml2 and libxslt, even though these libraries and their -devel subpackages are already present on the system. These files and their duplication under app-deployments account for over 30% of the gear's quota and should not be necessary. Even if they are temporarily necessary during the build, it should be possible to clean them up when the build completes.
In the case of passenger, the largest directory by far is the boost extension, which contains over 2000 files; it's not clear to me how much of that is temporary versus permanently required.
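To see how much of the quota a single gem like nokogiri is eating, something like the following sketch can help; the vendor/bundle layout is the usual bundler path on a gear, and the glob is an assumption you may need to adjust.

```shell
# List each installed copy of nokogiri with the number of files it
# contributes to the inode quota. Adjust the glob if your bundler
# path differs (this layout is an assumption, not guaranteed).
nokogiri_file_counts() {
    for g in vendor/bundle/ruby/*/gems/nokogiri-*; do
        [ -d "$g" ] || continue
        printf '%s\t%s\n' "$(find "$g" -type f | wc -l | tr -d ' ')" "$g"
    done
}
```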
Looks like setting NOKOGIRI_USE_SYSTEM_LIBRARIES might help a lot here. I'm going to test that.
I have confirmed that for this one particular case, setting NOKOGIRI_USE_SYSTEM_LIBRARIES=1 at app create time made a substantial difference in both the build time and the number of inodes required to deploy the application.
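For anyone wanting to try the same workaround locally, a minimal sketch is to export the variable before the gem's native extension builds; how you set it at app-create time on OpenShift may differ, so treat this as the general mechanism rather than the exact platform procedure.

```shell
# Tell nokogiri's build to link against the system libxml2/libxslt
# instead of compiling its bundled source trees, which saves a large
# number of files under the gems directory.
export NOKOGIRI_USE_SYSTEM_LIBRARIES=1

# With the variable exported, a subsequent `bundle install` (not run
# here) would pick it up when building the native extension.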
Unfortunately, there may be many other issues, like this one, and we'll have to look at them on a case-by-case basis.
Anyone who is getting the quota error for inodes, please email firstname.lastname@example.org with the following information:
UUID of your gear (if you just provide the ENTIRE git url that would be fine)
The url to your application (app-domain.rhcloud.com)
Subject of the email: [inode limit fix]
And we will put a fix in place until it is fixed globally.
With the 2 pull requests above, the default inode quotas for new gears should be 80000 instead of 40000. Additionally, when migrating to the next version, all gears whose inode quotas are now too low will have their inode quotas increased.
1. Create apps and verify their inode quota is 40000 (or manually set the quota to 40000)
2. Update to the latest RPMs
3. Make sure /etc/openshift/resource_limits.conf has been updated appropriately so quota_files is 80000, or manually make this change.
4. Restart mcollective
5. Upgrade a gear or the entire node
6. Check the quota of the upgraded gears; it should now be 80000 inodes per GB of storage quota (e.g. 80000 for 1 GB, 160000 for 2 GB, etc.)
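The scaling rule in step 6 can be sketched as a small helper; expected_inode_limit is a hypothetical name used only for illustration.

```shell
# Hypothetical helper: expected inode limit for a gear, given its
# storage quota in GB, per the 80000-inodes-per-GB rule above.
expected_inode_limit() {
    gb="$1"
    echo $(( gb * 80000 ))
}

expected_inode_limit 1   # prints 80000
expected_inode_limit 2   # prints 160000
```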
Raising the limit to 80000 will only help for certain apps. Our simple Node.js app already uses 33373 files on OpenShift:
Disk quotas for user 529f8c225973ca844d000080 (uid 6655):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
                   459M       0   1024M           33373       0   40000
In ~, running for i in *; do echo -e "$(find $i | wc -l)\t$i"; done | sort -n outputs:
I'm not using deployments, or at least I haven't touched my deployments configuration, yet a copy is still kept.
My app is 122 MB, with 112 MB in node_modules. Where have the other 337 MB gone?
node_modules = 8195 files; a few thousand of those come from the dev dependencies, so running npm install --production would help a bit, but keeping more than 2 deployments still won't work.
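To quantify how much a dependency tree like node_modules costs against the inode quota, a minimal sketch is a small counting helper; count_files is a hypothetical name, and the usage lines assume your app installs into node_modules.

```shell
# Count how many files (inodes) a directory tree consumes; useful
# for comparing a full install against `npm install --production`.
count_files() {
    find "$1" -type f | wc -l | tr -d ' '
}

# Usage (assumed paths, not run here):
#   count_files node_modules
#   npm install --production && count_files node_modules
```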
Checked on devenv_4098,
It will fail during the gear upgrade.
Error in the migrate log:
Unhandled exception performing step: undefined method `exists' for File:Class
/usr/libexec/openshift/lib/gear_upgrade_extension.rb:87:in `pre_upgrade'
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.18.1/lib/openshift-origin-node/model/upgrade.rb:249:in `block in gear_pre_upgrade'
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.18.1/lib/openshift-origin-node/utils/upgrade_progress.rb:32:in `step'
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.18.1/lib/openshift-origin-node/model/upgrade.rb:248:in `gear_pre_upgrade'
/opt/rh/ruby193/root/usr/share/gems/gems/openshift-origin-node-1.18.1/lib/openshift-origin-node/model/upgrade.rb:154:in `execute'
/opt/rh/ruby193/root/usr/libexec/mcollective/mcollective/agent/openshift.rb:241:in `upgrade_action'
/opt/rh/ruby193/root/usr/share/ruby/mcollective/rpc/agent.rb:86:in `handlemsg'
/opt/rh/ruby193/root/usr/share/ruby/mcollective/agents.rb:126:in `block (2 levels) in dispatch'
/opt/rh/ruby193/root/usr/share/ruby/timeout.rb:69:in `timeout'
/opt/rh/ruby193/root/usr/share/ruby/mcollective/agents.rb:125:in `block in dispatch'
https://github.com/openshift/li/pull/2211 should fix the exception above. Please retest once this PR merges.
Upgraded from devenv-stage_601 to the latest stage_branch (devenv-stage_604).
The migration script will fix the inode quota for existing gears.
"minimum_inodes_limit = 80000",
"gear_inodes_limit = 40000",
"Setting quota to 1048576 blocks, 80000 inodes",
"Resetting quota blocks: 1048576 inodes: 80000",
Checked the quota for the existing app after upgrade; the inode limit has been fixed.
[springs-bmengmdev.dev.rhcloud.com 52a564bf14a5f353890000e9]\> quota -s
Disk quotas for user 52a564bf14a5f353890000e9 (uid 1004):
     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
     /dev/xvde2   85044       0   1024M            3229       0   80000
We also need to double the disk space allowed for free accounts: users can no longer install apps they used to be able to install, like OpenCV. It takes 600 MB, which would be fine if there were not two copies of it stored on the gear.