Description of problem:
We've been testing our web app image on OpenShift 4 (running on AWS), and I've come across an issue I was hoping you might be able to help with, or at least point me in the right direction. I've been using this app as an example: https://github.com/lholmquist/react-web-app

The problem occurs during the build phase, specifically during "npm install". The error returned is "EMFILE: too many open files", which relates to the ulimit set for the build container. Inside the build pod, "ulimit -n" reports 1024, which is quite low. I've noticed that on OpenShift 3.x (Online) the same value is much higher (1048676), so the error never happens there.

I've tried to set the ulimit in the assemble script of my s2i image, but that returns a permission error, which I half expected. So my question is: is it possible to increase the ulimit value from an s2i image, or is this something that needs to be set when the cluster is created?

Version-Release number of selected component (if applicable):
4.1.0-0.nightly-2019-04-28-064010

How reproducible:
Always

Steps to Reproduce:
1. Start an s2i build using the latest nodejs imagestream tag and https://github.com/lholmquist/react-web-app as the source

Actual results:
Build fails with "EMFILE: too many open files"

Expected results:
Build succeeds
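For reference, a minimal sketch of the kind of assemble-script probe described above, assuming a custom .s2i/bin/assemble override in the application repo (the script path and echo wording are illustrative, not taken from the original report):

#!/bin/bash
# Illustrative .s2i/bin/assemble override: log the limit, then delegate
# to the image's stock assemble script.
echo "open-file limit in the build container: $(ulimit -n)"
# Raising the soft limit fails in an unprivileged build pod, e.g.:
#   ulimit -n 65536   # -> "Operation not permitted"
exec /usr/libexec/s2i/assemble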
Thanks for narrowing down where to look! Opened https://github.com/openshift/builder/pull/69.
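Once a build containing the fix is running, one way to spot-check the limit the build container actually sees is to exec into the build pod (the pod name below is a placeholder following the usual <buildconfig>-<n>-build pattern):

# oc get pods
# oc exec react-web-app-1-build -- sh -c 'ulimit -n'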
Checked, and this issue is fixed; s2i builds work well with `npm install` for `oc new-app nodejs~https://github.com/lholmquist/react-web-app`.

# openshift-sti-build version
openshift-sti-build v4.1.0-201905032232+59e5dc1-dirty

# oc get clusterversion
NAME      VERSION                             AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.1.0-0.nightly-2019-05-05-070156   True        False         15m     Cluster version is 4.1.0-0.nightly-2019-05-05-070156

# oc get nodes -o wide
NAME                                              STATUS   ROLES    AGE   VERSION             INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                                    KERNEL-VERSION         CONTAINER-RUNTIME
ip-10-0-133-56.ap-northeast-1.compute.internal    Ready    worker   23m   v1.13.4+db7b699c3   10.0.133.56    <none>        Red Hat Enterprise Linux CoreOS 410.8.20190505.0 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.9-1.rhaos4.1.gitd70609a.el8
ip-10-0-137-75.ap-northeast-1.compute.internal    Ready    master   33m   v1.13.4+db7b699c3   10.0.137.75    <none>        Red Hat Enterprise Linux CoreOS 410.8.20190505.0 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.9-1.rhaos4.1.gitd70609a.el8
ip-10-0-149-8.ap-northeast-1.compute.internal     Ready    worker   23m   v1.13.4+db7b699c3   10.0.149.8     <none>        Red Hat Enterprise Linux CoreOS 410.8.20190505.0 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.9-1.rhaos4.1.gitd70609a.el8
ip-10-0-158-112.ap-northeast-1.compute.internal   Ready    master   33m   v1.13.4+db7b699c3   10.0.158.112   <none>        Red Hat Enterprise Linux CoreOS 410.8.20190505.0 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.9-1.rhaos4.1.gitd70609a.el8
ip-10-0-169-206.ap-northeast-1.compute.internal   Ready    worker   23m   v1.13.4+db7b699c3   10.0.169.206   <none>        Red Hat Enterprise Linux CoreOS 410.8.20190505.0 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.9-1.rhaos4.1.gitd70609a.el8
ip-10-0-174-121.ap-northeast-1.compute.internal   Ready    master   33m   v1.13.4+db7b699c3   10.0.174.121   <none>        Red Hat Enterprise Linux CoreOS 410.8.20190505.0 (Ootpa)   4.18.0-80.el8.x86_64   cri-o://1.13.9-1.rhaos4.1.gitd70609a.el8
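The verification itself can be reproduced with something like the following (the project name is arbitrary; the buildconfig name follows from the repo name):

# oc new-project emfile-check
# oc new-app nodejs~https://github.com/lholmquist/react-web-app
# oc logs -f bc/react-web-app     <- "npm install" should complete without EMFILE errors
# oc get builds                   <- expect STATUS "Complete"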
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2019:0758