Note: This bug is displayed in read-only format because
the product is no longer active in Red Hat Bugzilla.
Description (Karim Boumedhel, 2016-06-07 15:55:29 UTC)
Description of problem:
Launching provisioning of more than 6 concurrent hosts causes the capsule (foreman-proxy) process to die.
Version-Release number of selected component (if applicable):
foreman-proxy-1.7.2.6-1.el7sat.noarch
How reproducible:
Always: provision more than 6 hosts concurrently and the capsule process dies.
Steps to Reproduce:
1. Launch provisioning of more than 6 concurrent hosts
2. Wait
3. Watch the capsule process die
Actual results:
The capsule (foreman-proxy) process dies.
Expected results:
Provisioning of more than 6 concurrent hosts completes and the capsule process keeps running.
Additional info:
We believe the fix is to fine-tune the capsule's passenger.conf so that the PassengerMaxPoolSize parameter is set to a higher value.
PassengerMaxPoolSize 30 does the trick, but other tweaks should also be applied, as described in https://access.redhat.com/sites/default/files/attachments/sat6-perfbrief-v1.0.pdf
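A minimal sketch of the tuning meant above, assuming the capsule's Passenger settings live in an Apache conf snippet such as /etc/httpd/conf.d/passenger.conf (the exact path and surrounding directives vary by install; the idle-time value is an illustrative assumption, not from the perf brief):

```apache
# /etc/httpd/conf.d/passenger.conf  (path is an assumption; adjust to your install)
<IfModule mod_passenger.c>
    # Allow more concurrent application processes so that more than 6
    # simultaneous provisioning requests do not exhaust the pool.
    PassengerMaxPoolSize 30
    # Keep spare processes alive between provisioning bursts (assumed value).
    PassengerPoolIdleTime 300
</IfModule>
```

After editing, restart the web server (e.g. systemctl restart httpd) and re-run the provisioning test.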
I think [1] is the relevant upstream bug and this is a dup of [2]. This has been fixed in 6.1.9 (or some prior z-stream version); I can definitely see the patch applied to foreman-proxy-1.7.2.8-1.el7sat.noarch distributed with 6.1.9.
Checking the attached support cases, the foreman-proxy version is 1.7.2.6-1, where the fix is not present.
Lukas, could you please confirm whether [1] applies to any type of rendering?
[1] http://projects.theforeman.org/issues/12319
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1275647
That is definitely relevant. Yes, it applies to any template; I just used iPXE because it was easy to test with. The bug has been fixed in 6.1.9. Please:
1) Check that the customer upgraded to 6.1.9.
2) Try the reproducer shell script from http://projects.theforeman.org/issues/12319 and see whether it still breaks the proxy process.
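The reproducer boils down to firing many template-render requests at the capsule at once. A minimal sketch of that pattern, where request() is a stand-in for the real HTTP call and CAPSULE_URL is an assumed example endpoint (the actual script is attached to upstream issue #12319):

```shell
#!/bin/sh
# Concurrency sketch: before the PassengerMaxPoolSize fix, more than ~6
# in-flight template renders could kill the foreman-proxy process.
CAPSULE_URL="http://capsule.example.com:8000/unattended/provision"  # assumed endpoint
CONCURRENCY=10

# Stand-in for the real request; against a live capsule, replace the body
# with something like: curl -fso /dev/null "$CAPSULE_URL"
request() { sleep 0.1; }

i=1
while [ "$i" -le "$CONCURRENCY" ]; do
    request &        # launch each request in the background
    i=$((i + 1))
done
wait                 # block until every request has returned
echo "all $CONCURRENCY requests completed"
```

If the proxy survives the burst (and the curl calls succeed against a live capsule), the fix is in place.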
The customer upgraded to 6.1.9 and it seems to have worked. Thanks so much.
[root@server01:~]# rpm -qa | grep foreman-proxy
...
foreman-proxy-1.7.2.8-1.el7sat.noarch
(In reply to Edu Alcaniz from comment #10)
> The customer upgraded to 6.1.9 and it seems to have worked. Thanks so much.
>
> [root@server01:~]# rpm -qa | grep foreman-proxy
> ...
> foreman-proxy-1.7.2.8-1.el7sat.noarch
Thanks for confirming it; closing as a duplicate of bz1275647.
*** This bug has been marked as a duplicate of bug 1275647 ***