Bug 2036222 - Changing puma workers via satellite-installer forgets to restart foreman service to apply the change
Keywords:
Status: CLOSED DUPLICATE of bug 2025760
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Installation
Version: 6.10.0
Hardware: x86_64
OS: Linux
Priority: medium
Severity: medium
Target Milestone: 6.11.0
Assignee: satellite6-bugs
QA Contact: Omkar Khatavkar
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2021-12-30 15:54 UTC by Pavel Moravec
Modified: 2022-07-19 16:29 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-01-27 12:43:45 UTC
Target Upstream Version:
Embargoed:



Description Pavel Moravec 2021-12-30 15:54:10 UTC
Description of problem:
Changing the number of Puma workers via satellite-installer has no immediate effect. Running

satellite-installer --foreman-foreman-service-puma-workers=8

does update FOREMAN_PUMA_WORKERS in /etc/systemd/system/foreman.service.d/installer.conf properly, but the 'foreman' service is not restarted afterwards, so the old number of workers keeps running.
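
To see which value is actually configured, the drop-in file can be parsed directly. A minimal sketch; the sample line below is inlined so the snippet is self-contained, whereas on a live Satellite you would grep the real file:

```shell
# Read the configured Puma worker count from the systemd drop-in.
# On a live Satellite host you would read the real file instead:
#   grep FOREMAN_PUMA_WORKERS /etc/systemd/system/foreman.service.d/installer.conf
conf_line='Environment=FOREMAN_PUMA_WORKERS=8'
configured="${conf_line##*=}"   # strip everything up to the last '='
echo "configured workers: $configured"
```

Note that this only reports the configured value; as described above, the running worker count can differ until the 'foreman' service is restarted.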


Version-Release number of selected component (if applicable):
Sat 6.10 (also 6.9)


How reproducible:
100%


Steps to Reproduce:
1. Check currently set number of Puma workers and running Puma worker processes:
grep WORKERS /etc/systemd/system/foreman.service.d/installer.conf
ps aux | grep puma | grep worker | nl

2. Change it (use a value different from the current number of workers):
satellite-installer --foreman-foreman-service-puma-workers=8

3. Repeat the checks from step 1.

4. Restart the 'foreman' service:
systemctl restart foreman

5. Repeat the checks from step 1.
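
The comparison done by hand in the steps above can also be scripted. A minimal sketch; the check_workers helper and the sample ps lines are illustrative, not part of Satellite tooling (on a live host you would feed it real `ps aux` output):

```shell
# Compare the configured worker count with the number of running
# "puma: cluster worker" processes; a mismatch means the foreman
# service still needs a restart for the new setting to take effect.
check_workers() {
  local configured=$1 ps_output=$2 running
  running=$(printf '%s\n' "$ps_output" | grep -c 'cluster worker')
  if [ "$configured" -eq "$running" ]; then
    echo "in sync: $running workers"
  else
    echo "mismatch: configured=$configured, running=$running -- restart the foreman service"
  fi
}

# Sample ps output with 2 workers; on a live host use:
#   ps aux | grep 'puma: cluster worker'
sample='foreman 28025 ... puma: cluster worker 0: 27954 [foreman]
foreman 28031 ... puma: cluster worker 1: 27954 [foreman]'
check_workers 8 "$sample"
```

With the sample data this reports a mismatch, which is exactly the state the bug leaves the system in after step 2.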


Actual results:
1. shows the same number in FOREMAN_PUMA_WORKERS as running worker processes:
Environment=FOREMAN_PUMA_WORKERS=6
     1	foreman  28025  0.1  1.3 974404 449932 ?       Sl   16:23   0:00 puma: cluster worker 0: 27954 [foreman]
     2	foreman  28031  0.1  1.3 974404 450008 ?       Sl   16:23   0:00 puma: cluster worker 1: 27954 [foreman]
     3	foreman  28037  0.1  1.3 976456 450328 ?       Sl   16:23   0:00 puma: cluster worker 2: 27954 [foreman]
     4	foreman  28042  0.1  1.3 974404 449892 ?       Sl   16:23   0:00 puma: cluster worker 3: 27954 [foreman]
     5	foreman  28046  2.9  1.9 1155700 623236 ?      Sl   16:23   0:04 puma: cluster worker 4: 27954 [foreman]
     6	foreman  28049  0.1  1.3 976456 450116 ?       Sl   16:23   0:00 puma: cluster worker 5: 27954 [foreman]

3. shows the changed config but the same processes:
Environment=FOREMAN_PUMA_WORKERS=8
     1	foreman  28025  0.1  1.3 974404 449932 ?       Sl   16:23   0:00 puma: cluster worker 0: 27954 [foreman]
     2	foreman  28031  0.1  1.3 974404 450008 ?       Sl   16:23   0:00 puma: cluster worker 1: 27954 [foreman]
     3	foreman  28037  0.1  1.3 976456 450328 ?       Sl   16:23   0:00 puma: cluster worker 2: 27954 [foreman]
     4	foreman  28042  0.1  1.3 974404 449892 ?       Sl   16:23   0:00 puma: cluster worker 3: 27954 [foreman]
     5	foreman  28046  2.9  1.9 1155700 623236 ?      Sl   16:23   0:04 puma: cluster worker 4: 27954 [foreman]
     6	foreman  28049  0.1  1.3 976456 450116 ?       Sl   16:23   0:00 puma: cluster worker 5: 27954 [foreman]

5. only after the service restart do we see the change applied:
Environment=FOREMAN_PUMA_WORKERS=8
     1	foreman  29007  1.0  1.3 976380 449632 ?       Sl   16:46   0:00 puma: cluster worker 0: 28979 [foreman]
     2	foreman  29012  1.1  1.3 974328 449704 ?       Sl   16:46   0:00 puma: cluster worker 1: 28979 [foreman]
     3	foreman  29018  1.0  1.3 976380 449580 ?       Sl   16:46   0:00 puma: cluster worker 2: 28979 [foreman]
     4	foreman  29025  1.0  1.3 976380 449880 ?       Sl   16:46   0:00 puma: cluster worker 3: 28979 [foreman]
     5	foreman  29028  1.0  1.3 976380 449776 ?       Sl   16:46   0:00 puma: cluster worker 4: 28979 [foreman]
     6	foreman  29032  1.1  1.3 976380 449668 ?       Sl   16:46   0:00 puma: cluster worker 5: 28979 [foreman]
     7	foreman  29037  1.0  1.3 976380 449696 ?       Sl   16:46   0:00 puma: cluster worker 6: 28979 [foreman]
     8	foreman  29044  1.1  1.3 976380 449692 ?       Sl   16:46   0:00 puma: cluster worker 7: 28979 [foreman]


Expected results:
Step 3 should already show 8 worker processes.


Additional info:

Comment 1 Pavel Moravec 2022-01-10 19:06:36 UTC
Sat 6.9 is affected in the same way.

Comment 3 Evgeni Golov 2022-01-27 12:39:48 UTC
This is a duplicate of BZ#2025760, which is fixed in 7.0 (the fix can't be trivially backported, as it relies on changes to a third-party Puppet module).

Comment 4 Evgeni Golov 2022-01-27 12:43:45 UTC

*** This bug has been marked as a duplicate of bug 2025760 ***

