Red Hat Satellite engineering is moving the tracking of its product development work on Satellite to Red Hat Jira (issues.redhat.com). If you're a Red Hat customer, please continue to file support cases via the Red Hat customer portal. If you're not, please head to the "Satellite project" in Red Hat Jira and file new tickets there. Individual Bugzilla bugs will be migrated starting at the end of May. If you cannot log in to RH Jira, please consult article #7032570. Failing that, please send an e-mail to the RH Jira admins at rh-issues@redhat.com to troubleshoot your issue as a user management inquiry. The email creates a ServiceNow ticket with Red Hat. Individual Bugzilla bugs that are migrated will be moved to status "CLOSED", resolution "MIGRATED", and set with "MigratedToJIRA" in "Keywords". The link to the successor Jira issue will be found under "Links", have a little "two-footprint" icon next to it, and direct you to the "Satellite project" in Red Hat Jira (issue links are of the form "https://issues.redhat.com/browse/SAT-XXXX", where "X" is a digit). This same link will be available in a blue banner at the top of the page informing you that the bug has been migrated.
Bug 2025760 - installer does not restart foreman.service when changing puma configuration
Summary: installer does not restart foreman.service when changing puma configuration
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Installation
Version: 6.10.0
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: 6.11.0
Assignee: Evgeni Golov
QA Contact: Gaurav Talreja
URL:
Whiteboard:
Duplicates: 2036222 (view as bug list)
Depends On:
Blocks:
 
Reported: 2021-11-23 01:37 UTC by wclark
Modified: 2022-08-15 15:22 UTC
CC: 7 users

Fixed In Version: foreman-installer-3.1.2.4
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 2070993 (view as bug list)
Environment:
Last Closed: 2022-07-05 14:30:29 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Foreman Issue Tracker 33973 0 Normal Closed installer does not restart foreman.service when changing puma configuration 2022-02-28 16:14:47 UTC
Foreman Issue Tracker 34824 0 Normal Closed installer does not restart foreman.service when changing puma configuration 2022-04-29 07:14:30 UTC
Red Hat Product Errata RHSA-2022:5498 0 None None None 2022-07-05 14:30:39 UTC

Description wclark 2021-11-23 01:37:23 UTC
Initial setup: Puma configured and running with 4 workers and 4 threads per worker

# grep -n puma /etc/foreman-installer/scenarios.d/satellite-answers.yaml 
122:  foreman_service_puma_threads_min: 
123:  foreman_service_puma_threads_max: 4
124:  foreman_service_puma_workers: 4

# ps aux | grep -v grep | grep -e USER -e puma
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
foreman   3472  0.6  2.2 951716 462672 ?       Ssl  18:08   0:47 puma 5.3.2 (unix:///run/foreman.sock) [foreman]
foreman   4060  0.0  3.1 1141940 648704 ?      Sl   18:09   0:05 puma: cluster worker 0: 3472 [foreman]
foreman   4061  0.0  3.1 1139888 648544 ?      Sl   18:09   0:06 puma: cluster worker 1: 3472 [foreman]
foreman   4064  0.1  3.0 1119804 618152 ?      Sl   18:09   0:12 puma: cluster worker 2: 3472 [foreman]
foreman   4065  0.0  3.0 1143428 631664 ?      Sl   18:09   0:05 puma: cluster worker 3: 3472 [foreman]
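(Aside: to count the running cluster workers without eyeballing the `ps` listing, a small helper of my own — not part of Satellite — can be used; the bracketed `[p]` keeps the grep's own command line from matching when piped from `ps aux`.)

```shell
# count_puma_workers: count "puma: cluster worker" lines on stdin.
# Illustrative helper based on the ps listing above; the [p] character
# class stops this grep from matching its own process in ps output.
count_puma_workers() {
  grep -c '[p]uma: cluster worker'
}

# Usage: ps aux | count_puma_workers
```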


Reproduction steps:

1. Change the Puma configuration via foreman_service_puma_threads_max and/or foreman_service_puma_workers:

# satellite-installer --foreman-foreman-service-puma-workers 5 --foreman-foreman-service-puma-threads-max 5 --verbose-log-level=info

2. Check that the number of puma worker processes running is still 4 after the installer has completed:

# ps aux | grep -v grep | grep -e USER -e puma
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
foreman   8148 11.9  2.2 951776 462376 ?       Ssl  20:21   0:43 puma 5.3.2 (unix:///run/foreman.sock) [foreman]
foreman   8219  1.2  3.1 1150208 649108 ?      Sl   20:22   0:03 puma: cluster worker 0: 8148 [foreman]
foreman   8223  0.0  2.2 978532 456372 ?       Sl   20:22   0:00 puma: cluster worker 1: 8148 [foreman]
foreman   8227  0.0  2.2 978532 456260 ?       Sl   20:22   0:00 puma: cluster worker 2: 8148 [foreman]
foreman   8236  0.0  2.2 978532 456492 ?       Sl   20:22   0:00 puma: cluster worker 3: 8148 [foreman]

3. Restart foreman.service: # systemctl restart foreman.service

4. Observe there are now 5 puma worker processes running:

# ps aux | grep -v grep | grep -e USER -e puma
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
foreman   8857 76.2  2.2 947880 457108 ?       Ssl  20:28   0:44 puma 5.3.2 (unix:///run/foreman.sock) [foreman]
foreman   8883  1.2  2.1 976688 450764 ?       Sl   20:28   0:00 puma: cluster worker 0: 8857 [foreman]
foreman   8886  1.1  2.1 976688 450628 ?       Sl   20:28   0:00 puma: cluster worker 1: 8857 [foreman]
foreman   8894  1.1  2.1 976688 450616 ?       Sl   20:28   0:00 puma: cluster worker 2: 8857 [foreman]
foreman   8900  1.1  2.1 976688 450956 ?       Sl   20:28   0:00 puma: cluster worker 3: 8857 [foreman]
foreman   8902  1.2  2.1 976688 450748 ?       Sl   20:28   0:00 puma: cluster worker 4: 8857 [foreman]



Further information:

You can observe that foreman.socket was restarted while foreman.service was not restarted:

2021-11-22 20:21:22 [DEBUG ] [configure] Executing: '/bin/systemctl restart -- foreman.socket'
2021-11-22 20:21:23 [INFO  ] [configure] /Service[foreman.socket]: Triggered 'refresh' from 1 event
2021-11-22 20:21:23 [DEBUG ] [configure] /Service[foreman.socket]: The container Class[Foreman::Service] will propagate my refresh event
2021-11-22 20:21:23 [DEBUG ] [configure] /Service[foreman.socket]: Evaluated in 1.36 seconds
2021-11-22 20:21:23 [DEBUG ] [configure] /Service[foreman]: Starting to evaluate the resource (1592 of 2025)
2021-11-22 20:21:23 [DEBUG ] [configure] Executing: '/bin/systemctl is-active -- foreman'
2021-11-22 20:21:23 [DEBUG ] [configure] Executing: '/bin/systemctl is-enabled -- foreman'
2021-11-22 20:21:23 [DEBUG ] [configure] Executing: '/bin/systemctl show --property=NeedDaemonReload -- foreman'
2021-11-22 20:21:23 [DEBUG ] [configure] Executing: '/bin/systemctl daemon-reload'
2021-11-22 20:21:23 [DEBUG ] [configure] Executing: '/bin/systemctl unmask -- foreman'
2021-11-22 20:21:23 [DEBUG ] [configure] Executing: '/bin/systemctl start -- foreman'
2021-11-22 20:22:08 [DEBUG ] [configure] Executing: '/bin/systemctl is-enabled -- foreman'
2021-11-22 20:22:08 [INFO  ] [configure] /Stage[main]/Foreman::Service/Service[foreman]/ensure: ensure changed 'stopped' to 'running'
2021-11-22 20:22:08 [DEBUG ] [configure] /Service[foreman]: The container Class[Foreman::Service] will propagate my refresh event
2021-11-22 20:22:08 [DEBUG ] [configure] /Service[foreman]: Unscheduling refresh on Service[foreman]
2021-11-22 20:22:08 [DEBUG ] [configure] /Service[foreman]: Evaluated in 44.66 seconds

Comment 1 wclark 2021-11-23 02:22:08 UTC
Created redmine issue https://projects.theforeman.org/issues/33973 from this bug

Comment 2 Bryan Kearney 2021-11-30 08:05:45 UTC
Moving this bug to POST for triage into Satellite since the upstream issue https://projects.theforeman.org/issues/33973 has been resolved.

Comment 3 Evgeni Golov 2022-01-27 12:43:45 UTC
*** Bug 2036222 has been marked as a duplicate of this bug. ***

Comment 6 Evgeni Golov 2022-04-22 09:40:10 UTC
Omkar poked me via chat about this, and I've investigated his setup a bit.

Here are the relevant installer and system logs:
2022-04-22 04:39:49 [DEBUG ] [configure] Adding relationship from File[/etc/systemd/system/foreman.service.d/installer.conf] to Service[foreman] with 'notify'
2022-04-22 04:39:51 [DEBUG ] [configure] /Stage[main]/Foreman::Config/Systemd::Dropin_file[foreman-service]/File[/etc/systemd/system/foreman.service.d/installer.conf]/notify: notify to Service[foreman]
2022-04-22 04:40:12 [INFO  ] [configure] /Stage[main]/Foreman::Config/Systemd::Dropin_file[foreman-service]/File[/etc/systemd/system/foreman.service.d/installer.conf]/content:
2022-04-22 04:40:12 [INFO  ] [configure] --- /etc/systemd/system/foreman.service.d/installer.conf       2022-04-22 04:19:07.223912046 -0400
2022-04-22 04:40:12 [INFO  ] [configure] +++ /tmp/puppet-file20220422-60048-s2v7oa      2022-04-22 04:40:12.570242661 -0400
2022-04-22 04:40:12 [INFO  ] [configure] @@ -4,4 +4,4 @@
2022-04-22 04:40:12 [INFO  ] [configure] Environment=FOREMAN_HOME=/usr/share/foreman
2022-04-22 04:40:12 [INFO  ] [configure] Environment=FOREMAN_PUMA_THREADS_MIN=5
2022-04-22 04:40:12 [INFO  ] [configure] Environment=FOREMAN_PUMA_THREADS_MAX=5
2022-04-22 04:40:12 [INFO  ] [configure] -Environment=FOREMAN_PUMA_WORKERS=5
2022-04-22 04:40:12 [INFO  ] [configure] +Environment=FOREMAN_PUMA_WORKERS=3
2022-04-22 04:40:12 [INFO  ] [configure] /Stage[main]/Foreman::Config/Systemd::Dropin_file[foreman-service]/File[/etc/systemd/system/foreman.service.d/installer.conf]/content: content changed '{sha256}64a6656af6b5bc96dcc5fafbd9595e0a9f54e06d0f528a5ba5171759b834e124' to '{sha256}794ebb890c119323bb93353334e70111d0f6dbf6730f66e153399821a47349d2'
2022-04-22 04:40:12 [DEBUG ] [configure] /Stage[main]/Foreman::Config/Systemd::Dropin_file[foreman-service]/File[/etc/systemd/system/foreman.service.d/installer.conf]: Scheduling refresh of Service[foreman]
2022-04-22 04:40:12 [DEBUG ] [configure] /Stage[main]/Foreman::Config/Systemd::Dropin_file[foreman-service]/File[/etc/systemd/system/foreman.service.d/installer.conf]: The container Systemd::Dropin_file[foreman-service] will propagate my refresh event
2022-04-22 04:40:12 [DEBUG ] [configure] /Stage[main]/Foreman::Config/Systemd::Dropin_file[foreman-service]/File[/etc/systemd/system/foreman.service.d/installer.conf]: Evaluated in 0.03 seconds
2022-04-22 04:42:12 [DEBUG ] [configure] /Service[foreman.socket]: Starting to evaluate the resource (2265 of 2522)
2022-04-22 04:42:12 [DEBUG ] [configure] Executing: '/bin/systemctl is-active -- foreman.socket'
2022-04-22 04:42:12 [DEBUG ] [configure] Executing: '/bin/systemctl is-enabled -- foreman.socket'
2022-04-22 04:42:12 [DEBUG ] [configure] Executing: '/bin/systemctl is-active -- foreman.socket'
2022-04-22 04:42:12 [DEBUG ] [configure] Executing: '/bin/systemctl show --property=NeedDaemonReload -- foreman.socket'
2022-04-22 04:42:12 [DEBUG ] [configure] Executing: '/bin/systemctl restart -- foreman.socket'
2022-04-22 04:42:14 [INFO  ] [configure] /Service[foreman.socket]: Triggered 'refresh' from 1 event
2022-04-22 04:42:14 [DEBUG ] [configure] /Service[foreman.socket]: The container Class[Foreman::Service] will propagate my refresh event
2022-04-22 04:42:14 [DEBUG ] [configure] /Service[foreman.socket]: Evaluated in 1.38 seconds
2022-04-22 04:42:14 [DEBUG ] [configure] /Service[foreman]: Starting to evaluate the resource (2266 of 2522)
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl is-active -- foreman'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl is-enabled -- foreman'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl show --property=NeedDaemonReload -- foreman'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl daemon-reload'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl unmask -- foreman'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl start -- foreman'
2022-04-22 04:42:55 [DEBUG ] [configure] Executing: '/bin/systemctl is-enabled -- foreman'
2022-04-22 04:42:55 [INFO  ] [configure] /Stage[main]/Foreman::Service/Service[foreman]/ensure: ensure changed 'stopped' to 'running'
2022-04-22 04:42:55 [DEBUG ] [configure] /Service[foreman]: The container Class[Foreman::Service] will propagate my refresh event
2022-04-22 04:42:55 [DEBUG ] [configure] /Service[foreman]: Unscheduling refresh on Service[foreman]
2022-04-22 04:42:55 [DEBUG ] [configure] /Service[foreman]: Evaluated in 41.07 seconds

Apr 22 04:42:12 sat.example.com systemd[1]: Stopping Foreman...
Apr 22 04:42:12 sat.example.com systemd[1]: Stopping Foreman jobs daemon - worker-1 on sidekiq...
Apr 22 04:42:12 sat.example.com systemd[1]: Stopping Foreman jobs daemon - orchestrator on sidekiq...
Apr 22 04:42:12 sat.example.com systemd[1]: Stopping Foreman jobs daemon - worker-hosts-queue-1 on sidekiq...
Apr 22 04:42:12 sat.example.com foreman[55835]: [55835] === puma shutdown: 2022-04-22 04:42:12 -0400 ===
Apr 22 04:42:12 sat.example.com foreman[55835]: [55835] - Goodbye!
Apr 22 04:42:12 sat.example.com foreman[55835]: [55835] - Gracefully shutting down workers...
Apr 22 04:42:13 sat.example.com systemd[1]: dynflow-sidekiq: Succeeded.
Apr 22 04:42:13 sat.example.com systemd[1]: Stopped Foreman jobs daemon - orchestrator on sidekiq.
Apr 22 04:42:13 sat.example.com systemd[1]: Starting Foreman jobs daemon - orchestrator on sidekiq...
Apr 22 04:42:13 sat.example.com systemd[1]: dynflow-sidekiq: Succeeded.
Apr 22 04:42:13 sat.example.com systemd[1]: Stopped Foreman jobs daemon - worker-1 on sidekiq.
Apr 22 04:42:13 sat.example.com systemd[1]: Starting Foreman jobs daemon - worker-1 on sidekiq...
Apr 22 04:42:14 sat.example.com foreman[55835]: Exiting
Apr 22 04:42:14 sat.example.com systemd[1]: foreman.service: Succeeded.
Apr 22 04:42:14 sat.example.com systemd[1]: Stopped Foreman.
Apr 22 04:42:14 sat.example.com systemd[1]: foreman.socket: Succeeded.
Apr 22 04:42:14 sat.example.com systemd[1]: Closed Foreman HTTP Server Accept Sockets.
Apr 22 04:42:14 sat.example.com systemd[1]: Stopping Foreman HTTP Server Accept Sockets.
Apr 22 04:42:14 sat.example.com systemd[1]: foreman.socket: TCP_NODELAY failed: Operation not supported
Apr 22 04:42:14 sat.example.com systemd[1]: Starting Foreman HTTP Server Accept Sockets.
Apr 22 04:42:14 sat.example.com systemd[1]: Listening on Foreman HTTP Server Accept Sockets.
Apr 22 04:42:14 sat.example.com systemd[1]: Starting Foreman...
Apr 22 04:42:14 sat.example.com systemd[1]: Reloading.
Apr 22 04:42:14 sat.example.com systemd[1]: Reloading.
Apr 22 04:42:24 sat.example.com foreman[62315]: => Booting Puma
Apr 22 04:42:24 sat.example.com foreman[62315]: => Rails 6.0.3.7 application starting in production
Apr 22 04:42:24 sat.example.com foreman[62315]: => Run `rails server --help` for more startup options
Apr 22 04:42:37 sat.example.com foreman[62315]: /usr/share/gems/gems/foreman_theme_satellite-9.0.0.6/app/models/concerns/distributor_version.rb:5: warning: already initialized constant Katello::Glue::Provider::D>
Apr 22 04:42:37 sat.example.com foreman[62315]: /usr/share/gems/gems/katello-4.3.0.20/app/models/katello/glue/provider.rb:3: warning: previous definition of DISTRIBUTOR_VERSION was here
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Enabling systemd notification integration
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] Puma starting in cluster mode...
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Puma version: 5.6.2 (ruby 2.7.4-p191) ("Birdie's Version")
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] *  Min threads: 5
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] *  Max threads: 5
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] *  Environment: production
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] *   Master PID: 62315
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] *      Workers: 5
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] *     Restarts: (✔) hot (✖) phased
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Preloading application
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Activated unix:///run/foreman.sock
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] Use Ctrl-C to stop
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Starting control server on unix:///usr/share/foreman/tmp/sockets/pumactl.sock
Apr 22 04:42:55 sat.example.com foreman[62315]: [62315] - Worker 0 (PID: 62433) booted in 0.31s, phase: 0
Apr 22 04:42:55 sat.example.com foreman[62315]: [62315] - Worker 1 (PID: 62437) booted in 0.3s, phase: 0
Apr 22 04:42:55 sat.example.com foreman[62315]: [62315] - Worker 2 (PID: 62444) booted in 0.31s, phase: 0
Apr 22 04:42:55 sat.example.com foreman[62315]: [62315] - Worker 3 (PID: 62450) booted in 0.33s, phase: 0
Apr 22 04:42:55 sat.example.com foreman[62315]: [62315] - Worker 4 (PID: 62453) booted in 0.34s, phase: 0
Apr 22 04:42:55 sat.example.com systemd[1]: Started Foreman.

At first sight this looks correct: Puppet schedules notifications when our drop-in file changes, and foreman IS restarted… but it has the wrong number of workers!

When we look closer, the following lines stand out:
Apr 22 04:42:14 sat.example.com systemd[1]: Starting Foreman...
Apr 22 04:42:14 sat.example.com systemd[1]: Reloading.
Apr 22 04:42:14 sat.example.com systemd[1]: Reloading.

Foreman was started *before* systemd was reloaded, which means that even though the service was restarted, it is still using the OLD values from the drop-in file, not the new ones we just deployed.
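The race can be illustrated with a toy shell simulation (all names here are mine, purely illustrative): systemd only re-reads drop-in files at daemon-reload time, so a unit started before the reload keeps the previously cached environment.

```shell
# Toy model of the race: loaded_workers stands for the value systemd
# cached at the last daemon-reload, file_workers for the value just
# written to installer.conf. Illustrative only, not real systemd code.
loaded_workers=5
file_workers=3

daemon_reload() { loaded_workers=$file_workers; }
start_service() { echo "workers=$loaded_workers"; }

start_service    # started before the reload -> workers=5 (old value)
daemon_reload
start_service    # started after the reload  -> workers=3 (new value)
```

This is exactly the pattern in the logs above: the drop-in on disk said 3 workers, but Puma booted with the 5 workers systemd had cached.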

Let's look at the Puppet log again (I removed a few lines to make it more obvious):
2022-04-22 04:42:12 [DEBUG ] [configure] /Service[foreman.socket]: Starting to evaluate the resource (2265 of 2522)
2022-04-22 04:42:12 [DEBUG ] [configure] Executing: '/bin/systemctl is-active -- foreman.socket'
2022-04-22 04:42:12 [DEBUG ] [configure] Executing: '/bin/systemctl restart -- foreman.socket'
2022-04-22 04:42:14 [DEBUG ] [configure] /Service[foreman]: Starting to evaluate the resource (2266 of 2522)
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl is-active -- foreman'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl show --property=NeedDaemonReload -- foreman'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl daemon-reload'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl unmask -- foreman'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl start -- foreman'
2022-04-22 04:42:55 [DEBUG ] [configure] Executing: '/bin/systemctl is-enabled -- foreman'
2022-04-22 04:42:55 [INFO  ] [configure] /Stage[main]/Foreman::Service/Service[foreman]/ensure: ensure changed 'stopped' to 'running'

Puppet restarted the socket, then moved on to the service, identified that it needed a daemon reload, performed the reload, and then started(!) the service.
Why did it start, rather than restart, the service? Because restarting the socket *does* restart the service too (in the background), so "is-active" returned inactive, which means the service just needed a start, and that's what Puppet did for us. But starting an already-running service is a no-op.

And this is why the service is running with the wrong drop-in content.
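The decision Puppet makes here can be sketched as a tiny shell function (my own illustration of the behaviour visible in the log, not Puppet's actual code):

```shell
# puppet_action_for: given the output of `systemctl is-active <unit>`,
# return the command Puppet ends up issuing, per the log above: an
# active unit with a pending refresh gets restarted, while an inactive
# unit is merely started and the pending refresh is unscheduled.
puppet_action_for() {
  if [ "$1" = "active" ]; then
    echo restart
  else
    echo start
  fi
}
```

Since the socket restart had just taken foreman down, Puppet saw "inactive" and issued only a start — against a service systemd was already bringing back with the stale environment.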

Comment 7 Evgeni Golov 2022-04-22 13:58:00 UTC
Replacing the require of foreman.service on foreman.socket (https://github.com/theforeman/puppet-foreman/blob/1b5eec23169a585f506a8317823c8bb41a6224ff/manifests/service.pp#L46) with a "before", so that the service is always handled before the socket, seems to fix it.

But I have no idea *why* this relationship was there in the first place.

Comment 8 Bryan Kearney 2022-04-27 20:04:37 UTC
Upstream bug assigned to wclark

Comment 9 Evgeni Golov 2022-04-29 07:14:54 UTC
The second fix is merged.

Comment 10 Omkar Khatavkar 2022-05-16 08:08:11 UTC
Steps executed:

# satellite-installer --foreman-foreman-service-puma-workers 4 --foreman-foreman-service-puma-threads-max 4 --verbose-log-level=info

# ps aux | grep -v grep | grep -e USER -e puma
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
foreman  38325  4.8  2.2 949408 456240 ?       Ssl  03:37   0:42 puma 5.6.2 (unix:///run/foreman.sock) [foreman]
foreman  38389  0.6  3.0 1151460 618412 ?      Sl   03:37   0:05 puma: cluster worker 0: 38325 [foreman]
foreman  38394  0.5  3.1 1139196 635224 ?      Sl   03:37   0:04 puma: cluster worker 1: 38325 [foreman]
foreman  38400  0.5  3.0 1144936 617900 ?      Sl   03:37   0:04 puma: cluster worker 2: 38325 [foreman]
foreman  38406  0.0  2.2 976164 451096 ?       Sl   03:37   0:00 puma: cluster worker 3: 38325 [foreman]


Verified on Satellite 6.11 snap 20; it is working as expected.

Comment 13 errata-xmlrpc 2022-07-05 14:30:29 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Moderate: Satellite 6.11 Release), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5498

