Bug 2025760

Summary: installer does not restart foreman.service when changing puma configuration

| Field | Value |
|---|---|
| Product | Red Hat Satellite |
| Component | Installation |
| Version | 6.10.0 |
| Hardware | Unspecified |
| OS | Unspecified |
| Priority | medium |
| Severity | medium |
| Status | CLOSED ERRATA |
| Reporter | wclark |
| Assignee | Evgeni Golov <egolov> |
| QA Contact | Gaurav Talreja <gtalreja> |
| CC | ahumbe, egolov, ehelms, gtalreja, pmendezh, pmoravec, tasander |
| Keywords | Triaged |
| Target Milestone | 6.11.0 |
| Target Release | Unused |
| Fixed In Version | foreman-installer-3.1.2.4 |
| Doc Type | If docs needed, set a value |
| Clones | 2070993 (view as bug list) |
| Type | Bug |
| Last Closed | 2022-07-05 14:30:29 UTC |
Description
wclark
2021-11-23 01:37:23 UTC
Created redmine issue https://projects.theforeman.org/issues/33973 from this bug

Moving this bug to POST for triage into Satellite since the upstream issue https://projects.theforeman.org/issues/33973 has been resolved.

*** Bug 2036222 has been marked as a duplicate of this bug. ***

Omkar poked me via chat about this, and I've investigated his setup a bit. Here are the relevant installer and system logs:

```
2022-04-22 04:39:49 [DEBUG ] [configure] Adding relationship from File[/etc/systemd/system/foreman.service.d/installer.conf] to Service[foreman] with 'notify'
2022-04-22 04:39:51 [DEBUG ] [configure] /Stage[main]/Foreman::Config/Systemd::Dropin_file[foreman-service]/File[/etc/systemd/system/foreman.service.d/installer.conf]/notify: notify to Service[foreman]
2022-04-22 04:40:12 [INFO ] [configure] /Stage[main]/Foreman::Config/Systemd::Dropin_file[foreman-service]/File[/etc/systemd/system/foreman.service.d/installer.conf]/content:
2022-04-22 04:40:12 [INFO ] [configure] --- /etc/systemd/system/foreman.service.d/installer.conf 2022-04-22 04:19:07.223912046 -0400
2022-04-22 04:40:12 [INFO ] [configure] +++ /tmp/puppet-file20220422-60048-s2v7oa 2022-04-22 04:40:12.570242661 -0400
2022-04-22 04:40:12 [INFO ] [configure] @@ -4,4 +4,4 @@
2022-04-22 04:40:12 [INFO ] [configure]  Environment=FOREMAN_HOME=/usr/share/foreman
2022-04-22 04:40:12 [INFO ] [configure]  Environment=FOREMAN_PUMA_THREADS_MIN=5
2022-04-22 04:40:12 [INFO ] [configure]  Environment=FOREMAN_PUMA_THREADS_MAX=5
2022-04-22 04:40:12 [INFO ] [configure] -Environment=FOREMAN_PUMA_WORKERS=5
2022-04-22 04:40:12 [INFO ] [configure] +Environment=FOREMAN_PUMA_WORKERS=3
2022-04-22 04:40:12 [INFO ] [configure] /Stage[main]/Foreman::Config/Systemd::Dropin_file[foreman-service]/File[/etc/systemd/system/foreman.service.d/installer.conf]/content: content changed '{sha256}64a6656af6b5bc96dcc5fafbd9595e0a9f54e06d0f528a5ba5171759b834e124' to '{sha256}794ebb890c119323bb93353334e70111d0f6dbf6730f66e153399821a47349d2'
2022-04-22 04:40:12 [DEBUG ] [configure] /Stage[main]/Foreman::Config/Systemd::Dropin_file[foreman-service]/File[/etc/systemd/system/foreman.service.d/installer.conf]: Scheduling refresh of Service[foreman]
2022-04-22 04:40:12 [DEBUG ] [configure] /Stage[main]/Foreman::Config/Systemd::Dropin_file[foreman-service]/File[/etc/systemd/system/foreman.service.d/installer.conf]: The container Systemd::Dropin_file[foreman-service] will propagate my refresh event
2022-04-22 04:40:12 [DEBUG ] [configure] /Stage[main]/Foreman::Config/Systemd::Dropin_file[foreman-service]/File[/etc/systemd/system/foreman.service.d/installer.conf]: Evaluated in 0.03 seconds
2022-04-22 04:42:12 [DEBUG ] [configure] /Service[foreman.socket]: Starting to evaluate the resource (2265 of 2522)
2022-04-22 04:42:12 [DEBUG ] [configure] Executing: '/bin/systemctl is-active -- foreman.socket'
2022-04-22 04:42:12 [DEBUG ] [configure] Executing: '/bin/systemctl is-enabled -- foreman.socket'
2022-04-22 04:42:12 [DEBUG ] [configure] Executing: '/bin/systemctl is-active -- foreman.socket'
2022-04-22 04:42:12 [DEBUG ] [configure] Executing: '/bin/systemctl show --property=NeedDaemonReload -- foreman.socket'
2022-04-22 04:42:12 [DEBUG ] [configure] Executing: '/bin/systemctl restart -- foreman.socket'
2022-04-22 04:42:14 [INFO ] [configure] /Service[foreman.socket]: Triggered 'refresh' from 1 event
2022-04-22 04:42:14 [DEBUG ] [configure] /Service[foreman.socket]: The container Class[Foreman::Service] will propagate my refresh event
2022-04-22 04:42:14 [DEBUG ] [configure] /Service[foreman.socket]: Evaluated in 1.38 seconds
2022-04-22 04:42:14 [DEBUG ] [configure] /Service[foreman]: Starting to evaluate the resource (2266 of 2522)
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl is-active -- foreman'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl is-enabled -- foreman'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl show --property=NeedDaemonReload -- foreman'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl daemon-reload'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl unmask -- foreman'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl start -- foreman'
2022-04-22 04:42:55 [DEBUG ] [configure] Executing: '/bin/systemctl is-enabled -- foreman'
2022-04-22 04:42:55 [INFO ] [configure] /Stage[main]/Foreman::Service/Service[foreman]/ensure: ensure changed 'stopped' to 'running'
2022-04-22 04:42:55 [DEBUG ] [configure] /Service[foreman]: The container Class[Foreman::Service] will propagate my refresh event
2022-04-22 04:42:55 [DEBUG ] [configure] /Service[foreman]: Unscheduling refresh on Service[foreman]
2022-04-22 04:42:55 [DEBUG ] [configure] /Service[foreman]: Evaluated in 41.07 seconds
```

```
Apr 22 04:42:12 sat.example.com systemd[1]: Stopping Foreman...
Apr 22 04:42:12 sat.example.com systemd[1]: Stopping Foreman jobs daemon - worker-1 on sidekiq...
Apr 22 04:42:12 sat.example.com systemd[1]: Stopping Foreman jobs daemon - orchestrator on sidekiq...
Apr 22 04:42:12 sat.example.com systemd[1]: Stopping Foreman jobs daemon - worker-hosts-queue-1 on sidekiq...
Apr 22 04:42:12 sat.example.com foreman[55835]: [55835] === puma shutdown: 2022-04-22 04:42:12 -0400 ===
Apr 22 04:42:12 sat.example.com foreman[55835]: [55835] - Goodbye!
Apr 22 04:42:12 sat.example.com foreman[55835]: [55835] - Gracefully shutting down workers...
Apr 22 04:42:13 sat.example.com systemd[1]: dynflow-sidekiq: Succeeded.
Apr 22 04:42:13 sat.example.com systemd[1]: Stopped Foreman jobs daemon - orchestrator on sidekiq.
Apr 22 04:42:13 sat.example.com systemd[1]: Starting Foreman jobs daemon - orchestrator on sidekiq...
Apr 22 04:42:13 sat.example.com systemd[1]: dynflow-sidekiq: Succeeded.
Apr 22 04:42:13 sat.example.com systemd[1]: Stopped Foreman jobs daemon - worker-1 on sidekiq.
Apr 22 04:42:13 sat.example.com systemd[1]: Starting Foreman jobs daemon - worker-1 on sidekiq...
Apr 22 04:42:14 sat.example.com foreman[55835]: Exiting
Apr 22 04:42:14 sat.example.com systemd[1]: foreman.service: Succeeded.
Apr 22 04:42:14 sat.example.com systemd[1]: Stopped Foreman.
Apr 22 04:42:14 sat.example.com systemd[1]: foreman.socket: Succeeded.
Apr 22 04:42:14 sat.example.com systemd[1]: Closed Foreman HTTP Server Accept Sockets.
Apr 22 04:42:14 sat.example.com systemd[1]: Stopping Foreman HTTP Server Accept Sockets.
Apr 22 04:42:14 sat.example.com systemd[1]: foreman.socket: TCP_NODELAY failed: Operation not supported
Apr 22 04:42:14 sat.example.com systemd[1]: Starting Foreman HTTP Server Accept Sockets.
Apr 22 04:42:14 sat.example.com systemd[1]: Listening on Foreman HTTP Server Accept Sockets.
Apr 22 04:42:14 sat.example.com systemd[1]: Starting Foreman...
Apr 22 04:42:14 sat.example.com systemd[1]: Reloading.
Apr 22 04:42:14 sat.example.com systemd[1]: Reloading.
Apr 22 04:42:24 sat.example.com foreman[62315]: => Booting Puma
Apr 22 04:42:24 sat.example.com foreman[62315]: => Rails 6.0.3.7 application starting in production
Apr 22 04:42:24 sat.example.com foreman[62315]: => Run `rails server --help` for more startup options
Apr 22 04:42:37 sat.example.com foreman[62315]: /usr/share/gems/gems/foreman_theme_satellite-9.0.0.6/app/models/concerns/distributor_version.rb:5: warning: already initialized constant Katello::Glue::Provider::D>
Apr 22 04:42:37 sat.example.com foreman[62315]: /usr/share/gems/gems/katello-4.3.0.20/app/models/katello/glue/provider.rb:3: warning: previous definition of DISTRIBUTOR_VERSION was here
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Enabling systemd notification integration
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] Puma starting in cluster mode...
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Puma version: 5.6.2 (ruby 2.7.4-p191) ("Birdie's Version")
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Min threads: 5
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Max threads: 5
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Environment: production
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Master PID: 62315
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Workers: 5
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Restarts: (✔) hot (✖) phased
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Preloading application
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Activated unix:///run/foreman.sock
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] Use Ctrl-C to stop
Apr 22 04:42:54 sat.example.com foreman[62315]: [62315] * Starting control server on unix:///usr/share/foreman/tmp/sockets/pumactl.sock
Apr 22 04:42:55 sat.example.com foreman[62315]: [62315] - Worker 0 (PID: 62433) booted in 0.31s, phase: 0
Apr 22 04:42:55 sat.example.com foreman[62315]: [62315] - Worker 1 (PID: 62437) booted in 0.3s, phase: 0
Apr 22 04:42:55 sat.example.com foreman[62315]: [62315] - Worker 2 (PID: 62444) booted in 0.31s, phase: 0
Apr 22 04:42:55 sat.example.com foreman[62315]: [62315] - Worker 3 (PID: 62450) booted in 0.33s, phase: 0
Apr 22 04:42:55 sat.example.com foreman[62315]: [62315] - Worker 4 (PID: 62453) booted in 0.34s, phase: 0
Apr 22 04:42:55 sat.example.com systemd[1]: Started Foreman.
```

At first sight this looks correct: Puppet schedules notifications when our drop-in file changes, and foreman IS restarted… but it comes up with the wrong number of workers! Looking closer, the following lines stand out:

```
Apr 22 04:42:14 sat.example.com systemd[1]: Starting Foreman...
Apr 22 04:42:14 sat.example.com systemd[1]: Reloading.
Apr 22 04:42:14 sat.example.com systemd[1]: Reloading.
```
Foreman was started *before* systemd was reloaded, which means that while the service was restarted, it was still using the OLD values from the drop-in file, not the new ones we had just deployed. Let's look at the Puppet log again (I removed a few lines to make it more obvious):

```
2022-04-22 04:42:12 [DEBUG ] [configure] /Service[foreman.socket]: Starting to evaluate the resource (2265 of 2522)
2022-04-22 04:42:12 [DEBUG ] [configure] Executing: '/bin/systemctl is-active -- foreman.socket'
2022-04-22 04:42:12 [DEBUG ] [configure] Executing: '/bin/systemctl restart -- foreman.socket'
2022-04-22 04:42:14 [DEBUG ] [configure] /Service[foreman]: Starting to evaluate the resource (2266 of 2522)
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl is-active -- foreman'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl show --property=NeedDaemonReload -- foreman'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl daemon-reload'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl unmask -- foreman'
2022-04-22 04:42:14 [DEBUG ] [configure] Executing: '/bin/systemctl start -- foreman'
2022-04-22 04:42:55 [DEBUG ] [configure] Executing: '/bin/systemctl is-enabled -- foreman'
2022-04-22 04:42:55 [INFO ] [configure] /Stage[main]/Foreman::Service/Service[foreman]/ensure: ensure changed 'stopped' to 'running'
```

Puppet restarted the socket, then moved on to the service, identified that it needed a daemon reload, performed that, and then started(!) the service. Why did it start, rather than restart, the service? Because a restart of the socket *does* restart the service too (but in the background), so "is-active" returned inactive; Puppet therefore concluded the service merely needed a start, and that's what it did for us. But starting an already-running service is a no-op. And this is why the service is running with the wrong drop-in content.
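The decision Puppet ends up making here can be sketched as a toy model (hypothetical function and variable names, greatly simplified; this is not Puppet's actual provider code). Because the socket restart has already stopped the service in the background, `is-active` reports inactive, the ensure handling issues a plain `start`, and the pending refresh, which would have been the `restart` we wanted, is unscheduled:

```python
def converge_service(is_active: bool, refresh_scheduled: bool) -> list:
    """Toy model of how a service resource with a pending refresh is handled.

    Returns the list of systemctl verbs that would be executed.
    """
    actions = []
    if not is_active:
        # ensure => running: an inactive service just gets started ...
        actions.append("start")
        # ... and that also satisfies the scheduled refresh, so the restart
        # never happens ("Unscheduling refresh on Service[foreman]").
        refresh_scheduled = False
    if refresh_scheduled:
        actions.append("restart")
    return actions

# Healthy case: service still running, drop-in changed -> restart picks up new env.
assert converge_service(is_active=True, refresh_scheduled=True) == ["restart"]

# Buggy case from the log: the socket restart already (re)started the service in
# the background with the OLD drop-in values, is-active returned inactive at
# check time, so Puppet only issues a start, which is a no-op on a running unit.
assert converge_service(is_active=False, refresh_scheduled=True) == ["start"]
```

In this model the "start" of the buggy case runs against a unit that is already up with stale environment values, which matches the observed "Workers: 5" after the drop-in was changed to 3.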
Replacing the "require" of foreman.service on foreman.socket (https://github.com/theforeman/puppet-foreman/blob/1b5eec23169a585f506a8317823c8bb41a6224ff/manifests/service.pp#L46) with a "before", so that the service is always handled before the socket, seems to fix it. But I have no idea *why* this relationship was there in the first place.

Upstream bug assigned to wclark

the second fix is merged.

Steps executed:

```
# satellite-installer --foreman-foreman-service-puma-workers 4 --foreman-foreman-service-puma-threads-max 4 --verbose-log-level=info
# ps aux | grep -v grep | grep -e USER -e puma
USER       PID %CPU %MEM     VSZ    RSS TTY STAT START TIME COMMAND
foreman  38325  4.8  2.2  949408 456240 ?   Ssl  03:37 0:42 puma 5.6.2 (unix:///run/foreman.sock) [foreman]
foreman  38389  0.6  3.0 1151460 618412 ?   Sl   03:37 0:05 puma: cluster worker 0: 38325 [foreman]
foreman  38394  0.5  3.1 1139196 635224 ?   Sl   03:37 0:04 puma: cluster worker 1: 38325 [foreman]
foreman  38400  0.5  3.0 1144936 617900 ?   Sl   03:37 0:04 puma: cluster worker 2: 38325 [foreman]
foreman  38406  0.0  2.2  976164 451096 ?   Sl   03:37 0:00 puma: cluster worker 3: 38325 [foreman]
```

Verified on Satellite 6.11 with snap 20; it is working as expected.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: Satellite 6.11 Release) and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5498
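The fix swaps the direction of the dependency edge between the two resources. A minimal illustration of why that changes the evaluation order (plain Python standard-library topological sorting, not Puppet's actual graph code; the resource names mirror the manifest):

```python
from graphlib import TopologicalSorter

def apply_order(edges):
    """Return one valid evaluation order for a resource dependency graph.

    `edges` maps each resource to the set of resources that must be
    evaluated BEFORE it (i.e. its predecessors).
    """
    return list(TopologicalSorter(edges).static_order())

# Old manifest: Service[foreman] *requires* Service[foreman.socket], so the
# socket is evaluated (and restarted) first, stopping the service in the
# background before its own refresh is handled.
old = apply_order({"Service[foreman]": {"Service[foreman.socket]"}})
assert old.index("Service[foreman.socket]") < old.index("Service[foreman]")

# Fixed manifest: Service[foreman] is *before* Service[foreman.socket], so the
# service (with its daemon-reload and restart) is handled first, and only then
# is the socket restarted.
new = apply_order({"Service[foreman.socket]": {"Service[foreman]"}})
assert new.index("Service[foreman]") < new.index("Service[foreman.socket]")
```

With the service handled first, the scheduled refresh actually results in a restart against the freshly reloaded unit definition, so the new FOREMAN_PUMA_WORKERS value takes effect.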