Bug 2135498

Summary: Expose puma worker_timeout configuration option
Product: Red Hat Satellite
Reporter: Joniel Pasqualetto <jpasqual>
Component: Installation
Assignee: wclark
Status: CLOSED ERRATA
QA Contact: Griffin Sullivan <gsulliva>
Severity: medium
Priority: medium
Version: 6.12.0
CC: ahumbe, alsouza, ehelms, jpathan, wclark
Target Milestone: 6.14.0
Keywords: FutureFeature, PrioBumpGSS, RFE, Triaged
Target Release: Unused
Hardware: Unspecified
OS: Unspecified
Last Closed: 2023-11-08 14:18:02 UTC
Type: Bug

Description Joniel Pasqualetto 2022-10-17 17:58:55 UTC
Description of problem:

Currently, we can't configure the worker_timeout used by puma workers; the default of 60 seconds is always used.

Changing the worker_timeout would allow users to work around specific issues that occur during peak load. Here's an example where increasing the timeout would help:

~~~
Oct 14 06:08:45 satellite.example.com foreman: [1290] ! Terminating timed out worker (worker failed to check in within 60 seconds): 2420
~~~

Version-Release number of selected component (if applicable):

Actual results:

It is not possible to change the default 60-second timeout in a supported way.

Expected results:
An installer option to define the worker_timeout.
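
For illustration, such an option might look like the sketch below. The flag name is hypothetical, modeled on the naming pattern of the existing Puma installer options such as --foreman-foreman-service-puma-workers:

~~~
# satellite-installer --foreman-foreman-service-puma-worker-timeout 120
~~~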

Additional info:

The timeout can be defined with the following line in the file /usr/share/foreman/config/puma/production.rb:

~~~
worker_timeout 120
~~~

My suggestion is to add a line to the file /usr/share/foreman/config/puma/production.rb that reads a variable defined in the systemd unit, as is already done for the number of workers and the min/max threads.

Like this:

~~~
worker_timeout ENV.fetch('FOREMAN_PUMA_WORKER_TIMEOUT', 60).to_i
~~~
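
For context, a minimal sketch of how this line would sit alongside the existing environment-driven Puma settings in that file (the default values shown here are illustrative, not necessarily the shipped ones):

~~~
# /usr/share/foreman/config/puma/production.rb (sketch; defaults illustrative)
workers ENV.fetch('FOREMAN_PUMA_WORKERS', 2).to_i
threads ENV.fetch('FOREMAN_PUMA_THREADS_MIN', 0).to_i, ENV.fetch('FOREMAN_PUMA_THREADS_MAX', 16).to_i
worker_timeout ENV.fetch('FOREMAN_PUMA_WORKER_TIMEOUT', 60).to_i
~~~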

Then, setting FOREMAN_PUMA_WORKER_TIMEOUT in the systemd unit would be enough:

~~~
# cat /etc/systemd/system/foreman.service.d/installer.conf 
[Service]
User=foreman
Environment=FOREMAN_ENV=production
Environment=FOREMAN_HOME=/usr/share/foreman
Environment=FOREMAN_PUMA_THREADS_MIN=5
Environment=FOREMAN_PUMA_THREADS_MAX=5
Environment=FOREMAN_PUMA_WORKERS=6
Environment=FOREMAN_PUMA_WORKER_TIMEOUT=120 <============ new variable to define the timeout
~~~
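
After changing the drop-in, a systemd reload and a restart of the foreman service would be needed for the new value to take effect:

~~~
# systemctl daemon-reload
# systemctl restart foreman
~~~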

Comment 1 wclark 2022-10-17 19:45:55 UTC
Created redmine issue https://projects.theforeman.org/issues/35641 from this bug

Comment 2 Bryan Kearney 2022-10-25 20:04:28 UTC
Upstream bug assigned to wclark

Comment 4 Bryan Kearney 2023-01-24 15:08:46 UTC
Moving this bug to POST for triage into Satellite since the upstream issue https://projects.theforeman.org/issues/35641 has been resolved.

Comment 6 Griffin Sullivan 2023-06-06 20:00:33 UTC
Verified in 6.14 snap 2

The code change is in the snap. Full verification of this BZ is unnecessary; the changes here are only meant for debugging purposes.

Comment 9 errata-xmlrpc 2023-11-08 14:18:02 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: Satellite 6.14 security and bug fix update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2023:6818