Bug 1839096
| Summary: | Clients Intermittently Can't Grab Puppet Node Definition | | |
|---|---|---|---|
| Product: | Red Hat Satellite | Reporter: | myoder |
| Component: | Puppet | Assignee: | satellite6-bugs <satellite6-bugs> |
| Status: | CLOSED WONTFIX | QA Contact: | Vladimír Sedmík <vsedmik> |
| Severity: | low | Docs Contact: | |
| Priority: | unspecified | ||
| Version: | 6.6.0 | CC: | mhulan |
| Target Milestone: | Unspecified | Keywords: | Triaged |
| Target Release: | Unused | ||
| Hardware: | Unspecified | ||
| OS: | Unspecified | ||
| Whiteboard: | |||
| Fixed In Version: | | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2022-09-02 15:20:14 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | |||
Does the customer use any ERB in their smart class parameters or global parameters (the value would contain <% %> tags)? The error indicates Satellite failed to talk with some external service during the rendering of some ERB template. This does not seem to be a problem of Satellite itself, but rather of the external service not responding in time.

Hi Marek, yes, it does look like ERB templates are being used in the puppet module. Is there anything we could add in the puppet modules that would help with logging, to perhaps show where the error is coming from? Kind regards,

I didn't mean ERB in the puppet modules, but rather ERB used as the value of some smart class parameter. Note that such a value in Satellite can be dynamic and gets evaluated only when a puppetserver asks Satellite for the ENC. That could cause some non-deterministic behavior. So review all parameters for the problematic host(s): does any value of such parameters in Satellite contain the <% %> tags?

Upon review of our valid but aging backlog, the Satellite Team has concluded that this Bugzilla does not meet the criteria for a resolution in the near term, and we are planning to close it in a month. This message may be a repeat of a previous update, and the bug is again being considered for closure. If you have any concerns about this, please contact your Red Hat Account team. Thank you.

Thank you for your interest in Red Hat Satellite. We have evaluated this request, and while we recognize that it is a valid request, we do not expect this to be implemented in the product in the foreseeable future. This is due to other priorities for the product, and not a reflection on the request itself. We are therefore closing this out as WONTFIX. If you have any concerns about this, feel free to contact your Red Hat Account Team. Thank you.
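As background to the point about deferred evaluation: an ERB value stored in a smart class or global parameter does nothing when it is saved; it only runs when the puppetserver requests the ENC for a host, which is why a slow lookup inside the tags shows up intermittently at check-in time. Below is a minimal sketch of that behavior in plain Ruby; the parameter value and the `expensive_lookup` helper are hypothetical, and this is not Foreman's rendering code.

~~~
require 'erb'

# Hypothetical parameter value as it might be stored in Satellite.
# Saving the string costs nothing; the code inside <%= %> only runs
# when the template is rendered.
stored_value = "<%= expensive_lookup %>"

# Stand-in for whatever the ERB reaches out to (an external service,
# a slow lookup, etc.) -- purely illustrative.
def expensive_lookup
  sleep 10
  "some-value"
end

# Rendering happens when the puppetserver asks Satellite for the ENC,
# so this is the moment the 10-second stall is actually felt.
puts ERB.new(stored_value).result(binding)
~~~

If the lookup usually answers quickly and only occasionally stalls, the same parameter value renders fine most of the time, which would match the intermittent failures described below.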
Description of problem:
Very infrequently, clients are unable to grab their node definition for their puppet modules. Systems are set up to run puppet modules every 30 minutes. In the foreman-ssl_access.log we see 412 errors being returned for a GET request for a node definition for a content host. This happens very infrequently (maybe once a day), and on random hosts. Hosts get their node definition just fine, then fail once, then continue to get their node definition just fine.

This is the error from foreman-ssl_access.log:
~~~
1.2.3.4 - - [17/Feb/2020:21:00:49 -0500] "GET /node/node.example.com?format=yml HTTP/1.1" 412 42 "-" "Ruby"
~~~

Putting Foreman in debug, we see this in the production.log:
~~~
2020-03-06T06:01:04 [W|app|9635f82c] Failed to generate external nodes for node.example.com
2020-03-06T06:01:04 [D|app|9635f82c] Backtrace for 'Failed to generate external nodes for node.example.com' error (Timeout::Error): execution expired
/opt/rh/rh-ruby25/root/usr/share/ruby/timeout.rb:108:in `timeout'
/opt/theforeman/tfm/root/usr/share/gems/gems/ruby_parser-3.10.1/lib/ruby_parser_extras.rb:1075:in `process'
/opt/theforeman/tfm/root/usr/share/gems/gems/ruby_parser-3.10.1/lib/ruby_parser.rb:31:in `block in process'
/opt/theforeman/tfm/root/usr/share/gems/gems/ruby_parser-3.10.1/lib/ruby_parser.rb:28:in `each'
/opt/theforeman/tfm/root/usr/share/gems/gems/ruby_parser-3.10.1/lib/ruby_parser.rb:28:in `process'
/opt/theforeman/tfm/root/usr/share/gems/gems/safemode-1.3.5/lib/safemode/parser.rb:18:in `parse'
/opt/theforeman/tfm/root/usr/share/gems/gems/safemode-1.3.5/lib/safemode/parser.rb:9:in `jail'
/opt/theforeman/tfm/root/usr/share/gems/gems/safemode-1.3.5/lib/safemode.rb:49:in `eval'
/usr/share/foreman/lib/foreman/renderer/safe_mode_renderer.rb:7:in `render'
/usr/share/foreman/lib/foreman/renderer/base_renderer.rb:16:in `render'
/usr/share/foreman/lib/foreman/renderer.rb:45:in `render'
~~~

In the first sosreport, we saw this issue with 2 hosts, on different days, at different times:
~~~
17/Feb/2020:21:00:49 -0500
18/Feb/2020:08:06:03 -0500
~~~
Quite random, without any correlation.

Version-Release number of selected component (if applicable):
Satellite 6.6

How reproducible:
Not sure how to reproduce

Steps to Reproduce:
1.
2.
3.

Actual results:

Expected results:

Additional info:
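For reference, the shape of the failure in the backtrace above (template rendering wrapped in a timeout, surfacing as Timeout::Error "execution expired") can be reproduced in isolation. This is a minimal sketch only: the 5-second limit, the template text, and the error handling are illustrative assumptions, not Foreman's actual code or settings.

~~~
require 'erb'
require 'timeout'

# Hypothetical slow template value; sleep stands in for a call that
# does not respond in time.
template = "<%= sleep(10); 'classes: {}' %>"

begin
  # If rendering exceeds the limit, Timeout raises Timeout::Error
  # ("execution expired"), the same error class Foreman logs before
  # answering the /node/<fqdn>?format=yml request with a 412.
  Timeout.timeout(5) do
    puts ERB.new(template).result(binding)
  end
rescue Timeout::Error => e
  warn "Failed to generate external nodes: #{e.message}"
end
~~~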