Bug 1458857
| Summary: | Improved performance of Puppet and RHSM fact importers | | |
|---|---|---|---|
| Product: | Red Hat Satellite | Reporter: | Chris Duryee <cduryee> |
| Component: | Hosts - Content | Assignee: | Shimon Shtein <sshtein> |
| Status: | CLOSED ERRATA | QA Contact: | jcallaha |
| Severity: | high | Docs Contact: | |
| Priority: | high | | |
| Version: | 6.2.9 | CC: | andrew.schofield, aperotti, aruzicka, bbuckingham, bkearney, cduryee, hmore, inecas, jcallaha, lzap, mmccune, sshtein, wpinheir |
| Target Milestone: | Unspecified | Keywords: | FieldEngineering, PrioBumpField, Triaged |
| Target Release: | Unused | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | foreman-1.11.0.81-1 | Doc Type: | If docs needed, set a value |
| Doc Text: | | Story Points: | --- |
| Clone Of: | | | |
| : | 1463803 (view as bug list) | Environment: | |
| Last Closed: | 2017-08-10 17:02:29 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
Description
Chris Duryee
2017-06-05 16:33:16 UTC
Created redmine issue http://projects.theforeman.org/issues/19951 from this bug

The fact that the step took a long time to change the state doesn't mean the issue is the serialization. I'm pretty sure we would not see the issue if the `run` of the facts importer were cleaned up and only the serialization/deserialization were left there. Do we have a task export with the fact import tasks?

Preliminary analysis: from checks performed by @aruzicka it seems that serialization is not the main time consumer. From my own checks, I can confirm that fact updating for a single host takes around 600 ms. I am investigating how to reduce this time.

Connecting redmine issue http://projects.theforeman.org/issues/20024 from this bug

Upstream bug assigned to sshtein

Moving this bug to POST for triage into Satellite 6, since the upstream issue http://projects.theforeman.org/issues/20024 has been resolved.

QA notes: Please also test with existing systems: import some systems with structured facts without the patch, upgrade (apply the errata), then compare how it works and how this is presented in the Facts pages.

QA notes: Stress test RHSM fact uploads and Puppet fact uploads. Make sure there are multiple Passenger processes serving the requests.

Verified in Satellite 6.2.11 Snap 3.

I registered 100, then an additional 100, docker-based content hosts to my Satellite. Each one looped through a subscription-manager facts upload followed by a 1-second sleep, for an hour. With 100 hosts, the maximum number of pending tasks was 115, with an average of 80 pending tasks. When bumped up to 200 hosts, the maximum number of pending tasks was 328, with an average of 211 pending tasks. Below is the Passenger status with 200 hosts running fact upload loops:

```
-bash-4.2# passenger-status
Version : 4.0.18
Date    : 2017-08-02 12:42:00 -0400
Instance: 21529
----------- General information -----------
Max pool size : 12
Processes     : 12
Requests in top-level queue : 0

----------- Application groups -----------
/usr/share/foreman#default:
  App root: /usr/share/foreman
  Requests in queue: 18
  * PID: 12040   Sessions: 1   Processed: 7128   Uptime: 20h 6m 17s   CPU: 0%    Memory: 582M   Last used: 0s ago
  * PID: 1052    Sessions: 1   Processed: 6734   Uptime: 1h 28m 16s   CPU: 12%   Memory: 641M   Last used: 0s ago
  * PID: 1075    Sessions: 1   Processed: 6805   Uptime: 1h 28m 15s   CPU: 12%   Memory: 306M   Last used: 0s ago
  * PID: 1101    Sessions: 1   Processed: 7220   Uptime: 1h 28m 14s   CPU: 13%   Memory: 309M   Last used: 0s ago
  * PID: 1124    Sessions: 1   Processed: 6869   Uptime: 1h 28m 13s   CPU: 12%   Memory: 629M   Last used: 0s ago
  * PID: 1159    Sessions: 1   Processed: 7382   Uptime: 1h 28m 12s   CPU: 13%   Memory: 669M   Last used: 0s ago
  * PID: 1199    Sessions: 1   Processed: 6003   Uptime: 1h 28m 11s   CPU: 11%   Memory: 300M   Last used: 0s ago
  * PID: 1227    Sessions: 1   Processed: 7072   Uptime: 1h 28m 10s   CPU: 13%   Memory: 304M   Last used: 0s ago
  * PID: 1255    Sessions: 1   Processed: 6897   Uptime: 1h 28m 9s    CPU: 13%   Memory: 299M   Last used: 0s ago
  * PID: 1288    Sessions: 1   Processed: 6724   Uptime: 1h 28m 7s    CPU: 12%   Memory: 299M   Last used: 0s ago
  * PID: 1318    Sessions: 1   Processed: 6829   Uptime: 1h 28m 6s    CPU: 12%   Memory: 298M   Last used: 0s ago

/etc/puppet/rack#default:
  App root: /etc/puppet/rack
  Requests in queue: 0
  * PID: 23904   Sessions: 0   Processed: 270    Uptime: 26h 30m 36s  CPU: 0%    Memory: 50M    Last used: 7s ago
```
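The per-host load generator itself is not included in this report; a minimal sketch of such a loop, assuming each container is already registered with subscription-manager, would be:

```bash
#!/bin/bash
# Hypothetical fact-upload loop for each container host (the actual
# container entrypoint is not part of this report).
while true; do
    # Re-send this host's RHSM facts to the Satellite.
    subscription-manager facts --update
    sleep 1
done
```

While the loops run, Passenger can be sampled periodically on the Satellite, for example with `watch -n 5 passenger-status`.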
After killing the hosts, all the tasks completed within 2 minutes, and the Passenger status was as shown below.

```
-bash-4.2# passenger-status
Version : 4.0.18
Date    : 2017-08-02 12:43:35 -0400
Instance: 21529
----------- General information -----------
Max pool size : 12
Processes     : 12
Requests in top-level queue : 0

----------- Application groups -----------
/usr/share/foreman#default:
  App root: /usr/share/foreman
  Requests in queue: 0
  * PID: 12040   Sessions: 0   Processed: 7163   Uptime: 20h 7m 52s   CPU: 1%    Memory: 582M   Last used: 1m 7s ago
  * PID: 1052    Sessions: 0   Processed: 6766   Uptime: 1h 29m 51s   CPU: 12%   Memory: 641M   Last used: 1m 15s ago
  * PID: 1075    Sessions: 0   Processed: 6844   Uptime: 1h 29m 50s   CPU: 12%   Memory: 306M   Last used: 1m 12s ago
  * PID: 1101    Sessions: 0   Processed: 7262   Uptime: 1h 29m 49s   CPU: 13%   Memory: 309M   Last used: 1m 11s ago
  * PID: 1124    Sessions: 0   Processed: 6900   Uptime: 1h 29m 48s   CPU: 12%   Memory: 630M   Last used: 1m 10s ago
  * PID: 1159    Sessions: 0   Processed: 7421   Uptime: 1h 29m 47s   CPU: 13%   Memory: 669M   Last used: 1m 13s ago
  * PID: 1199    Sessions: 0   Processed: 6039   Uptime: 1h 29m 46s   CPU: 11%   Memory: 301M   Last used: 1m 12s ago
  * PID: 1227    Sessions: 0   Processed: 7117   Uptime: 1h 29m 45s   CPU: 12%   Memory: 304M   Last used: 1m 7s ago
  * PID: 1255    Sessions: 0   Processed: 6947   Uptime: 1h 29m 44s   CPU: 12%   Memory: 300M   Last used: 10s ago
  * PID: 1288    Sessions: 0   Processed: 6763   Uptime: 1h 29m 42s   CPU: 12%   Memory: 299M   Last used: 1m 14s ago
  * PID: 1318    Sessions: 0   Processed: 6862   Uptime: 1h 29m 41s   CPU: 12%   Memory: 298M   Last used: 1m 15s ago

/etc/puppet/rack#default:
  App root: /etc/puppet/rack
  Requests in queue: 0
  * PID: 23904   Sessions: 0   Processed: 270    Uptime: 26h 32m 11s  CPU: 0%    Memory: 50M    Last used: 1m 42s ago
```

For Puppet facts, I created a new container image that uploaded Puppet facts in a loop with only a one-second interval. I spun up 25 container hosts simultaneously and monitored the Passenger status.
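The container script is likewise not part of this report. On this Puppet version, facts are submitted to the master as part of each agent run, so a minimal sketch of such a loop, assuming the container runs a configured Puppet agent, would be:

```bash
#!/bin/bash
# Hypothetical Puppet fact-upload loop (the actual container script is not
# part of this report). Each agent run sends the node's facts to the master
# before catalog compilation; --noop avoids applying any changes.
while true; do
    puppet agent --test --noop
    sleep 1
done
```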
The initial status had very few Puppet workers:

```
-bash-4.2# passenger-status
Version : 4.0.18
Date    : 2017-08-03 16:43:48 -0400
Instance: 11062
----------- General information -----------
Max pool size : 12
Processes     : 12
Requests in top-level queue : 0

----------- Application groups -----------
/usr/share/foreman#default:
  App root: /usr/share/foreman
  Requests in queue: 0
  * PID: 11539   Sessions: 0   Processed: 127   Uptime: 16m 21s   CPU: 2%   Memory: 590M   Last used: 8s ago
  * PID: 12299   Sessions: 0   Processed: 89    Uptime: 5m 29s    CPU: 2%   Memory: 277M   Last used: 5s ago
  * PID: 13319   Sessions: 0   Processed: 45    Uptime: 1m 35s    CPU: 6%   Memory: 275M   Last used: 8s ago
  * PID: 13340   Sessions: 0   Processed: 62    Uptime: 1m 34s    CPU: 8%   Memory: 276M   Last used: 8s ago
  * PID: 13358   Sessions: 0   Processed: 73    Uptime: 1m 33s    CPU: 9%   Memory: 283M   Last used: 5s ago
  * PID: 13379   Sessions: 1   Processed: 3     Uptime: 1m 32s    CPU: 3%   Memory: 263M   Last used: 35s ago
  * PID: 13406   Sessions: 1   Processed: 3     Uptime: 1m 31s    CPU: 3%   Memory: 262M   Last used: 35s ago
  * PID: 13435   Sessions: 0   Processed: 43    Uptime: 1m 29s    CPU: 5%   Memory: 267M   Last used: 6s ago
  * PID: 13468   Sessions: 0   Processed: 53    Uptime: 1m 27s    CPU: 7%   Memory: 266M   Last used: 8s ago
  * PID: 13510   Sessions: 0   Processed: 15    Uptime: 1m 26s    CPU: 4%   Memory: 242M   Last used: 8s ago

/etc/puppet/rack#default:
  App root: /etc/puppet/rack
  Requests in queue: 0
  * PID: 11880   Sessions: 0   Processed: 179   Uptime: 11m 28s   CPU: 0%   Memory: 49M    Last used: 2s ago
  * PID: 13038   Sessions: 0   Processed: 0     Uptime: 2m 19s    CPU: 0%   Memory: 8M     Last used: 2m 19s ago
```

After running for a bit, the number of workers increased; at no point did the queue grow beyond 0:

```
-bash-4.2# passenger-status
Version : 4.0.18
Date    : 2017-08-03 16:55:22 -0400
Instance: 11062
----------- General information -----------
Max pool size : 12
Processes     : 12
Requests in top-level queue : 0

----------- Application groups -----------
/usr/share/foreman#default:
  App root: /usr/share/foreman
  Requests in queue: 0
  * PID: 11539   Sessions: 0   Processed: 216   Uptime: 27m 55s   CPU: 1%   Memory: 653M   Last used: 1m 43s ago
  * PID: 12299   Sessions: 0   Processed: 149   Uptime: 17m 3s    CPU: 2%   Memory: 572M   Last used: 2m 56s ago
  * PID: 13319   Sessions: 0   Processed: 133   Uptime: 13m 9s    CPU: 3%   Memory: 633M   Last used: 1m 44s ago
  * PID: 13340   Sessions: 0   Processed: 123   Uptime: 13m 8s    CPU: 3%   Memory: 583M   Last used: 1m 7s ago
  * PID: 13379   Sessions: 0   Processed: 97    Uptime: 13m 6s    CPU: 2%   Memory: 579M   Last used: 2m 46s ago
  * PID: 13406   Sessions: 0   Processed: 70    Uptime: 13m 5s    CPU: 2%   Memory: 546M   Last used: 1m 44s ago
  * PID: 13468   Sessions: 0   Processed: 132   Uptime: 13m 1s    CPU: 2%   Memory: 518M   Last used: 1m 44s ago

/etc/puppet/rack#default:
  App root: /etc/puppet/rack
  Requests in queue: 0
  * PID: 11880   Sessions: 0   Processed: 998   Uptime: 23m 2s    CPU: 2%   Memory: 52M    Last used: 1s ago
  * PID: 16573   Sessions: 0   Processed: 617   Uptime: 3m 54s    CPU: 8%   Memory: 41M    Last used: 1s ago
  * PID: 16579   Sessions: 0   Processed: 670   Uptime: 3m 54s    CPU: 9%   Memory: 40M    Last used: 10s ago
  * PID: 16587   Sessions: 0   Processed: 610   Uptime: 3m 54s    CPU: 9%   Memory: 38M    Last used: 10s ago
  * PID: 16594   Sessions: 0   Processed: 791   Uptime: 3m 54s    CPU: 10%  Memory: 40M    Last used: 10s ago
```

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:2466