Bug 1419985
**Summary:** synchronous import_pools occurs during host unregistration, leading to db locks

| Field | Value |
|---|---|
| Product | Red Hat Satellite |
| Component | Registration |
| Version | 6.2.7 |
| Status | CLOSED ERRATA |
| Severity | medium |
| Priority | medium |
| Reporter | Chris Duryee <cduryee> |
| Assignee | satellite6-bugs <satellite6-bugs> |
| QA Contact | jcallaha |
| CC | bbuckingham, bkearney, brubisch, dgross, egolov, jcallaha, jsherril, mmccune, oshtaier, peter.vreman, rvdwees, zhunting |
| Keywords | Triaged |
| Target Milestone | Unspecified |
| Target Release | Unused |
| Hardware | Unspecified |
| OS | Unspecified |
| Fixed In Version | tfm-rubygem-katello-3.0.0.115-1 |
| Doc Type | If docs needed, set a value |
| Type | Bug |
| Last Closed | 2017-05-01 13:59:32 UTC |
| Bug Blocks | 1122832, 1418455, 1426395 |
**Description** (Chris Duryee, 2017-02-07 14:55:35 UTC)
*** Bug 1384086 has been marked as a duplicate of this bug. ***

**6.2.8 Hotfix Instructions**

1. Copy `1419985.patch` to your Satellite server, into `/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.0.105`
2. `cd /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.0.105`
3. `patch -p1 < 1419985.patch`
4. `katello-service restart`

At that point, to be able to register new systems:

* Remove any offending hosts that fail to register from the Hosts -> All Hosts UI. Select the host and remove it.
* Re-register any systems that previously failed to register.

Created attachment 1267159 [details]
hotfix patch
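For convenience, the four hotfix steps above can be collected into a single script. This is a sketch only: it assumes `1419985.patch` has already been copied into the katello gem directory, and it bails out harmlessly when run on a machine that is not a Satellite 6.2.8 host.

```shell
#!/bin/bash
# Sketch of the 6.2.8 hotfix steps above. Assumes 1419985.patch was already
# copied into the katello gem directory (step 1 of the instructions).
set -u
KATELLO_DIR=/opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.0.105

if [ ! -d "$KATELLO_DIR" ]; then
  # Not a Satellite 6.2.8 box; nothing to patch.
  echo "katello dir not found: $KATELLO_DIR"
  exit 0
fi

cd "$KATELLO_DIR"
patch -p1 < 1419985.patch   # apply the hotfix in place
katello-service restart     # restart services so the patched code is loaded
```

Running it a second time would make `patch` complain about an already-applied patch, so treat it as one-shot.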
Verified in Satellite 6.2.9 Snap 2. I spun up 100 docker-based content hosts registered to the Satellite; each host then ran the following commands in an endless loop:

```
subscription-manager refresh
subscription-manager unregister
subscription-manager register <info>
```

While all 100 hosts were running their loops, I queried the postgres locks as well as the passenger status. At no point did either grow out of control or stop being processed. After killing all the hosts, the locks and passenger queues quickly returned to normal levels.

The hosts were started with:

```
for i in {1..100}; do
  docker run -d -h test$i -e "SATHOST=ibm-x3550m3-06.lab.eng.brq.redhat.com" -e "AK=7key" ch-d:test1419985
done
```

**While running**

Repeated samples of `SELECT count(*) FROM pg_locks;` returned, in order: 13, 140, 205, 249, 593, 292, 10, 247, 17, 2, 151, 825, 2, 163, 176, 14.

`passenger-status` was sampled three times (21:48:20, 21:48:25, 21:48:32); all three were essentially identical, with 87-90 requests queued for the foreman app group. The first snapshot:

```
-bash-4.1# passenger-status
Version : 4.0.18
Date    : Mon Apr 10 21:48:20 +0200 2017
Instance: 3835
----------- General information -----------
Max pool size : 12
Processes     : 12
Requests in top-level queue : 0

----------- Application groups -----------
/usr/share/foreman#default:
  App root: /usr/share/foreman
  Requests in queue: 90
  * PID: 31308   Sessions: 1   Processed: 490   Uptime: 31m 44s   CPU: 3%   Memory: 659M   Last used: 16s ago
  * PID: 32433   Sessions: 1   Processed: 547   Uptime: 12m 51s   CPU: 9%   Memory: 297M   Last used: 15s ago
  * PID: 32470   Sessions: 1   Processed: 294   Uptime: 12m 50s   CPU: 6%   Memory: 293M   Last used: 12s ago
  * PID: 32505   Sessions: 1   Processed: 219   Uptime: 12m 50s   CPU: 4%   Memory: 288M   Last used: 7s ago
  * PID: 32554   Sessions: 1   Processed: 312   Uptime: 12m 49s   CPU: 5%   Memory: 289M   Last used: 12s ago
  * PID: 32634   Sessions: 1   Processed: 260   Uptime: 12m 49s   CPU: 4%   Memory: 284M   Last used: 2s ago
  * PID: 32728   Sessions: 1   Processed: 231   Uptime: 12m 48s   CPU: 5%   Memory: 535M   Last used: 11s ago
  * PID: 474     Sessions: 1   Processed: 519   Uptime: 12m 47s   CPU: 8%   Memory: 291M   Last used: 16s ago
  * PID: 560     Sessions: 1   Processed: 299   Uptime: 12m 46s   CPU: 5%   Memory: 520M   Last used: 8s ago
  * PID: 636     Sessions: 1   Processed: 192   Uptime: 12m 46s   CPU: 5%   Memory: 280M   Last used: 4s ago
  * PID: 723     Sessions: 1   Processed: 402   Uptime: 12m 45s   CPU: 8%   Memory: 291M   Last used: 11s ago

/etc/puppet/rack#default:
  App root: /etc/puppet/rack
  Requests in queue: 0
  * PID: 4978    Sessions: 0   Processed: 44    Uptime: 4h 8m 3s   CPU: 0%   Memory: 86M   Last used: 7m 28s ago
```

**After killing**

```
postgres=# SELECT count(*) FROM pg_locks;
 count
-------
     2
(1 row)
```

The `passenger-status` snapshot at 21:50:33 showed Requests in queue: 0 for both app groups, with all 12 processes idle (Sessions: 0) and last used one to two minutes earlier.

Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1191