Bug 1419985 - synchronous import_pools occurs during host unregistration, leading to db locks
Summary: synchronous import_pools occurs during host unregistration, leading to db locks
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Satellite
Classification: Red Hat
Component: Registration
Version: 6.2.7
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: Unspecified
Assignee: satellite6-bugs
QA Contact: jcallaha
URL:
Whiteboard:
Duplicates: 1384086
Depends On:
Blocks: 1122832 1418455 1426395
 
Reported: 2017-02-07 14:55 UTC by Chris Duryee
Modified: 2021-03-11 14:56 UTC (History)
CC List: 12 users

Fixed In Version: tfm-rubygem-katello-3.0.0.115-1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-05-01 13:59:32 UTC
Target Upstream Version:
Embargoed:


Attachments
hotfix patch (1.35 KB, application/mbox)
2017-03-28 21:19 UTC, Mike McCune


Links
System ID Private Priority Status Summary Last Updated
Red Hat Knowledge Base (Solution) 2858931 0 None None None 2018-12-06 20:45:54 UTC
Red Hat Product Errata RHBA-2017:1191 0 normal SHIPPED_LIVE Satellite 6.2.9 Async Bug Release 2017-05-01 17:49:42 UTC

Description Chris Duryee 2017-02-07 14:55:35 UTC
Description of problem:

(this is similar to https://bugzilla.redhat.com/show_bug.cgi?id=1414159)

*** do not clone this, issue is already fixed upstream via refactor ***

During host unregistration, a synchronous pool import occurs. This can take some time, which can lead to DB lock contention like the following:


2017-01-17 00:11:20 UTC STATEMENT:  UPDATE "katello_pools" SET "consumed" = $1, "updated_at" = $2 WHERE "katello_pools"."id" = 5
2017-01-17 00:11:23 UTC LOG:  process 44158 acquired ShareLock on transaction 76129721 after 4032.569 ms
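
To see which statements are actually waiting on locks, rather than just an aggregate count, pg_locks can be joined to pg_stat_activity. This is a hypothetical diagnostic, not part of the original report; it assumes PostgreSQL 9.2 or later, where pg_stat_activity exposes pid, state and query:

# Hypothetical diagnostic (not from this report): list ungranted locks together
# with the statement that is waiting on them. Assumes PostgreSQL >= 9.2 column names.
su - postgres -c 'psql -c "
SELECT l.pid, l.locktype, l.mode, a.state, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;"'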

Version-Release number of selected component (if applicable): 6.2.7


Steps to Reproduce:
1. Perform a large number of host registrations, check-ins, and unregistrations at once (perhaps one registration per 3 seconds, one unregistration per 5 seconds, and one check-in per second; see the sketch below)
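
A minimal load-generation sketch along those lines (not from the original report). The organization and activation key are placeholders, and in practice many clients would run this in parallel to reach the aggregate rates above:

# Hypothetical reproducer sketch; ORG and KEY are placeholders. Run many copies
# in parallel so that the aggregate load approaches the rates described above.
ORG="ExampleOrg"
KEY="example-key"
while true; do
  subscription-manager register --org="$ORG" --activationkey="$KEY" --force
  sleep 3
  subscription-manager refresh      # check-in
  sleep 1
  subscription-manager unregister
  sleep 5
done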

Actual results: lock appears in sql log, passenger queue starts to grow


Expected results: everything runs smoothly, system can keep up with load

note: this is fixed upstream via https://github.com/Katello/katello/commit/915993272f93da738d94aadddf850c3011d95e0c#diff-e21677c417bc37177525f3d926263212.

Comment 4 Justin Sherrill 2017-03-07 16:52:36 UTC
*** Bug 1384086 has been marked as a duplicate of this bug. ***

Comment 8 Mike McCune 2017-03-28 21:19:11 UTC
** 6.2.8 Hotfix Instructions **

1) Copy 1419985.patch to your Satellite server, into /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.0.105

2) cd /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.0.105

3) patch -p1 < 1419985.patch

4) katello-service restart
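
The same steps as a single shell sequence, assuming 1419985.patch has already been copied to the server; the --dry-run pass is an added precaution, not part of the instructions above:

# Consolidated sketch of steps 1-4 above. The --dry-run check is an addition.
cd /opt/theforeman/tfm/root/usr/share/gems/gems/katello-3.0.0.105
patch -p1 --dry-run < 1419985.patch   # confirm the patch applies cleanly first
patch -p1 < 1419985.patch
katello-service restart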

After applying the patch, to be able to register new systems:

* Remove any offending hosts that failed to register from the Hosts -> All Hosts page in the UI: select the host and remove it (a CLI alternative is sketched below).

* Re-register any systems that previously failed to register.
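
If you prefer the CLI to the UI, the host cleanup could also be scripted with hammer. A minimal sketch; the host names are placeholders and it assumes hammer is already configured against the Satellite:

# Hypothetical cleanup via hammer instead of the UI; host names are placeholders.
for host in client01.example.com client02.example.com; do
  hammer host delete --name "$host"
done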

Comment 9 Mike McCune 2017-03-28 21:19:52 UTC
Created attachment 1267159 [details]
hotfix patch

Comment 10 jcallaha 2017-04-10 20:18:53 UTC
Verified in Satellite 6.2.9 Snap 2.

I spun up 100 docker-based content hosts that registered to the Satellite. Each host then ran these commands in an endless loop:
subscription-manager refresh
subscription-manager unregister
subscription-manager register <info>

While all 100 hosts were running their loops, I queried the postgres locks as well as the passenger status (see the polling sketch below). At no point did either grow out of control or stop being processed. After killing all the hosts, the lock counts and Passenger queues quickly returned to normal levels.

 
for i in {1..100}; do docker run -d -h test$i -e "SATHOST=ibm-x3550m3-06.lab.eng.brq.redhat.com" -e "AK=7key" ch-d:test1419985; done;
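
The lock counts and Passenger snapshots below were captured by repeated manual queries; a simple polling loop such as the following could gather the same data. This is a sketch, not the verbatim commands used during verification:

# Hypothetical monitoring loop: sample the PostgreSQL lock count and the
# Passenger request queues every 5 seconds while the load is running.
while true; do
  su - postgres -c 'psql -c "SELECT count(*) FROM pg_locks;"'
  passenger-status | grep -E "Requests in (top-level )?queue"
  sleep 5
done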

+++++ While Running +++++

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
    13
(1 row)

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
   140
(1 row)

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
   205
(1 row)

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
   249
(1 row)

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
   593
(1 row)

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
   292
(1 row)

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
    10
(1 row)

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
   247
(1 row)

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
    17
(1 row)

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
     2
(1 row)

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
   151
(1 row)

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
   825
(1 row)

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
     2
(1 row)

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
   163
(1 row)

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
   176
(1 row)

postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
    14
(1 row)


-bash-4.1# passenger-status
Version : 4.0.18
Date    : Mon Apr 10 21:48:20 +0200 2017
Instance: 3835
----------- General information -----------
Max pool size : 12
Processes     : 12
Requests in top-level queue : 0

----------- Application groups -----------
/usr/share/foreman#default:
  App root: /usr/share/foreman
  Requests in queue: 90
  * PID: 31308   Sessions: 1       Processed: 490     Uptime: 31m 44s
    CPU: 3%      Memory  : 659M    Last used: 16s ago
  * PID: 32433   Sessions: 1       Processed: 547     Uptime: 12m 51s
    CPU: 9%      Memory  : 297M    Last used: 15s ago
  * PID: 32470   Sessions: 1       Processed: 294     Uptime: 12m 50s
    CPU: 6%      Memory  : 293M    Last used: 12s ago
  * PID: 32505   Sessions: 1       Processed: 219     Uptime: 12m 50s
    CPU: 4%      Memory  : 288M    Last used: 7s ago
  * PID: 32554   Sessions: 1       Processed: 312     Uptime: 12m 49s
    CPU: 5%      Memory  : 289M    Last used: 12s ago
  * PID: 32634   Sessions: 1       Processed: 260     Uptime: 12m 49s
    CPU: 4%      Memory  : 284M    Last used: 2s ago
  * PID: 32728   Sessions: 1       Processed: 231     Uptime: 12m 48s
    CPU: 5%      Memory  : 535M    Last used: 11s ago
  * PID: 474     Sessions: 1       Processed: 519     Uptime: 12m 47s
    CPU: 8%      Memory  : 291M    Last used: 16s ago
  * PID: 560     Sessions: 1       Processed: 299     Uptime: 12m 46s
    CPU: 5%      Memory  : 520M    Last used: 8s ago
  * PID: 636     Sessions: 1       Processed: 192     Uptime: 12m 46s
    CPU: 5%      Memory  : 280M    Last used: 4s ago
  * PID: 723     Sessions: 1       Processed: 402     Uptime: 12m 45s
    CPU: 8%      Memory  : 291M    Last used: 11s ago

/etc/puppet/rack#default:
  App root: /etc/puppet/rack
  Requests in queue: 0
  * PID: 4978    Sessions: 0       Processed: 44      Uptime: 4h 8m 3s
    CPU: 0%      Memory  : 86M     Last used: 7m 28s ago

-bash-4.1# passenger-status
Version : 4.0.18
Date    : Mon Apr 10 21:48:25 +0200 2017
Instance: 3835
----------- General information -----------
Max pool size : 12
Processes     : 12
Requests in top-level queue : 0

----------- Application groups -----------
/usr/share/foreman#default:
  App root: /usr/share/foreman
  Requests in queue: 90
  * PID: 31308   Sessions: 1       Processed: 490     Uptime: 31m 49s
    CPU: 3%      Memory  : 659M    Last used: 21s ago
  * PID: 32433   Sessions: 1       Processed: 547     Uptime: 12m 56s
    CPU: 9%      Memory  : 297M    Last used: 20s ago
  * PID: 32470   Sessions: 1       Processed: 294     Uptime: 12m 55s
    CPU: 6%      Memory  : 293M    Last used: 17s ago
  * PID: 32505   Sessions: 1       Processed: 219     Uptime: 12m 55s
    CPU: 4%      Memory  : 288M    Last used: 12s ago
  * PID: 32554   Sessions: 1       Processed: 312     Uptime: 12m 54s
    CPU: 5%      Memory  : 289M    Last used: 17s ago
  * PID: 32634   Sessions: 1       Processed: 260     Uptime: 12m 54s
    CPU: 4%      Memory  : 284M    Last used: 7s ago
  * PID: 32728   Sessions: 1       Processed: 231     Uptime: 12m 53s
    CPU: 5%      Memory  : 535M    Last used: 16s ago
  * PID: 474     Sessions: 1       Processed: 519     Uptime: 12m 52s
    CPU: 8%      Memory  : 291M    Last used: 21s ago
  * PID: 560     Sessions: 1       Processed: 299     Uptime: 12m 51s
    CPU: 5%      Memory  : 520M    Last used: 13s ago
  * PID: 636     Sessions: 1       Processed: 192     Uptime: 12m 51s
    CPU: 5%      Memory  : 280M    Last used: 9s ago
  * PID: 723     Sessions: 1       Processed: 402     Uptime: 12m 50s
    CPU: 8%      Memory  : 291M    Last used: 16s ago

/etc/puppet/rack#default:
  App root: /etc/puppet/rack
  Requests in queue: 0
  * PID: 4978    Sessions: 0       Processed: 44      Uptime: 4h 8m 8s
    CPU: 0%      Memory  : 86M     Last used: 7m 33s ago

-bash-4.1# passenger-status
Version : 4.0.18
Date    : Mon Apr 10 21:48:32 +0200 2017
Instance: 3835
----------- General information -----------
Max pool size : 12
Processes     : 12
Requests in top-level queue : 0

----------- Application groups -----------
/usr/share/foreman#default:
  App root: /usr/share/foreman
  Requests in queue: 87
  * PID: 31308   Sessions: 1       Processed: 491     Uptime: 31m 56s
    CPU: 3%      Memory  : 659M    Last used: 6s ago
  * PID: 32433   Sessions: 1       Processed: 548     Uptime: 13m 3s
    CPU: 9%      Memory  : 297M    Last used: 6s ago
  * PID: 32470   Sessions: 1       Processed: 294     Uptime: 13m 2s
    CPU: 6%      Memory  : 293M    Last used: 24s ago
  * PID: 32505   Sessions: 1       Processed: 219     Uptime: 13m 2s
    CPU: 4%      Memory  : 288M    Last used: 19s ago
  * PID: 32554   Sessions: 1       Processed: 314     Uptime: 13m 1s
    CPU: 5%      Memory  : 289M    Last used: 0s ago
  * PID: 32634   Sessions: 1       Processed: 260     Uptime: 13m 1s
    CPU: 4%      Memory  : 284M    Last used: 14s ago
  * PID: 32728   Sessions: 1       Processed: 231     Uptime: 13m 0s
    CPU: 5%      Memory  : 535M    Last used: 23s ago
  * PID: 474     Sessions: 1       Processed: 520     Uptime: 12m 59s
    CPU: 8%      Memory  : 291M    Last used: 7s ago
  * PID: 560     Sessions: 1       Processed: 302     Uptime: 12m 58s
    CPU: 5%      Memory  : 520M    Last used: 2s ago
  * PID: 636     Sessions: 1       Processed: 193     Uptime: 12m 58s
    CPU: 5%      Memory  : 280M    Last used: 6s ago
  * PID: 723     Sessions: 1       Processed: 405     Uptime: 12m 57s
    CPU: 8%      Memory  : 291M    Last used: 1s ago

/etc/puppet/rack#default:
  App root: /etc/puppet/rack
  Requests in queue: 0
  * PID: 4978    Sessions: 0       Processed: 44      Uptime: 4h 8m 15s
    CPU: 0%      Memory  : 86M     Last used: 7m 40s ago


+++++ After Killing +++++


postgres=# SELECT count(*) FROM pg_locks;
 count 
-------
     2
(1 row)


-bash-4.1# passenger-status
Version : 4.0.18
Date    : Mon Apr 10 21:50:33 +0200 2017
Instance: 3835
----------- General information -----------
Max pool size : 12
Processes     : 12
Requests in top-level queue : 0

----------- Application groups -----------
/usr/share/foreman#default:
  App root: /usr/share/foreman
  Requests in queue: 0
  * PID: 31308   Sessions: 0       Processed: 513     Uptime: 33m 57s
    CPU: 3%      Memory  : 659M    Last used: 1m 42s ago
  * PID: 32433   Sessions: 0       Processed: 556     Uptime: 15m 4s
    CPU: 7%      Memory  : 297M    Last used: 1m 47s ago
  * PID: 32470   Sessions: 0       Processed: 328     Uptime: 15m 3s
    CPU: 6%      Memory  : 292M    Last used: 1m 25s ago
  * PID: 32505   Sessions: 0       Processed: 230     Uptime: 15m 3s
    CPU: 3%      Memory  : 288M    Last used: 1m 6s ago
  * PID: 32554   Sessions: 0       Processed: 333     Uptime: 15m 2s
    CPU: 5%      Memory  : 299M    Last used: 1m 44s ago
  * PID: 32634   Sessions: 0       Processed: 360     Uptime: 15m 2s
    CPU: 7%      Memory  : 289M    Last used: 1m 7s ago
  * PID: 32728   Sessions: 0       Processed: 249     Uptime: 15m 1s
    CPU: 4%      Memory  : 535M    Last used: 1m 52s ago
  * PID: 474     Sessions: 0       Processed: 521     Uptime: 15m 0s
    CPU: 7%      Memory  : 291M    Last used: 1m 42s ago
  * PID: 560     Sessions: 0       Processed: 360     Uptime: 14m 59s
    CPU: 6%      Memory  : 522M    Last used: 1m 7s ago
  * PID: 636     Sessions: 0       Processed: 227     Uptime: 14m 59s
    CPU: 5%      Memory  : 282M    Last used: 1m 42s ago
  * PID: 723     Sessions: 0       Processed: 405     Uptime: 14m 58s
    CPU: 7%      Memory  : 291M    Last used: 2m 2s ago

/etc/puppet/rack#default:
  App root: /etc/puppet/rack
  Requests in queue: 0
  * PID: 4978    Sessions: 0       Processed: 44      Uptime: 4h 10m 16s
    CPU: 0%      Memory  : 86M     Last used: 9m 41s ago

Comment 12 errata-xmlrpc 2017-05-01 13:59:32 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:1191

