Description of problem:
Deadlocks occurred on the katello_pools table when registering content hosts (45 hosts, max 5 in parallel).
Version-Release number of selected component (if applicable):
satellite-6.2.0-9.0.beta.el7sat.noarch
How reproducible:
rarely
Steps to Reproduce:
1. Have a Satellite with the RHEL7 and Satellite Tools repos synced as custom repos, and RHEL7 synced as a Red Hat repo (not sure if this is important)
2. Attempt to register 45 systems (max 5 in parallel) a few times:
# subscription-manager register --org Default_Organization --environment Library --username admin --password changeme --force
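The parallel registration in step 2 could be driven by a script along these lines (a hypothetical sketch, not the reproducer actually used: the ssh transport, the host names, and the helper names are assumptions):

```python
import shlex
import subprocess
from concurrent.futures import ThreadPoolExecutor

# The register command from step 2 of the reproducer.
REGISTER_CMD = (
    "subscription-manager register --org Default_Organization "
    "--environment Library --username admin --password changeme --force"
)

def register(host, cmd=REGISTER_CMD, runner=subprocess.run):
    # Hypothetically run the command on the client over ssh; `runner` is
    # injectable so the sketch can be exercised without real clients.
    return runner(["ssh", host] + shlex.split(cmd))

def register_all(hosts, max_parallel=5, runner=subprocess.run):
    # A bounded thread pool caps concurrency at `max_parallel`,
    # matching the "max 5 in parallel" condition of the reproducer.
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        return list(pool.map(lambda h: register(h, runner=runner), hosts))
```

Calling `register_all(["host1", ..., "host45"])` then mirrors the reproduction scenario.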
Actual results:
rc: 70
stdout: "Registering to: capsule62.example.com:8443/rhsm"
stderr: "Task 90dbccbf-a57b-4bb5-bea7-146b7b9c1ce8: ActiveRecord::StatementInvalid: PG::Error: ERROR: deadlock detected\nDETAIL: Process 9939 waits for ShareLock on transaction 2727946; blocked by process 648.\nProcess 648 waits for ExclusiveLock on tuple (0,33) of relation 19660 of database 17944; blocked by process 662.\nProcess 662 waits for ShareLock on transaction 2727938; blocked by process 9939.\nHINT: See server log for query details.\n: UPDATE \"katello_pools\" SET \"consumed\" = $1, \"updated_at\" = $2 WHERE \"katello_pools\".\"id\" = 3"
Expected results:
There should be no DB deadlock, as it indicates a logical issue in the application that might become more obvious/problematic at a bigger scale.
Additional info:
LOG: database system is ready to accept connections
ERROR: deadlock detected
DETAIL: Process 9939 waits for ShareLock on transaction 2727946; blocked by process 648.
Process 648 waits for ExclusiveLock on tuple (0,33) of relation 19660 of database 17944; blocked by process 662.
Process 662 waits for ShareLock on transaction 2727938; blocked by process 9939.
Process 9939: UPDATE "katello_pools" SET "consumed" = $1, "updated_at" = $2 WHERE "katello_pools"."id" = 3
Process 648: UPDATE "katello_pools" SET "consumed" = $1, "updated_at" = $2 WHERE "katello_pools"."id" = 2
Process 662: UPDATE "katello_pools" SET "consumed" = $1, "updated_at" = $2 WHERE "katello_pools"."id" = 2
HINT: See server log for query details.
STATEMENT: UPDATE "katello_pools" SET "consumed" = $1, "updated_at" = $2 WHERE "katello_pools"."id" = 3
ERROR: deadlock detected
DETAIL: Process 648 waits for ShareLock on transaction 2727950; blocked by process 662.
Process 662 waits for ShareLock on transaction 2727946; blocked by process 648.
Process 648: UPDATE "katello_pools" SET "consumed" = $1, "updated_at" = $2 WHERE "katello_pools"."id" = 2
Process 662: UPDATE "katello_pools" SET "consumed" = $1, "updated_at" = $2 WHERE "katello_pools"."id" = 3
HINT: See server log for query details.
STATEMENT: UPDATE "katello_pools" SET "consumed" = $1, "updated_at" = $2 WHERE "katello_pools"."id" = 2
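The two reports above show the classic lock-ordering cycle: concurrent transactions update katello_pools rows 2 and 3 in opposite orders, each holding one row lock while waiting for the other. A minimal sketch of the pattern and the standard fix, acquiring locks in a consistent order (the SQL analogue is a SELECT ... FOR UPDATE ORDER BY id before the UPDATEs), using Python threads as stand-ins for the PostgreSQL backends:

```python
import threading

# Toy stand-ins for the katello_pools rows seen in the log (ids 2 and 3).
pools = {2: {"consumed": 0}, 3: {"consumed": 0}}
locks = {pid: threading.Lock() for pid in pools}

def consume(pool_ids):
    # Sorting the ids gives every "transaction" the same lock order,
    # so no cycle of waiters can form regardless of the update order
    # each caller requests.
    ordered = sorted(pool_ids)
    for pid in ordered:
        locks[pid].acquire()
    try:
        for pid in pool_ids:
            pools[pid]["consumed"] += 1
    finally:
        for pid in reversed(ordered):
            locks[pid].release()

# Opposite requested orders, as in the deadlock report; with consistent
# lock ordering both threads complete instead of deadlocking.
threads = [
    threading.Thread(target=consume, args=([2, 3],)),
    threading.Thread(target=consume, args=([3, 2],)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

This is only an illustration of the locking principle; how Katello actually serializes its pool counter updates is not shown in this report.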
Jan,
Do you still run into this from time to time? We fixed a registration bug a while back (in a later 6.2 snapshot) where a task was being run synchronously instead of asynchronously; I am wondering if that also alleviated this issue.
Hello. Using satellite-6.2.1-1.2.el7sat.noarch I have not noticed this issue. I tried to register 250 clients (max 10 in parallel) and then 248 clients (max 25 in parallel), and both scenarios passed without a deadlock (or any other error) in /var/lib/pgsql/data/pg_log/postgresql-*.log.