Bug 1454762 - DBDeadlock While using QuotaClient in test_load_balancer_tree_minimal
Summary: DBDeadlock While using QuotaClient in test_load_balancer_tree_minimal
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-octavia
Version: 11.0 (Ocata)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ga
Target Release: 12.0 (Pike)
Assignee: Nir Magnezi
QA Contact: Alexander Stafeyev
URL:
Whiteboard:
Depends On: 1433537 1450631
Blocks: 1451829
 
Reported: 2017-05-23 12:45 UTC by Nir Magnezi
Modified: 2019-09-10 14:09 UTC
CC List: 10 users

Fixed In Version: openstack-octavia-1.0.0-0.20170628055307.3ccd8a3.el7ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-13 21:28:17 UTC
Target Upstream Version:
nmagnezi: needinfo-




Links
System ID: Launchpad 1664807 | Priority: None | Status: None | Summary: None | Last Updated: 2017-05-23 12:45:15 UTC
System ID: Red Hat Product Errata RHEA-2017:3462 | Priority: normal | Status: SHIPPED_LIVE | Summary: Red Hat OpenStack Platform 12.0 Enhancement Advisory | Last Updated: 2018-02-16 01:43:25 UTC

Description Nir Magnezi 2017-05-23 12:45:15 UTC
Description of problem:
=======================
First discovered in bug 1451829, comment #1 (see the DBDeadlock issue there).
I hit exactly the same issue as the one reported in https://bugs.launchpad.net/octavia/+bug/1664807

Proposed Backport: https://review.openstack.org/#/c/467091/


More Details:
(Pdb) lock_session.query(models.Quotas).filter_by(project_id=project_id).with_for_update().first()
*** DBDeadlock: (pymysql.err.InternalError) (1205, u'Lock wait timeout exceeded; try restarting transaction') [SQL: u'SELECT quotas.project_id AS quotas_project_id, quotas.health_monitor AS quotas_health_monitor, quotas.listener AS quotas_listener, quotas.load_balancer AS quotas_load_balancer, quotas.member AS quotas_member, quotas.pool AS quotas_pool, quotas.in_use_health_monitor AS quotas_in_use_health_monitor, quotas.in_use_listener AS quotas_in_use_listener, quotas.in_use_load_balancer AS quotas_in_use_load_balancer, quotas.in_use_member AS quotas_in_use_member, quotas.in_use_pool AS quotas_in_use_pool \nFROM quotas \nWHERE quotas.project_id = %(project_id_1)s \n LIMIT %(param_1)s FOR UPDATE'] [parameters: {u'project_id_1': '33ee71cd44e44ca0b2f8d0189c78d307', u'param_1': 1}]


The code in which this happens: https://github.com/openstack/octavia/blob/stable/ocata/octavia/db/repositories.py#L286-L287
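To make the failure mode easier to follow, below is a minimal sketch of the locking pattern written against plain SQLAlchemy 1.x. It is not the Octavia code or the proposed fix; the model class, connection string, and session setup are hypothetical stand-ins, and it assumes a reachable MySQL/InnoDB database that already contains a quotas row for the project. One open transaction holding the SELECT ... FOR UPDATE row lock is enough to make a second transaction on the same row time out with MySQL error 1205, which oslo.db surfaces as DBDeadlock in the traceback above.

# Minimal sketch of the contention pattern, NOT the Octavia code or the fix.
# Assumptions (hypothetical): a reachable MySQL/InnoDB database, an existing
# 'quotas' row for the project, and placeholder credentials in the DSN.
from sqlalchemy import create_engine, Column, String
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()


class Quotas(Base):
    # Stand-in for octavia.db.models.Quotas; only the primary key matters here.
    __tablename__ = 'quotas'
    project_id = Column(String(36), primary_key=True)


engine = create_engine('mysql+pymysql://user:password@localhost/octavia')  # placeholder DSN
Session = sessionmaker(bind=engine)

project_id = '33ee71cd44e44ca0b2f8d0189c78d307'

s1 = Session()
s2 = Session()

# Transaction 1 takes the FOR UPDATE row lock and stays open (no commit/rollback).
s1.query(Quotas).filter_by(project_id=project_id).with_for_update().first()

# Transaction 2 needs the same row lock. It blocks until innodb_lock_wait_timeout
# expires and then fails with (1205, 'Lock wait timeout exceeded; try restarting
# transaction'), which oslo.db maps to DBDeadlock.
s2.query(Quotas).filter_by(project_id=project_id).with_for_update().first()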


Version-Release number of selected component (if applicable):
=============================================================
OSP11

How reproducible:
================
100%

Steps to Reproduce:
===================
1. Run octavia.tests.tempest.v1.scenario.test_load_balancer_tree_minimal.TestLoadBalancerTreeMinimal.test_load_balancer_tree_minimal

Comment 1 Red Hat Bugzilla Rules Engine 2017-05-23 12:46:16 UTC
This bugzilla has been removed from the release and needs to be reviewed and Triaged for another Target Release.

Comment 2 Red Hat Bugzilla Rules Engine 2017-05-23 12:47:09 UTC
This bugzilla has been removed from the release and needs to be reviewed and Triaged for another Target Release.

Comment 3 Nir Magnezi 2017-06-20 09:18:51 UTC
This fix is blocked by bug 1450631, since we cannot run the tempest scenarios until the gunicorn issue is resolved.

Comment 5 Ihar Hrachyshka 2017-11-07 19:15:21 UTC
It looks like the patch has landed. Is there a job we could look at to prove it fixed the failure and verify the bug?

Comment 7 Nir Magnezi 2017-11-19 09:05:51 UTC
I agree with QE here.
I think the validation should only happen once we finalize the TripleO deployment and, as an outcome, have proper CI for it.

What we currently have is CI with many workarounds that compensate for known deployment issues.

Comment 16 errata-xmlrpc 2017-12-13 21:28:17 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3462

