Description of problem:
=======================
First discovered in bug 1451829 comment #1 (see the DBDeadlock issue). I hit the exact same issue as shown in https://bugs.launchpad.net/octavia/+bug/1664807

Proposed backport: https://review.openstack.org/#/c/467091/

More details:

(Pdb) lock_session.query(models.Quotas).filter_by(project_id=project_id).with_for_update().first()
*** DBDeadlock: (pymysql.err.InternalError) (1205, u'Lock wait timeout exceeded; try restarting transaction')
[SQL: u'SELECT quotas.project_id AS quotas_project_id, quotas.health_monitor AS quotas_health_monitor, quotas.listener AS quotas_listener, quotas.load_balancer AS quotas_load_balancer, quotas.member AS quotas_member, quotas.pool AS quotas_pool, quotas.in_use_health_monitor AS quotas_in_use_health_monitor, quotas.in_use_listener AS quotas_in_use_listener, quotas.in_use_load_balancer AS quotas_in_use_load_balancer, quotas.in_use_member AS quotas_in_use_member, quotas.in_use_pool AS quotas_in_use_pool
FROM quotas
WHERE quotas.project_id = %(project_id_1)s
 LIMIT %(param_1)s FOR UPDATE']
[parameters: {u'project_id_1': '33ee71cd44e44ca0b2f8d0189c78d307', u'param_1': 1}]

The code in which this happens:
https://github.com/openstack/octavia/blob/stable/ocata/octavia/db/repositories.py#L286-L287

Version-Release number of selected component (if applicable):
=============================================================
OSP11

How reproducible:
=================
100%

Steps to Reproduce:
===================
1. Run octavia.tests.tempest.v1.scenario.test_load_balancer_tree_minimal.TestLoadBalancerTreeMinimal.test_load_balancer_tree_minimal
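The failure above is a `SELECT ... FOR UPDATE` on the quotas row timing out because another transaction already holds the row lock. A common mitigation for this class of error (and the approach oslo.db takes with its retry helpers) is to retry the whole transaction when a deadlock is detected. Below is a minimal, hypothetical sketch of that pattern; the `DBDeadlock` class here is a stand-in for `oslo_db.exception.DBDeadlock`, and `retry_on_deadlock` is an illustrative decorator, not Octavia's actual code.

```python
import time


class DBDeadlock(Exception):
    """Stand-in for oslo_db.exception.DBDeadlock (illustrative only)."""


def retry_on_deadlock(max_retries=3, delay=0.0):
    """Retry the wrapped callable when it raises DBDeadlock.

    Re-raises after max_retries failed attempts. In a real deployment
    the body of the wrapped function should be a complete transaction,
    so a retry starts from a clean state.
    """
    def decorator(func):
        def wrapper(*args, **kwargs):
            attempt = 0
            while True:
                try:
                    return func(*args, **kwargs)
                except DBDeadlock:
                    attempt += 1
                    if attempt > max_retries:
                        raise
                    time.sleep(delay)  # back off before retrying
        return wrapper
    return decorator


# Usage sketch: a quota update that deadlocks twice, then succeeds.
calls = {"n": 0}


@retry_on_deadlock(max_retries=3)
def update_quota():
    calls["n"] += 1
    if calls["n"] < 3:
        raise DBDeadlock("Lock wait timeout exceeded")
    return "ok"


print(update_quota())  # succeeds on the third attempt
```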
This bugzilla has been removed from the release and needs to be reviewed and triaged for another target release.
This fix is blocked by bug 1450631, since we cannot run tempest scenarios without the gunicorn issue resolved.
It looks like the patch has landed. Is there a CI job we could look at to confirm it fixed the failure so we can verify the bug?
I agree with QE here. I think the validation should only happen once we finalize the TripleO deployment and, as an outcome, have proper CI for it. What we currently have is CI with many workarounds to compensate for known deployment issues.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:3462