Description of problem:

Tests in octavia_tempest_plugin.tests.api.v2.test_pool.PoolAPITest.test_UDP_**_pool_create fail with a pool update timeout.

The test runs fail with the error:

Traceback (most recent call last):
  File "/home/stack/plugins/octavia/octavia_tempest_plugin/tests/api/v2/test_pool.py", line 238, in test_UDP_SI_pool_with_listener_create
    algorithm=const.LB_ALGORITHM_SOURCE_IP)
  File "/home/stack/plugins/octavia/octavia_tempest_plugin/tests/api/v2/test_pool.py", line 445, in _test_pool_create
    CONF.load_balancer.build_timeout)
  File "/home/stack/plugins/octavia/octavia_tempest_plugin/tests/waiters.py", line 96, in wait_for_status
    raise exceptions.TimeoutException(message)
tempest.lib.exceptions.TimeoutException: Request timed out
Details: (PoolAPITest:test_UDP_SI_pool_with_listener_create) show_pool operating_status failed to update to ONLINE within the required time 300. Current status of show_pool: OFFLINE

Version-Release number of selected component (if applicable):
16.1 (Train)
13 (Queens)

How reproducible:
100%

Steps to Reproduce:
1. python3 testtools.run octavia_tempest_plugin.tests.api.v2.test_pool.PoolAPITest.test_UDP_**_pool_create
   (** can be LC or RR)

Actual results:

Traceback (most recent call last):
  File "/home/stack/plugins/octavia/octavia_tempest_plugin/tests/api/v2/test_pool.py", line 238, in test_UDP_SI_pool_with_listener_create
    algorithm=const.LB_ALGORITHM_SOURCE_IP)
  File "/home/stack/plugins/octavia/octavia_tempest_plugin/tests/api/v2/test_pool.py", line 445, in _test_pool_create
    CONF.load_balancer.build_timeout)
  File "/home/stack/plugins/octavia/octavia_tempest_plugin/tests/waiters.py", line 96, in wait_for_status
    raise exceptions.TimeoutException(message)
tempest.lib.exceptions.TimeoutException: Request timed out
Details: (PoolAPITest:test_UDP_SI_pool_with_listener_create) show_pool operating_status failed to update to ONLINE within the required time 300. Current status of show_pool: OFFLINE

Expected results:
Successful run

Additional info:
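For context, the TimeoutException above comes from a status poller. The following is a minimal sketch of that polling pattern, not the actual octavia_tempest_plugin waiters.py code; the show_func callable, pool_client, and pool_id names are illustrative assumptions only:

    # Minimal sketch of the wait_for_status polling pattern (illustrative only).
    # It repeatedly calls a "show" function until the requested field reaches
    # the expected value or the build timeout (300 s in this bug) expires.
    import time


    def wait_for_status(show_func, field, expected, timeout=300, interval=5):
        """Poll show_func() until resource[field] == expected or timeout expires."""
        start = time.time()
        while True:
            resource = show_func()  # e.g. returns the pool as a dict
            if resource.get(field) == expected:
                return resource
            if time.time() - start >= timeout:
                raise TimeoutError(
                    "%s failed to update to %s within the required time %s. "
                    "Current status: %s"
                    % (field, expected, timeout, resource.get(field)))
            time.sleep(interval)


    # Hypothetical usage, mirroring the failing check in the test:
    # wait_for_status(lambda: pool_client.show_pool(pool_id),
    #                 'operating_status', 'ONLINE', timeout=300)

In the failing runs the pool's operating_status never leaves OFFLINE, so the poller exhausts the 300-second timeout and raises.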
*** This bug has been marked as a duplicate of bug 1907788 ***
This was closed by mistake; the issue still affects 16.1.
*** Bug 1961597 has been marked as a duplicate of this bug. ***
Greg's comment from https://bugzilla.redhat.com/show_bug.cgi?id=1961597#c1 :

Pool deletion timed out because of a loss of connectivity between the Octavia services and the amphora:

2021-05-09 13:01:25.590 31 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-b97ab21c-dd6e-4484-8b2d-e502b9073c43 - 62a072e951cb40c78122401fe79b9668 - - -] Could not connect to instance. Retrying.: requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='172.24.1.139', port=9443): Read timed out. (read timeout=60.0)
2021-05-09 13:01:40.615 31 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-b97ab21c-dd6e-4484-8b2d-e502b9073c43 - 62a072e951cb40c78122401fe79b9668 - - -] Could not connect to instance. Retrying.: requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='172.24.1.139', port=9443): Read timed out. (read timeout=10.0)
2021-05-09 13:01:55.634 31 WARNING octavia.amphorae.drivers.haproxy.rest_api_driver [req-b97ab21c-dd6e-4484-8b2d-e502b9073c43 - 62a072e951cb40c78122401fe79b9668 - - -] Could not connect to instance. Retrying.: requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='172.24.1.139', port=9443): Read timed out. (read timeout=10.0)
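To check for that lost connectivity independently of the Octavia driver, a simple probe of the amphora agent port can help. The sketch below is only an assumption-based diagnostic (the host 172.24.1.139 and port 9443 are taken from the log above; it only tests whether a TCP connection can be opened, it does not speak the agent's REST API):

    # Minimal connectivity probe to the amphora agent port, mirroring the
    # retry-on-timeout behaviour seen in the worker log above.
    import socket


    def probe_amphora(host="172.24.1.139", port=9443, attempts=3, timeout=10.0):
        for attempt in range(1, attempts + 1):
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    print("attempt %d: TCP connection to %s:%d succeeded"
                          % (attempt, host, port))
                    return True
            except OSError as exc:  # covers socket.timeout and refused/unreachable
                print("attempt %d: could not connect to %s:%d: %s"
                      % (attempt, host, port, exc))
        return False


    if __name__ == "__main__":
        probe_amphora()

If this probe also fails from the controller node, the problem is in the lb-mgmt network path to the amphora rather than in the tempest tests themselves.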
I believe the issue is related to BZ 1975790. Fixing BZ 1975790 would fix the CI jobs.
Note: This BZ can be set as VERIFIED when BZ 1975790 is VERIFIED
Now ON_QA, as the "Depends On" BZ is ON_QA.
Bug cannot be verified until 16.1 z7 puddle is available.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Red Hat OpenStack Platform 16.1.9 bug fix and enhancement advisory), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2022:8795