+++ This bug was initially created as a clone of Bug #1975790 +++

Description of problem:

When using a load balancer in ACTIVE_STANDBY topology, the haproxy instance that runs in the amphora is prone to memory allocation errors: each haproxy worker consumes a lot of memory, and multiple workers are running at the same time after a configuration update.

The visible side effect in the octavia worker is exceptions thrown when calling the amphora-agent:

ERROR oslo_messaging.rpc.server octavia.amphorae.drivers.haproxy.exceptions.InternalServerError: Internal Server Error
ERROR octavia.amphorae.drivers.haproxy.exceptions [XXX - XXX - - -] Amphora agent returned unexpected result code 500 with response {'message': 'Error reloading haproxy', 'details': 'Redirecting to /bin/systemctl reload haproxy-XXX.service\nJob for haproxy-XXX.service canceled.\n'}

Version-Release number of selected component (if applicable):
16.1

How reproducible:
100%

Steps to Reproduce:
1. Create a LB in ACTIVE_STANDBY topology
2. Create a listener and a pool
3. Create many members (see the CLI sketch below)

Actual results:
After each new member creation, there is one more haproxy process that should have been cleaned up; memory consumption can be significant depending on the listener configuration.

Expected results:
A configuration change should not reduce the available memory in the amphora.

Additional info:
A detailed report can be found in the upstream story:
https://storyboard.openstack.org/#!/story/2009005
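For reference, a minimal reproduction sketch using the standard OpenStack client. This is not taken from the original report: the subnet ID and member addresses are illustrative placeholders, and it assumes the deployment is already configured for active/standby amphorae (e.g. octavia.conf sets loadbalancer_topology = ACTIVE_STANDBY under [controller_worker]).

# Placeholders: replace <subnet-id> with a real subnet; member
# addresses below (TEST-NET-1 range) are illustrative only.
openstack loadbalancer create --name lb1 --vip-subnet-id <subnet-id>
openstack loadbalancer listener create --name listener1 \
    --protocol HTTP --protocol-port 80 lb1
openstack loadbalancer pool create --name pool1 \
    --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP

# Each member creation triggers a configuration update and a reload
# of the per-LB haproxy service inside the amphora.
for i in $(seq 1 50); do
    openstack loadbalancer member create --subnet-id <subnet-id> \
        --address 192.0.2.$i --protocol-port 80 pool1
done

# Inside the amphora, old haproxy workers that should have exited
# after each reload can be seen piling up, and free memory shrinking:
ps -ef | grep haproxy
free -m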
Test cases that previously failed because of this issue passed successfully when run in CI (OSP 16.2, active/standby). Moving this BZ to VERIFIED status.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Release of components for Red Hat OpenStack Platform 16.2.2), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:1001