Bug 1976826 - HAProxy processes consume too much memory in ACTIVE_STANDBY topology
Summary: HAProxy processes consume too much memory in ACTIVE_STANDBY topology
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-octavia
Version: 16.2 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: z2
Target Release: 16.2 (Train on RHEL 8.4)
Assignee: Gregory Thiemonge
QA Contact: Bruna Bonguardo
URL:
Whiteboard:
Depends On: 1975790
Blocks: 1907965
 
Reported: 2021-06-28 10:29 UTC by Gregory Thiemonge
Modified: 2022-03-23 22:11 UTC
CC: 9 users

Fixed In Version: openstack-octavia-5.1.3-2.20210707174812.b1cc446.el8ost
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1975790
Environment:
Last Closed: 2022-03-23 22:10:48 UTC
Target Upstream Version:
Embargoed:


Attachments: None


Links
System ID Private Priority Status Summary Last Updated
OpenStack Storyboard 2009005 0 None None None 2021-06-28 10:31:42 UTC
OpenStack gerrit 797882 0 None MERGED Enable lo interface in the amphora-haproxy netns 2021-07-01 08:16:27 UTC
OpenStack gerrit 798989 0 None MERGED Enable lo interface in the amphora-haproxy netns 2021-11-15 09:57:37 UTC
Red Hat Issue Tracker OSP-5492 0 None None None 2021-11-15 09:37:49 UTC
Red Hat Product Errata RHBA-2022:1001 0 None None None 2022-03-23 22:11:15 UTC

Description Gregory Thiemonge 2021-06-28 10:29:54 UTC
+++ This bug was initially created as a clone of Bug #1975790 +++

Description of problem:

When using a load balancer in ACTIVE_STANDBY topology, the haproxy instance that runs in the amphora is prone to memory allocation errors, because each haproxy worker consumes a lot of memory, and multiple workers are running at the same time after a configuration update.
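
A quick way to see the leftover workers and their footprint is to list the haproxy processes inside the amphora's network namespace. This is only an illustrative sketch run from an affected amphora; the namespace name amphora-haproxy comes from the linked gerrit reviews, the rest is standard ps/free usage:

  # Run inside an affected amphora (requires root).
  # After each configuration reload, one more haproxy worker can linger,
  # and each worker's RSS adds up until memory allocations start failing.
  sudo ip netns exec amphora-haproxy ps -o pid,ppid,rss,args -C haproxy
  free -m   # overall memory pressure in the amphora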

The visible side effects in the Octavia worker are exceptions raised when calling the amphora-agent:

ERROR oslo_messaging.rpc.server octavia.amphorae.drivers.haproxy.exceptions.InternalServerError: Internal Server Error

ERROR octavia.amphorae.drivers.haproxy.exceptions [XXX - XXX - - -] Amphora agent returned unexpected result code 500 with response {'message': 'Error reloading haproxy', 'details': 'Redirecting to /bin/systemctl reload haproxy-XXX.service\nJob for haproxy-XXX.service canceled.\n'}


Version-Release number of selected component (if applicable):
16.1

How reproducible:
100%

Steps to Reproduce:
1. Create a LB in ACTIVE_STANDBY topology
2. Create a listener and a pool
3. Create many members (see the CLI sketch below)
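
A minimal reproduction sketch with the openstack CLI, assuming the deployment default loadbalancer_topology is set to ACTIVE_STANDBY in octavia.conf (all names and addresses below are illustrative):

  openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet
  # wait for lb1 to go ACTIVE between each of the following calls
  openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
  openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
  # each member creation triggers a configuration update and a haproxy reload
  for i in $(seq 1 20); do
    openstack loadbalancer member create --subnet-id private-subnet --address 192.0.2.$i --protocol-port 80 pool1
  done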

Actual results:
After each member creation, one more haproxy process remains that should have been cleaned up; memory consumption can be significant, depending on the listener configuration


Expected results:
A configuration change should not reduce the available memory in the amphora


Additional info:

Detailed report can be found in the upstream story https://storyboard.openstack.org/#!/story/2009005
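
For reference, the merged upstream change (gerrit 797882 / 798989 in the links above) enables the lo interface inside the amphora-haproxy network namespace. As a hedged illustration only, checking for and applying the same state manually on an affected amphora would look like:

  sudo ip netns exec amphora-haproxy ip link show lo   # expected to show state DOWN on affected amphorae
  sudo ip netns exec amphora-haproxy ip link set lo up # the state the fix ensures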

Comment 8 Omer Schwartz 2022-02-03 08:44:48 UTC
Test cases that previously failed because of this issue passed when run in CI (OSP 16.2, Active standby).
Moving this BZ to verified status.

Comment 15 errata-xmlrpc 2022-03-23 22:10:48 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Release of components for Red Hat OpenStack Platform 16.2.2), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:1001

