Bug 1724480 - Memory usage issue in octavia amphora with haproxy 1.8.x and 2 listeners
Summary: Memory usage issue in octavia amphora with haproxy 1.8.x and 2 listeners
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-octavia
Version: 15.0 (Stein)
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: urgent
Target Milestone: rc
Target Release: 15.0 (Stein)
Assignee: Michael Johnson
QA Contact: Bruna Bonguardo
URL:
Whiteboard:
Depends On:
Blocks: 1693268
 
Reported: 2019-06-27 07:35 UTC by Gregory Thiemonge
Modified: 2019-09-26 10:53 UTC
CC List: 12 users

Fixed In Version: openstack-octavia-4.0.2-0.20190828154421.4b7fe7f.el8ost
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-09-21 11:23:46 UTC
Target Upstream Version:


Links
System ID Private Priority Status Summary Last Updated
OpenStack Storyboard 2005412 0 None None None 2019-06-27 07:35:10 UTC
OpenStack gerrit 668068 0 None MERGED Fix multi-listener load balancers 2021-02-02 12:32:20 UTC
OpenStack gerrit 673518 0 None MERGED Fix multi-listener load balancers 2021-02-02 12:32:20 UTC
OpenStack gerrit 674229 0 None MERGED Fix listener deletion in ACTIVE/STANDBY topology 2021-02-02 12:32:20 UTC
OpenStack gerrit 675063 0 None MERGED Fix listener deletion in ACTIVE/STANDBY topology 2021-02-02 12:32:20 UTC
Red Hat Product Errata RHEA-2019:2811 0 None None None 2019-09-21 11:24:14 UTC

Description Gregory Thiemonge 2019-06-27 07:35:11 UTC
Description of problem:
Description of problem:
Since version 1.8.x (the version shipped in RHEL 8), haproxy consumes at least 160MB at init time with the default configuration provided by Octavia. When a LB is updated, the haproxy configuration is rewritten and a signal is sent to haproxy to reload it.

When haproxy reloads its configuration, it creates a new worker, performs its allocations (up to 160MB), then destroys the previous worker. For a short time memory consumption is therefore increased, and if two processes reload at the same time, the reload may fail with a "Cannot fork" error:


Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal systemd[1]: Reloading.                                                                     
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal systemd[1]: Reloading.                                                                                                                                                 
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal systemd[1]: Reloading HAProxy Load Balancer.                                                                                                                            
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal haproxy[4203]: Configuration file is valid                                                                                                                 
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal ip[3908]: [WARNING] 098/120329 (3908) : Reexecuting Master process                                                                                                      
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal haproxy[3908]: Proxy f1075bcd-09df-496f-b5e1-ad9d1f7946e2 started.
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal ip[3908]: [WARNING] 098/120343 (3908) : [/usr/sbin/haproxy.main()] Cannot raise FD limit to 2500032, limit is 2097152.
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal haproxy[3908]: Proxy f1075bcd-09df-496f-b5e1-ad9d1f7946e2 started.
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal haproxy[3908]: Proxy 4bd6bad2-fc01-4aba-8bea-523a218c49dc started.
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal haproxy[3908]: Proxy 4bd6bad2-fc01-4aba-8bea-523a218c49dc started.
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal haproxy[4108]: Stopping frontend f1075bcd-09df-496f-b5e1-ad9d1f7946e2 in 0 ms.
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal ip[3908]: [WARNING] 098/120343 (3908) : [/usr/sbin/haproxy.main()] FD limit (2097152) too low for maxconn=1000000/maxsock=2500032. Please raise 'ulimit-n' to 2500032 or more to avoid any trouble.
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal ip[3908]: [ALERT] 098/120343 (3908) : [/usr/sbin/haproxy.main()] Cannot fork.
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal ip[3908]: [WARNING] 098/120343 (3908) : Reexecuting Master process in waitpid mode
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal ip[3908]: [WARNING] 098/120343 (3908) : Reexecuting Master process
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal haproxy[4108]: Stopping frontend f1075bcd-09df-496f-b5e1-ad9d1f7946e2 in 0 ms.
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal haproxy[4108]: Stopping backend 4bd6bad2-fc01-4aba-8bea-523a218c49dc in 0 ms.
Apr 09 12:03:43 amphora-24dd8f5a-4e09-4b8b-a794-58ea4607363d.novalocal haproxy[4108]: Stopping backend 4bd6bad2-fc01-4aba-8bea-523a218c49dc in 0 ms.


Version-Release number of selected component (if applicable):
OSP15

How reproducible:
100%

Steps to Reproduce:
1. create 2 listeners that use the same pool
2. add a new member in the pool
3. both listeners are reloaded at the same time, which triggers the memory allocation failure (see the example commands below)
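
For reference, the reproduction can be scripted with the standard openstack CLI along these lines (an illustrative sketch only; the load balancer, listener, pool, and member names, the subnet ID, the member address, and the ports are placeholders):

openstack loadbalancer create --name lb1 --vip-subnet-id <subnet-id>
openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
openstack loadbalancer pool create --name pool1 --listener listener1 --protocol HTTP --lb-algorithm ROUND_ROBIN
# second listener reusing the same pool as its default pool
openstack loadbalancer listener create --name listener2 --protocol HTTP --protocol-port 8080 --default-pool pool1 lb1
# adding a member causes both listener configurations to be rewritten and reloaded
openstack loadbalancer member create --subnet-id <subnet-id> --address 192.0.2.10 --protocol-port 80 pool1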

Actual results:
haproxy fails to reload ("Cannot fork") and the amphora stops serving traffic

Expected results:
Both listeners reload successfully and the load balancer continues to serve traffic.

Additional info:

Comment 7 Michael Johnson 2019-07-09 18:03:00 UTC
This issue can be worked around by lowering the per-listener connection limit.

By default this setting is -1, which means unlimited connections.

To work around this issue, set each listener's connection limit to a value appropriate for the application.
The lower the per-listener connection limit, the more listeners a load balancer can handle.

For example, a load balancer with twenty listeners can support a connection limit of 100,000 on each listener (20 x 100,000 = 2,000,000 connections in total).

Via the API this setting is "connection_limit" on the listener.
Via the CLI this setting is "--connection-limit" on the listener.
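
For example, to cap an existing listener at 100,000 connections via the CLI (illustrative command; the listener name/ID is a placeholder):

openstack loadbalancer listener set --connection-limit 100000 <listener-name-or-id>

With lower limits, haproxy should reserve proportionally fewer file descriptors and less memory on the next configuration reload.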

This is an interim workaround until a patch can be made available for HAProxy 1.8-based amphorae.

Comment 34 errata-xmlrpc 2019-09-21 11:23:46 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2019:2811

