Bug 2151893 - customer sees "TCP: out of memory -- consider tuning tcp_mem" in the amphora
Summary: customer sees "TCP: out of memory -- consider tuning tcp_mem" in the amphora
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-octavia
Version: 16.2 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: urgent
Severity: high
Target Milestone: z5
Target Release: 16.2 (Train on RHEL 8.4)
Assignee: Gregory Thiemonge
QA Contact: Omer Schwartz
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-12-08 13:41 UTC by Gregory Thiemonge
Modified: 2023-04-26 12:17 UTC
CC List: 10 users

Fixed In Version: openstack-octavia-5.1.3-2.20221208224819.1b737b8.el8ost
Doc Type: Bug Fix
Doc Text:
Before this update, inadequate TCP buffer sizes resulted in TCP out-of-memory warnings in the amphora, and the smaller buffers could degrade TCP flows with large payloads. This update increases the size of the TCP buffers in the amphora, improving the reliability of TCP connections and resolving the issue.
Clone Of:
Environment:
Last Closed: 2023-04-26 12:17:18 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 866712 0 None MERGED Increase TCP buffer maximum and MTU discovery 2022-12-08 13:41:05 UTC
Red Hat Issue Tracker OSP-20815 0 None None None 2022-12-08 13:42:15 UTC
Red Hat Product Errata RHBA-2023:1763 0 None None None 2023-04-26 12:17:51 UTC

Description Gregory Thiemonge 2022-12-08 13:41:06 UTC
Description of problem:
Customer on 16.2.2 observes packet drops in the amphora and the following message in the logs:
"TCP: out of memory -- consider tuning tcp_mem"

Version-Release number of selected component (if applicable):
16.2 (z2)

How reproducible:
It happens in production environments during busy hours.

Steps to Reproduce:
1.
2.
3.

Actual results:


Expected results:


Additional info:
There's an upstream commit that defines better defaults for the tcp_*mem values: https://review.opendev.org/c/openstack/octavia/+/866712
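
As an illustrative diagnostic only (not part of the original report; the amphora name is a placeholder), TCP memory pressure can be confirmed by comparing the kernel's socket accounting with the tcp_mem thresholds, both of which are expressed in memory pages:

[cloud-user@amphora-xxxxxxxx ~]$ cat /proc/net/sockstat          # "TCP: ... mem <pages>" is the memory currently used by TCP buffers
[cloud-user@amphora-xxxxxxxx ~]$ sudo sysctl net.ipv4.tcp_mem    # low / pressure / high thresholds, also in pages

When the "mem" field of the TCP line in /proc/net/sockstat reaches the third (high) tcp_mem value, the kernel starts dropping data and logs "TCP: out of memory -- consider tuning tcp_mem".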

Comment 3 Gregory Thiemonge 2022-12-08 14:05:37 UTC
Current values:

[cloud-user@amphora-e3059233-128f-4254-acbe-31f78d6a7ce0 ~]$ sudo sysctl -a | grep -e tcp_.mem -e core..mem_max
net.core.rmem_max = 212992
net.core.wmem_max = 212992
net.ipv4.tcp_rmem = 16384       65536   524288
net.ipv4.tcp_wmem = 16384       349520  699040

Comment 5 Gregory Thiemonge 2022-12-08 14:55:14 UTC
(In reply to Gregory Thiemonge from comment #3)
> Current values:
> 
> [cloud-user@amphora-e3059233-128f-4254-acbe-31f78d6a7ce0 ~]$ sudo sysctl -a
> | grep -e tcp_.mem -e core..mem_max
> net.core.rmem_max = 212992
> net.core.wmem_max = 212992
> net.ipv4.tcp_rmem = 16384       65536   524288
> net.ipv4.tcp_wmem = 16384       349520  699040

New values should be:

[cloud-user@amphora-67fdda2f-fff4-4597-ae67-994fa7bfddfa ~]$ sudo sysctl -a | grep -e tcp_.mem -e core..mem_max
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096        87380   33554432
net.ipv4.tcp_wmem = 4096        87380   33554432
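
For reference only, not a supported procedure (the fix is delivered in the updated amphora image): equivalent values could be applied at runtime on a test amphora to verify the effect. The amphora name is a placeholder, and the changes do not persist across a reboot or failover:

[cloud-user@amphora-xxxxxxxx ~]$ sudo sysctl -w net.core.rmem_max=67108864
[cloud-user@amphora-xxxxxxxx ~]$ sudo sysctl -w net.core.wmem_max=67108864
[cloud-user@amphora-xxxxxxxx ~]$ sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 33554432"
[cloud-user@amphora-xxxxxxxx ~]$ sudo sysctl -w net.ipv4.tcp_wmem="4096 87380 33554432"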

Comment 25 errata-xmlrpc 2023-04-26 12:17:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 16.2.5 (Train) bug fix and enhancement advisory), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2023:1763

