Bug 1804412

Summary: Add parameter to control Octavia amphora connection_logging
Product: Red Hat OpenStack Reporter: Michael Johnson <michjohn>
Component: openstack-octavia    Assignee: Michael Johnson <michjohn>
Status: CLOSED ERRATA QA Contact: Bruna Bonguardo <bbonguar>
Severity: medium Docs Contact:
Priority: high    
Version: 13.0 (Queens)    CC: cgoncalves, ihrachys, jamsmith, lpeer, majopela, scohen
Target Milestone: z12    Keywords: Triaged, ZStream
Target Release: 13.0 (Queens)   
Hardware: Unspecified   
OS: Unspecified   
Whiteboard:
Fixed In Version: openstack-octavia-2.1.2-3.el7ost Doc Type: Release Note
Doc Text:
This update lets you disable connection logging inside the amphora.
Story Points: ---
Clone Of: Environment:
Last Closed: 2020-06-24 11:53:05 UTC Type: Bug
Regression: --- Mount Type: ---
Documentation: --- CRM:
Verified Versions: Category: ---
oVirt Team: --- RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: --- Target Upstream Version:
Embargoed:
Bug Depends On:    
Bug Blocks: 1759254    

Description Michael Johnson 2020-02-18 18:45:48 UTC
Amphorae currently log connections. Logging the traffic flows impacts performance. It would be nice if Director could expose a parameter to enable/disable connection flow logging in amphorae.

This feature became available upstream in Rocky, so it is being backported downstream to Queens.
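
For illustration, a minimal sketch of how this could look from the Director side, assuming the parameter is exposed as OctaviaConnectionLogging (the name used by the upstream tripleo-heat-templates change); the environment file name below is hypothetical:

  # octavia-connection-logging.yaml (hypothetical file name)
  parameter_defaults:
    OctaviaConnectionLogging: false

  # Include it in the overcloud deploy alongside the existing environment files:
  (undercloud) [stack@undercloud-0 ~]$ openstack overcloud deploy --templates \
      -e <existing environment files> \
      -e octavia-connection-logging.yaml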

Comment 11 Bruna Bonguardo 2020-05-29 15:07:14 UTC
(overcloud) [stack@undercloud-0 ~]$ rpm -qa | grep openstack-tripleo-heat-templates
openstack-tripleo-heat-templates-8.4.1-58.el7ost.noarch
(overcloud) [stack@undercloud-0 ~]$ rpm -qa | grep puppet-octavia-
puppet-octavia-12.4.0-16.el7ost.noarch
(overcloud) [stack@undercloud-0 ~]$ cat /var/lib/rhos-release/latest-installed
13  -p 2020-05-19.2


# "connection_logging" is True in the octavia.conf file (inside the controllers):

[root@controller-0 ~]# cat /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf | grep connection_logging
# connection_logging = True
connection_logging=True

# From the undercloud, traffic was sent to the load balancer's VIP:

[stack@undercloud-0 ~]$ req='curl 10.0.0.235'; for i in {1..10}; do $req; echo; done
lbtreevmshttp-server1-3pajopi2af5i
lbtreevmshttp-server2-dhjb3zqoakrn
lbtreevmshttp-server1-3pajopi2af5i
lbtreevmshttp-server2-dhjb3zqoakrn
lbtreevmshttp-server1-3pajopi2af5i
lbtreevmshttp-server2-dhjb3zqoakrn
lbtreevmshttp-server1-3pajopi2af5i
lbtreevmshttp-server2-dhjb3zqoakrn
lbtreevmshttp-server1-3pajopi2af5i
lbtreevmshttp-server2-dhjb3zqoakrn


# Inside the amphora, the connections made in the previous step are logged:

[cloud-user@amphora-1ece9c77-eea5-47b6-a6c8-777ada305cc7 ~]$ sudo journalctl | grep haproxy

May 29 10:10:46 amphora-1ece9c77-eea5-47b6-a6c8-777ada305cc7 haproxy[1817]: 10.0.0.78:48158 [29/May/2020:10:10:46.319] e2cf723c-220f-477d-81c5-3a0c1c993d3d a6c8663f-cca9-461e-a79a-14c6947d3601/06d8d50e-c42e-4912-b9fe-ed963561be78 0/0/3/15/18 200 73 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
May 29 10:10:46 amphora-1ece9c77-eea5-47b6-a6c8-777ada305cc7 haproxy[1817]: 10.0.0.78:48160 [29/May/2020:10:10:46.349] e2cf723c-220f-477d-81c5-3a0c1c993d3d a6c8663f-cca9-461e-a79a-14c6947d3601/533ad006-d430-41ab-9d8d-91081dcc930a 0/0/0/17/17 200 73 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
May 29 10:10:46 amphora-1ece9c77-eea5-47b6-a6c8-777ada305cc7 haproxy[1817]: 10.0.0.78:48162 [29/May/2020:10:10:46.375] e2cf723c-220f-477d-81c5-3a0c1c993d3d a6c8663f-cca9-461e-a79a-14c6947d3601/06d8d50e-c42e-4912-b9fe-ed963561be78 0/0/0/16/16 200 73 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
May 29 10:10:46 amphora-1ece9c77-eea5-47b6-a6c8-777ada305cc7 haproxy[1817]: 10.0.0.78:48164 [29/May/2020:10:10:46.402] e2cf723c-220f-477d-81c5-3a0c1c993d3d a6c8663f-cca9-461e-a79a-14c6947d3601/533ad006-d430-41ab-9d8d-91081dcc930a 0/0/1/21/22 200 73 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
May 29 10:10:46 amphora-1ece9c77-eea5-47b6-a6c8-777ada305cc7 haproxy[1817]: 10.0.0.78:48166 [29/May/2020:10:10:46.434] e2cf723c-220f-477d-81c5-3a0c1c993d3d a6c8663f-cca9-461e-a79a-14c6947d3601/06d8d50e-c42e-4912-b9fe-ed963561be78 0/0/0/17/17 200 73 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
May 29 10:10:46 amphora-1ece9c77-eea5-47b6-a6c8-777ada305cc7 haproxy[1817]: 10.0.0.78:48168 [29/May/2020:10:10:46.462] e2cf723c-220f-477d-81c5-3a0c1c993d3d a6c8663f-cca9-461e-a79a-14c6947d3601/533ad006-d430-41ab-9d8d-91081dcc930a 0/0/1/17/18 200 73 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"
May 29 10:10:46 amphora-1ece9c77-eea5-47b6-a6c8-777ada305cc7 haproxy[1817]: 10.0.0.78:48170 [29/May/2020:10:10:46.490] e2cf723c-220f-477d-81c5-3a0c1c993d3d a6c8663f-cca9-461e-a79a-14c6947d3601/06d8d50e-c42e-4912-b9fe-ed963561be78 0/0/1/17/18 200 73 - - ---- 1/1/0/1/0 0/0 "GET / HTTP/1.1"


# "connection_logging" was changed to False in the octavia.conf file (inside the controllers):

[root@controller-0 ~]# cat /var/lib/config-data/puppet-generated/octavia/etc/octavia/octavia.conf | grep connection_logging
# connection_logging = True
connection_logging=False
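
# Note (not shown in the transcript): for the changed value to be picked up by the Octavia control plane, the containerized Octavia services that read octavia.conf have to be restarted. A sketch of that step, with container names assumed from the OSP 13 defaults:

[root@controller-0 ~]# docker restart octavia_api octavia_worker octavia_health_manager octavia_housekeeping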

# A failover of the load balancer was triggered, because the connection_logging setting is only passed to the amphora at amphora creation time:
(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer failover lbtreevmshttp-loadbalancer-wvdesgex2x7o
[2020-05-29 10:51:19] (overcloud) [stack@undercloud-0 ~]$ 
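
# After the failover the load balancer is served by a replacement amphora (note the different amphora hostname in the journal output below). One way to list the amphorae and spot the new one, if the amphora subcommands are available in the installed python-octaviaclient (they may not be in older Queens client versions):

(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer amphora list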


# From the undercloud, traffic was sent to the load balancer's VIP:

[stack@undercloud-0 ~]$ req='curl 10.0.0.235'; for i in {1..10}; do $req; echo; done
lbtreevmshttp-server1-3pajopi2af5i
lbtreevmshttp-server2-dhjb3zqoakrn
lbtreevmshttp-server1-3pajopi2af5i
lbtreevmshttp-server2-dhjb3zqoakrn
lbtreevmshttp-server1-3pajopi2af5i
lbtreevmshttp-server2-dhjb3zqoakrn
lbtreevmshttp-server1-3pajopi2af5i
lbtreevmshttp-server2-dhjb3zqoakrn
lbtreevmshttp-server1-3pajopi2af5i
lbtreevmshttp-server2-dhjb3zqoakrn


# Inside the new amphora, the connections are no longer logged:
[cloud-user@amphora-c9329006-56f4-4381-87bd-b916c02a745d ~]$ sudo journalctl | grep haproxy
May 29 10:52:27 amphora-c9329006-56f4-4381-87bd-b916c02a745d amphora-agent[814]: 2020-05-29 10:52:27.697 1201 WARNING octavia.amphorae.backends.agent.api_server.haproxy_compatibility [-] Found 1.5 version of haproxy. Disabling external checks. Health monitor of type PING will revert to TCP.: error: [Errno 11] Resource temporarily unavailable
May 29 10:52:27 amphora-c9329006-56f4-4381-87bd-b916c02a745d systemd[1]: Starting Configure amphora-haproxy network namespace...
May 29 10:52:28 amphora-c9329006-56f4-4381-87bd-b916c02a745d systemd[1]: Started Configure amphora-haproxy network namespace.
May 29 10:52:28 amphora-c9329006-56f4-4381-87bd-b916c02a745d ip[1271]: haproxy-systemd-wrapper: executing /usr/sbin/haproxy -f /var/lib/octavia/e2cf723c-220f-477d-81c5-3a0c1c993d3d/haproxy.cfg -f /var/lib/octavia/haproxy-default-user-group.conf -p /var/lib/octavia/e2cf723c-220f-477d-81c5-3a0c1c993d3d/e2cf723c-220f-477d-81c5-3a0c1c993d3d.pid -L BKNsJFgU-04GMTUxYv55oBEV34g -Ds
[cloud-user@amphora-c9329006-56f4-4381-87bd-b916c02a745d ~]$ 
[cloud-user@amphora-c9329006-56f4-4381-87bd-b916c02a745d ~]$ 
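
# As an extra check (not part of the transcript above), the haproxy configuration rendered inside the new amphora can be inspected for per-connection log directives, using the config path shown in the haproxy-systemd-wrapper command above; with connection logging disabled, no such log lines are expected in the frontend section:

[cloud-user@amphora-c9329006-56f4-4381-87bd-b916c02a745d ~]$ sudo grep -i log /var/lib/octavia/e2cf723c-220f-477d-81c5-3a0c1c993d3d/haproxy.cfg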

Moving the bug to VERIFIED.

Comment 13 errata-xmlrpc 2020-06-24 11:53:05 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2724