Bug 1881362 - [Octavia][16.1] Amphora tenant flow log messages join administrative log file instead of having their own file
Keywords:
Status: VERIFIED
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-tripleo-heat-templates
Version: 16.1 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: beta
Target Release: 16.2 (Train on RHEL 8.4)
Assignee: Michael Johnson
QA Contact: Omer Schwartz
URL:
Whiteboard:
Depends On: 1856835 1883536
Blocks:
 
Reported: 2020-09-22 09:21 UTC by Omer Schwartz
Modified: 2021-07-01 10:21 UTC (History)
9 users

Fixed In Version: openstack-tripleo-heat-templates-11.4.1-2.20210323012110.c3396e2.el8ost.1
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed:
Target Upstream Version:




Links
System ID Private Priority Status Summary Last Updated
OpenStack gerrit 754224 0 None MERGED Fix Octavia OctaviaTenantLogFacility default 2021-02-10 07:41:39 UTC
OpenStack gerrit 758148 0 None MERGED Fix Octavia OctaviaTenantLogFacility default 2021-05-05 14:55:14 UTC
Red Hat Bugzilla 1883954 0 unspecified CLOSED Amphora tenant flow log messages join administrative log file instead of having their own file 2021-02-22 00:41:40 UTC

Internal Links: 1883954

Description Omer Schwartz 2020-09-22 09:21:21 UTC
Description of problem:
Using the Octavia Log offloading feature, tenant flow log messages join the administrative log messages at 'octavia-amphora.log' file instead of having their own file named 'octavia-tenant-traffic.log'.

For example, I sent some traffic, and the last log message in the 'octavia-amphora.log' file is a tenant flow message:

[root@controller-0 octavia]# tail -n 1 octavia-amphora.log
Sep 22 05:13:37 amphora-ce6df23f-5c40-40ec-b20f-3217a581189c haproxy[5315]: {{ project_id }} {{ lb_id }} f9878496-ef26-40a2-9827-bfe304bd8af2 10.0.0.29 33218 22/Sep/2020:05:13:37.085 r 200 69 74 - [ssl_c_s_dn] 5213aaaa-b2a8-4edb-b397-d55544f3adbf:f9878496-ef26-40a2-9827-bfe304bd8af2 8bc46576-d6e7-44d1-a45d-2c64b9495448 2 ----
[root@controller-0 octavia]#

There is no 'octavia-tenant-traffic.log' file:

[root@controller-0 octavia]# ls -la | grep tenant
[root@controller-0 octavia]#

Version-Release number of selected component (if applicable):
(overcloud) [stack@undercloud-0 ~]$ cat /var/lib/rhos-release/latest-installed
16.1  -p RHOS-16.1-RHEL-8-20200813.n.0

How reproducible:
100%, I managed to reproduce it in my environment.

Steps to Reproduce:
1. Deploy OSP 16.1 in HA
2. Set OctaviaLogOffload: true in /home/stack/virt/extra_templates.yaml
3. Due to bug https://bugzilla.redhat.com/show_bug.cgi?id=1856835, copy the templates folder in the following way:
sudo cp -r /usr/share/ansible/roles/octavia_controller_post_config/templates /usr/share/ansible/roles/octavia-controller-post-config/templates
4. run overcloud_deploy.sh
5. Create a LB and send some traffic.
6. Check if there is any 'octavia-tenant-traffic.log' file in the controller.
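The environment file from step 2 can be sketched minimally as follows, assuming only that OctaviaLogOffload sits under parameter_defaults (as described in comment 4); a real extra_templates.yaml will carry other parameters as well:

```yaml
# /home/stack/virt/extra_templates.yaml (sketch -- your file may contain other parameters)
parameter_defaults:
  # Enables amphora log offloading to the controller
  OctaviaLogOffload: true
```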


Actual results:
There is no 'octavia-tenant-traffic.log' file in the controller, but the tenant flow log messages joined the 'octavia-amphora.log' file.

Expected results:
Tenant flow logs should be in 'octavia-tenant-traffic.log' file.
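For context, the expected separation is done by syslog facility: the amphora emits administrative and tenant flow messages on different facilities, and rsyslog on the controller routes each facility to its own file. The titles of the linked gerrit changes ("Fix Octavia OctaviaTenantLogFacility default") suggest a wrong facility default caused both streams to land in one file. A hedged sketch of such a routing rule (the facility numbers assume Octavia's documented defaults of local0 for tenant flow and local1 for administrative logs, and the paths are the OSP 16.2 locations; the shipped TripleO template may differ):

```
# Illustrative rsyslog routing sketch -- NOT the shipped TripleO template.
# Assumed defaults: tenant flow logs on facility local0,
# administrative amphora logs on facility local1.
local1.*  /var/log/containers/octavia-amphorae/octavia-amphora.log
local0.*  /var/log/containers/octavia-amphorae/octavia-tenant-traffic.log
```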

Comment 4 Omer Schwartz 2020-09-24 09:46:18 UTC
Hi Michael,

Thanks for the information.

Actually, I do use an environment file (am I wrong?)

Steps to Reproduce:
.
2. Change the flag of the OctaviaLogOffload: true in /home/stack/virt/extra_templates.yaml
.
.


I set the 'OctaviaLogOffload: true' flag under 'parameter_defaults:' on extra_templates.yaml.

Afterwards I re-ran overcloud_deploy.sh, which includes extra_templates.yaml by default:
(This is the overcloud_deploy.sh file I use)

openstack overcloud deploy \
.
.
-e /home/stack/virt/extra_templates.yaml \
.
.


Anyway, for now, due to bug https://bugzilla.redhat.com/show_bug.cgi?id=1856835, I can't deploy the amphora log offloading feature, so I don't have an environment to provide until I find some kind of workaround.

Comment 5 Omer Schwartz 2020-09-24 14:13:40 UTC
A workaround for the bug I mentioned in comment number 4: https://review.opendev.org/#/c/754079

I provided Michael my host/system.

Comment 11 Omer Schwartz 2021-07-01 10:21:59 UTC
Fix is verified.

Verification steps:

1. I deployed OSP 16.2 (as mentioned in the Target Release)

2. Added the flag "OctaviaLogOffload: true" in /home/stack/virt/extra_templates.yaml

3. Ran overcloud_deploy.sh -> Ansible passed. Overcloud configuration completed.

4. Due to https://bugzilla.redhat.com/show_bug.cgi?id=1976115, I had to restart the octavia_rsyslog container so it would be properly configured:

[root@controller-0 octavia-amphorae]# podman restart octavia_rsyslog
e7b99ddac68ad12c4074174f9bedbd36e5154a9c3e2e26e6513261f14b24fdb4

5. I created an LB and sent some traffic.

(overcloud) [stack@undercloud-0 ~]$ req="curl $LB_FIP"; for i in {1..10}; do $req;echo; done
lb-tree-server2-um5eygv73ncg
lb-tree-server1-pee4qmrxrxzh
lb-tree-server2-um5eygv73ncg
lb-tree-server1-pee4qmrxrxzh
lb-tree-server2-um5eygv73ncg
lb-tree-server1-pee4qmrxrxzh
lb-tree-server2-um5eygv73ncg
lb-tree-server1-pee4qmrxrxzh
lb-tree-server2-um5eygv73ncg
lb-tree-server1-pee4qmrxrxzh

6. Made sure there are two files in the log offloading directory (a new directory as of OSP 16.2): one for administrative amphora logs and one for tenant traffic logs:

[root@controller-0 /]# ll /var/log/containers/octavia-amphorae/
total 24
-rw-r--r--. 1 root root 17977 Jul  1 09:41 octavia-amphora.log
-rw-r--r--. 1 root root  3700 Jul  1 09:54 octavia-tenant-traffic.log

(overcloud) [stack@undercloud-0 ~]$ cat /var/lib/rhos-release/latest-installed
16.2  -p RHOS-16.2-RHEL-8-20210630.n.0
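The check in step 6 can be sketched as a small shell helper (the directory path is the OSP 16.2 default shown above; the function name is hypothetical and not part of any OSP tooling):

```shell
# Illustrative sketch: report whether both offloaded log files exist.
# check_offload_logs is a hypothetical helper, not shipped with OSP.
check_offload_logs() {
    # Default to the OSP 16.2 log offloading directory
    dir="${1:-/var/log/containers/octavia-amphorae}"
    missing=""
    for f in octavia-amphora.log octavia-tenant-traffic.log; do
        [ -e "$dir/$f" ] || missing="$missing $f"
    done
    # Print "ok" when both files are present, otherwise list what is missing
    [ -z "$missing" ] && echo "ok" || echo "missing:$missing"
}
```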

I am moving the BZ status to VERIFIED.

