Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1996756

Summary: LB members in ERROR become ONLINE when adding new members
Product: Red Hat OpenStack Reporter: Joaquín Veira <jveiraca>
Component: openstack-octavia    Assignee: Gregory Thiemonge <gthiemon>
Status: CLOSED ERRATA QA Contact: Bruna Bonguardo <bbonguar>
Severity: medium Docs Contact:
Priority: medium    
Version: 16.1 (Train)    CC: gthiemon, jelynch, joflynn, lpeer, majopela, scohen
Target Milestone: z9    Keywords: Triaged
Target Release: 16.1 (Train on RHEL 8.2)   
Hardware: x86_64   
OS: Linux   
Whiteboard:
Fixed In Version: openstack-octavia-5.0.3-1.20220223073829.8c32d2e.el8ost Doc Type: Bug Fix
Doc Text:
Before this update, members in the ERROR operating status might have been updated briefly to ONLINE during a Load Balancer configuration change. With this update, the issue is fixed.
Story Points: ---
Clone Of:
Clones: 1997128, 2057007    Environment:
Last Closed: 2022-12-07 20:25:25 UTC Type: Bug
Embargoed:
Bug Depends On:    
Bug Blocks: 2057007    

Description Joaquín Veira 2021-08-23 15:24:47 UTC
Description of problem:
We run OpenShift (4.6) on OpenStack (16.1).
We noticed that the load balancer pool contains only 2 ONLINE members:
```
$ openstack loadbalancer member list 56bcc29c-7624-4b2d-8935-229472ca2316 -c operating_status -f value | sort | uniq -c
    132 ERROR
      2 ONLINE
```
In this particular case, the loadbalancer is expected to balance the load between 2 servers, yet there are 132 useless workers in the member list.

The problem arises in failover scenarios:
- When OpenStack triggers a failover of the master amphora, the traffic is taken over by the backup amphora until the master amphora becomes active again. When the traffic comes back to the master amphora, Octavia considers all the pool members ONLINE and is not able to balance the load properly.
- When a new load balancer member is added to the pool, every single pool member is marked as ONLINE until the amphora marks them as ERROR again.
- In our case, we have had a load balancer in ERROR state for 2 months. We just triggered a failover of this load balancer: OpenShift is triggering a lot of additions/deletions of members in the pool, causing the load balancer to go to PENDING_UPDATE state and all the pool members to go ONLINE. This has been going on for more than 1 hour now.
- The load balancer also always shows a DEGRADED operating_status because some of the pool members are in ERROR.
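The failover and status checks described above can be driven from the CLI. A hedged sketch, assuming a live cloud; `<lb-id>` stands for the (not shown) load balancer that owns the pool listed earlier:

```
# Trigger an amphora failover of the load balancer:
openstack loadbalancer failover <lb-id>
# The LB goes to PENDING_UPDATE while members flip to ONLINE:
openstack loadbalancer show <lb-id> -c provisioning_status -c operating_status
# Count member operating statuses in the pool:
openstack loadbalancer member list 56bcc29c-7624-4b2d-8935-229472ca2316 \
    -c operating_status -f value | sort | uniq -c
```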

I think the problem is twofold:
- OpenShift should only add into the loadbalancer the workers that can support the traffic
- Octavia should try its best to keep the member state after a haproxy restart (through something like haproxy server-state-file option)
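For the second point, a minimal haproxy snippet illustrating the state-file mechanism; this is not Octavia's rendered configuration, and the backend/server names are made up. The state file also has to be dumped (e.g. via `show servers state` on the stats socket) before each reload for the restore to work:

```
global
    # where server states are dumped before a reload
    server-state-file /var/lib/haproxy/state

defaults
    # restore server states from the global state file after a
    # restart instead of resetting all servers to UP
    load-server-state-from-file global

backend pool_example
    server member1 10.0.0.11:80 check
    server member2 10.0.0.12:80 check
```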

The Octavia behavior is very easy to reproduce: just create a load balancer with a health monitor, add a couple of pool members, and you see the members that were in ERROR going back to ONLINE for a short while.
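The reproduction above can be sketched with the openstack CLI. This assumes a live cloud; the names, addresses, and subnet are hypothetical:

```
openstack loadbalancer create --name lb-repro --vip-subnet-id private-subnet
openstack loadbalancer listener create --name listener1 --protocol HTTP \
    --protocol-port 80 lb-repro
openstack loadbalancer pool create --name pool1 --listener listener1 \
    --protocol HTTP --lb-algorithm ROUND_ROBIN
openstack loadbalancer healthmonitor create --delay 5 --timeout 3 \
    --max-retries 3 --type HTTP pool1
# Add a member whose backend is down; it should go to ERROR:
openstack loadbalancer member create --address 10.0.0.50 --protocol-port 80 pool1
# Once it shows ERROR, add a second member and watch the first one
# briefly flip back to ONLINE:
openstack loadbalancer member create --address 10.0.0.51 --protocol-port 80 pool1
openstack loadbalancer member list pool1 -c address -c operating_status
```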

Version-Release number of selected component (if applicable):
RHOSP 16.1

How reproducible:
OCP on OSP with Octavia

Steps to Reproduce:
1. Create a load balancer with a pool and a health monitor, and add a member whose backend is down so that it goes to ERROR.
2. Add another member to the pool.
3. Watch the operating_status of the existing members: the ones in ERROR briefly go back to ONLINE.

Actual results:
Loadbalancer always shows as DEGRADED.

Expected results:
Members in the ERROR operating status keep that status during load balancer configuration changes.

Additional info:

Comment 21 errata-xmlrpc 2022-12-07 20:25:25 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 16.1.9 bug fix and enhancement advisory), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2022:8795