
Bug 2237251

Summary: Health monitor shows "provisioning_status": "ACTIVE" & "operating_status": "ONLINE" for non-existent members
Product: Red Hat OpenStack
Reporter: Omer Schwartz <oschwart>
Component: python-ovn-octavia-provider
Assignee: Fernando Royo <froyo>
Status: CLOSED ERRATA
QA Contact: Omer Schwartz <oschwart>
Severity: medium
Priority: high
Version: 17.1 (Wallaby)
CC: averdagu, bbonguar, froyo, gbrinn, gregraka, gthiemon, mariel, tweining
Target Milestone: z2
Keywords: Triaged
Target Release: 17.1
Hardware: Unspecified
OS: Unspecified
Fixed In Version: python-ovn-octavia-provider-1.0.3-17.1.20231025110804.d779786.el9ost
Doc Type: Bug Fix
Doc Text:
Before this update, RHOSP environments that used the Load-balancing service (octavia) with the OVN provider and a health monitor caused the load-balancing pool to display a fake member’s status as `ONLINE`. With this update, if you use a health monitor for a pool, the fake load-balancing pool member now has the `ERROR` operating status and the Load Balancer/Listener/Pool operating statuses are updated accordingly.
Last Closed: 2024-01-16 14:30:51 UTC
Type: Bug

Description Omer Schwartz 2023-09-04 12:05:25 UTC
Description of problem:
Deploying Octavia with the OVN provider, I can create members with an invalid/invented IP address (no real server has that address), and the LB shows that everything is fine with them: running `openstack loadbalancer status show <lb>` shows that the members have "provisioning_status": "ACTIVE" & "operating_status": "ONLINE".

For example, I deployed the following:
{
    "loadbalancer": {
        "id": "c50b7cb3-6b8f-434b-9a47-a10a27d0a9b5",
        "name": "tobiko_octavia_ovn_lb",
        "operating_status": "ONLINE",
        "provisioning_status": "ACTIVE",
        "listeners": [
            {
                "id": "87bafdda-0ac6-438f-8824-cb75f9e014df",
                "name": "tobiko_octavia_tcp_listener",
                "operating_status": "ONLINE",
                "provisioning_status": "ACTIVE",
                "pools": [
                    {
                        "id": "aa6ed64c-4d19-448b-969d-6cc686385162",
                        "name": "tobiko_octavia_tcp_pool",
                        "provisioning_status": "ACTIVE",
                        "operating_status": "ONLINE",
                        "health_monitor": {
                            "id": "cc72e7eb-722b-49be-b3d2-3857f880346d",
                            "name": "hm_ovn_provider",
                            "type": "TCP",
                            "provisioning_status": "ACTIVE",
                            "operating_status": "ONLINE"
                        },
                        "members": [
                            {
                                "id": "648b9d51-115a-4312-b92e-cc59af0d0401",
                                "name": "fake_member",
                                "operating_status": "ONLINE",
                                "provisioning_status": "ACTIVE",
                                "address": "10.100.0.204",
                                "protocol_port": 80
                            },
                            {
                                "id": "8dae11a2-e2d5-45f9-9e85-50f61fa07753",
                                "name": "tobiko_de3f2a06",
                                "operating_status": "ONLINE",
                                "provisioning_status": "ACTIVE",
                                "address": "10.0.64.34",
                                "protocol_port": 80
                            },
                            {
                                "id": "9b044180-71b4-4fa6-83df-4d0f99b4a3f7",
                                "name": "fake_member2",
                                "operating_status": "ONLINE",
                                "provisioning_status": "ACTIVE",
                                "address": "10.100.0.205",
                                "protocol_port": 80
                            },
                            {
                                "id": "fe9ce8ca-e6b7-4c5b-807c-8e295156df85",
                                "name": "tobiko_6c186a80",
                                "operating_status": "ONLINE",
                                "provisioning_status": "ACTIVE",
                                "address": "10.0.64.39",
                                "protocol_port": 80
                            }
                        ]
                    }
                ]
            }
        ]
    }
}

while the existing servers are the following:
+--------------------------------------+-----------------+--------+----------------------------------------------------------+----------------------------------------------------+------------+
| ID                                   | Name            | Status | Networks                                                 | Image                                              | Flavor     |
+--------------------------------------+-----------------+--------+----------------------------------------------------------+----------------------------------------------------+------------+
| 1e4a4464-4bbf-4107-94e4-974e87c31074 | tobiko_8941a208 | ACTIVE | private=10.0.64.34, fd47:e41c:f56e:0:f816:3eff:fe9f:67f4 | tobiko.openstack.stacks._ubuntu.UbuntuImageFixture | octavia_65 |
| 1a0de4d2-d9ea-4d60-85ff-018bcc00d285 | tobiko_44801dfe | ACTIVE | private=10.0.64.39, fd47:e41c:f56e:0:f816:3eff:fea2:7af9 | tobiko.openstack.stacks._ubuntu.UbuntuImageFixture | octavia_65 |
+--------------------------------------+-----------------+--------+----------------------------------------------------------+----------------------------------------------------+------------+

As an Octavia tester/user, it has happened to me a few times that I made a typo when creating a member (I wrote the IP address wrong), but with the amphora provider the LB went into DEGRADED status and the members into ERROR status. Without that feedback, it would have been much more difficult for me to understand what happened.

Version-Release number of selected component (if applicable):
RHOS-17.1-RHEL-9-20230824.n.1 with python-ovn-octavia-provider-1.0.3-1.20230506160960

How reproducible:
100%

Steps to Reproduce:
1. Deploy an LB with the OVN provider, a listener, a pool, real members (backed by real servers), and fake (invented) members (see the command sketch after this list)
2. Create a health monitor
3. Wait a few minutes
4. Run `openstack loadbalancer status show <lb>`
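
A minimal command sketch of these steps, assuming a subnet named `private-subnet` and reusing the names/addresses from the example above; all names, the subnet, and the health monitor timings are illustrative only:

openstack loadbalancer create --name tobiko_octavia_ovn_lb --vip-subnet-id private-subnet --provider ovn
openstack loadbalancer listener create --name tobiko_octavia_tcp_listener --protocol TCP --protocol-port 80 tobiko_octavia_ovn_lb
openstack loadbalancer pool create --name tobiko_octavia_tcp_pool --listener tobiko_octavia_tcp_listener --protocol TCP --lb-algorithm SOURCE_IP_PORT
# real member (address of an existing server)
openstack loadbalancer member create --name tobiko_de3f2a06 --address 10.0.64.34 --protocol-port 80 tobiko_octavia_tcp_pool
# fake member (no server uses this address)
openstack loadbalancer member create --name fake_member --address 10.100.0.204 --protocol-port 80 tobiko_octavia_tcp_pool
openstack loadbalancer healthmonitor create --name hm_ovn_provider --type TCP --delay 5 --timeout 5 --max-retries 3 tobiko_octavia_tcp_pool
openstack loadbalancer status show tobiko_octavia_ovn_lb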

Actual results:
All members show "operating_status": "ONLINE" & "provisioning_status": "ACTIVE".

Expected results:
I should see ERROR in at least one of the member statuses, and DEGRADED on the LB and pool (see the sketch below).
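
An abbreviated sketch of the kind of `openstack loadbalancer status show` output expected once this is fixed, based on the Doc Text above; the exact operating statuses for the listener are an assumption, and unchanged fields are omitted:

{
    "loadbalancer": {
        "operating_status": "DEGRADED",
        "listeners": [
            {
                "operating_status": "DEGRADED",
                "pools": [
                    {
                        "operating_status": "DEGRADED",
                        "members": [
                            {
                                "name": "fake_member",
                                "address": "10.100.0.204",
                                "operating_status": "ERROR"
                            },
                            {
                                "name": "tobiko_de3f2a06",
                                "address": "10.0.64.34",
                                "operating_status": "ONLINE"
                            }
                        ]
                    }
                ]
            }
        ]
    }
}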

Comment 20 errata-xmlrpc 2024-01-16 14:30:51 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Red Hat OpenStack Platform 17.1.2 bug fix and enhancement advisory), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2024:0209