Before this update, in RHOSP environments that used the Load-balancing service (octavia) with the OVN provider and a health monitor, the load-balancing pool displayed the status of a fake (nonexistent) member as `ONLINE`. With this update, if you use a health monitor for a pool, a fake load-balancing pool member now has the `ERROR` operating status, and the load balancer, listener, and pool operating statuses are updated accordingly.
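For example, for the deployment described in the report below, `openstack loadbalancer status show` should now report the fake member as `ERROR` and the pool as `DEGRADED`. The following is only a sketch (most fields omitted; the exact statuses propagated to the listener and load balancer depend on the health monitor results):

$ openstack loadbalancer status show tobiko_octavia_ovn_lb
{
    "loadbalancer": {
        "operating_status": "DEGRADED",
        "listeners": [
            {
                "operating_status": "DEGRADED",
                "pools": [
                    {
                        "operating_status": "DEGRADED",
                        "members": [
                            {"name": "fake_member", "operating_status": "ERROR"},
                            {"name": "tobiko_de3f2a06", "operating_status": "ONLINE"}
                        ]
                    }
                ]
            }
        ]
    }
}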
Description of problem:
Deploying Octavia with the ovn provider, I can create members with an invalid / invented IP address (no real servers have that address) and the LB shows that everything is OK with them: running `openstack loadbalancer status show <lb>` shows that those members have "provisioning_status": "ACTIVE" and "operating_status": "ONLINE".
An example: I deployed the following:
{
    "loadbalancer": {
        "id": "c50b7cb3-6b8f-434b-9a47-a10a27d0a9b5",
        "name": "tobiko_octavia_ovn_lb",
        "operating_status": "ONLINE",
        "provisioning_status": "ACTIVE",
        "listeners": [
            {
                "id": "87bafdda-0ac6-438f-8824-cb75f9e014df",
                "name": "tobiko_octavia_tcp_listener",
                "operating_status": "ONLINE",
                "provisioning_status": "ACTIVE",
                "pools": [
                    {
                        "id": "aa6ed64c-4d19-448b-969d-6cc686385162",
                        "name": "tobiko_octavia_tcp_pool",
                        "provisioning_status": "ACTIVE",
                        "operating_status": "ONLINE",
                        "health_monitor": {
                            "id": "cc72e7eb-722b-49be-b3d2-3857f880346d",
                            "name": "hm_ovn_provider",
                            "type": "TCP",
                            "provisioning_status": "ACTIVE",
                            "operating_status": "ONLINE"
                        },
                        "members": [
                            {
                                "id": "648b9d51-115a-4312-b92e-cc59af0d0401",
                                "name": "fake_member",
                                "operating_status": "ONLINE",
                                "provisioning_status": "ACTIVE",
                                "address": "10.100.0.204",
                                "protocol_port": 80
                            },
                            {
                                "id": "8dae11a2-e2d5-45f9-9e85-50f61fa07753",
                                "name": "tobiko_de3f2a06",
                                "operating_status": "ONLINE",
                                "provisioning_status": "ACTIVE",
                                "address": "10.0.64.34",
                                "protocol_port": 80
                            },
                            {
                                "id": "9b044180-71b4-4fa6-83df-4d0f99b4a3f7",
                                "name": "fake_member2",
                                "operating_status": "ONLINE",
                                "provisioning_status": "ACTIVE",
                                "address": "10.100.0.205",
                                "protocol_port": 80
                            },
                            {
                                "id": "fe9ce8ca-e6b7-4c5b-807c-8e295156df85",
                                "name": "tobiko_6c186a80",
                                "operating_status": "ONLINE",
                                "provisioning_status": "ACTIVE",
                                "address": "10.0.64.39",
                                "protocol_port": 80
                            }
                        ]
                    }
                ]
            }
        ]
    }
}
when the existing servers are the following:
+--------------------------------------+-----------------+--------+----------------------------------------------------------+----------------------------------------------------+------------+
| ID | Name | Status | Networks | Image | Flavor |
+--------------------------------------+-----------------+--------+----------------------------------------------------------+----------------------------------------------------+------------+
| 1e4a4464-4bbf-4107-94e4-974e87c31074 | tobiko_8941a208 | ACTIVE | private=10.0.64.34, fd47:e41c:f56e:0:f816:3eff:fe9f:67f4 | tobiko.openstack.stacks._ubuntu.UbuntuImageFixture | octavia_65 |
| 1a0de4d2-d9ea-4d60-85ff-018bcc00d285 | tobiko_44801dfe | ACTIVE | private=10.0.64.39, fd47:e41c:f56e:0:f816:3eff:fea2:7af9 | tobiko.openstack.stacks._ubuntu.UbuntuImageFixture | octavia_65 |
+--------------------------------------+-----------------+--------+----------------------------------------------------------+----------------------------------------------------+------------+
As an Octavia tester/user, it has happened to me a few times that I had a typo when I created a member (I wrote the IP address wrong), but with the amphora provider the LB went to DEGRADED status and the members to ERROR status. Without that feedback, it would have been much more difficult for me to understand what happened.
Version-Release number of selected component (if applicable):
RHOS-17.1-RHEL-9-20230824.n.1 with python-ovn-octavia-provider-1.0.3-1.20230506160960
How reproducible:
100%
Steps to Reproduce:
1. Deploy a LB with the ovn provider, a listener, a pool, real members (that point to real servers), and fake (invented) members (see the example commands after these steps)
2. Create a health monitor for the pool
3. Wait for a few minutes
4. Run `openstack loadbalancer status show <lb>`
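A minimal sketch of these steps with the OpenStack CLI (the resource names, the subnet, and the health monitor timings are illustrative assumptions; the member addresses are taken from the report above, and the ovn provider requires the SOURCE_IP_PORT algorithm):

$ openstack loadbalancer create --name lb1 --provider ovn --vip-subnet-id private-subnet
$ openstack loadbalancer listener create --name listener1 --protocol TCP --protocol-port 80 lb1
$ openstack loadbalancer pool create --name pool1 --listener listener1 --protocol TCP --lb-algorithm SOURCE_IP_PORT
$ # real member: a server actually listens at this address
$ openstack loadbalancer member create --name real_member --address 10.0.64.34 --protocol-port 80 pool1
$ # fake member: no server has this address
$ openstack loadbalancer member create --name fake_member --address 10.100.0.204 --protocol-port 80 pool1
$ openstack loadbalancer healthmonitor create --name hm1 --type TCP --delay 5 --timeout 3 --max-retries 3 pool1
$ # wait a few minutes, then check the statuses
$ openstack loadbalancer status show lb1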
Actual results:
All members show "operating_status": "ONLINE" and "provisioning_status": "ACTIVE".
Expected results:
The fake members should show ERROR in at least one of their statuses, and the LB and pool should be DEGRADED.
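For reference, an individual member's operating status can also be checked directly, for example (reusing the pool and member names from the output above):

$ openstack loadbalancer member show tobiko_octavia_tcp_pool fake_member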
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.
For information on the advisory (Red Hat OpenStack Platform 17.1.2 bug fix and enhancement advisory), and where to find the updated files, follow the link below.
If the solution does not work for you, open a new bug report.
https://access.redhat.com/errata/RHBA-2024:0209