This bug has been migrated to another issue tracking site. It has been closed here and may no longer be monitored.

If you would like to get updates for this issue, or to participate in it, you may do so at the Red Hat Issue Tracker.
Bug 2263550 - Multiple listener-pool-member on IPv6 LB getting second pool in ERROR
Summary: Multiple listener-pool-member on IPv6 LB getting second pool in ERROR
Keywords:
Status: CLOSED MIGRATED
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: python-networking-ovn
Version: 16.2 (Train)
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Fernando Royo
QA Contact: Eran Kuris
URL:
Whiteboard:
Depends On:
Blocks: 2263552
 
Reported: 2024-02-09 16:54 UTC by Fernando Royo
Modified: 2025-01-10 09:39 UTC
CC List: 5 users

Fixed In Version:
Doc Type: Known Issue
Doc Text:
Cause: When the Load-balancing service (octavia) is used with the OVN service provider and a second listener+pool+member is added to an IPv6 LB, the pool to which the member is attached remains in an ERROR state. Consequence: Traffic is not load-balanced to the members of the pool in the ERROR state. Workaround (if any): There is no workaround.
Clone Of:
: 2263552
Environment:
Last Closed: 2025-01-10 09:38:44 UTC
Target Upstream Version:
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Launchpad 2052628 0 None None None 2024-02-09 17:08:51 UTC
Red Hat Issue Tracker OSP-31416 0 None None None 2025-01-10 09:38:44 UTC
Red Hat Issue Tracker OSP-33381 0 None None None 2025-01-10 09:39:29 UTC

Description Fernando Royo 2024-02-09 16:54:33 UTC
Description of problem:
When an IPv6 LB is created using a bulk command, or when multiple listener-pool-member sets are added sequentially, adding a member to the second listener-pool fails and leaves that pool in an ERROR state.

The error in the traceback is:

2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver Traceback (most recent call last):
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver File "/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 1883, in member_create
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver self._add_member(member, ovn_lb, pool_key)
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver File "/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 1841, in _add_member
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver self._refresh_lb_vips(ovn_lb.uuid, external_ids)
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver File "/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 1051, in _refresh_lb_vips
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver vip_ips = self._frame_vip_ips(lb_external_ids)
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver File "/usr/lib/python3.6/site-packages/networking_ovn/octavia/ovn_driver.py", line 1039, in _frame_vip_ips
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver if netaddr.IPNetwork(lb_vip).version == 6:
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver File "/usr/lib/python3.6/site-packages/netaddr/ip/__init__.py", line 938, in __init__
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver raise AddrFormatError('invalid IPNetwork %s' % addr)
2024-02-07 12:40:56.166 13 ERROR networking_ovn.octavia.ovn_driver netaddr.core.AddrFormatError: invalid IPNetwork [fd2e:6f44:5dd8:c956::1a]

So apparently, the LB_VIP is enclosed in additional brackets [ ] when the member is added to the second listener-pool (see the sketch below).
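
For reference, the failure can be reproduced outside the driver with a few lines of Python. This is a minimal sketch based only on the traceback above: the bracketed VIP string is taken from the error message, and stripping the brackets is just an illustration of the kind of normalisation needed, not the actual networking_ovn fix.

import netaddr

# VIP value as it appears once the second listener-pool-member is added
# (enclosed in brackets), taken from the traceback above.
lb_vip = '[fd2e:6f44:5dd8:c956::1a]'

try:
    netaddr.IPNetwork(lb_vip)
except netaddr.core.AddrFormatError as exc:
    print(exc)  # invalid IPNetwork [fd2e:6f44:5dd8:c956::1a]

# Stripping the enclosing brackets before parsing avoids the error and
# IPNetwork correctly reports the address as IPv6.
bare_vip = lb_vip.strip('[]')
assert netaddr.IPNetwork(bare_vip).version == 6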


Version-Release number of selected component (if applicable):
RHOSP 16.2

How reproducible:


Steps to Reproduce:
1. Create an LB over an IPv6 VIP with more than one listener-pool-member (a sketch of the reproduction follows these steps)
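For illustration, a minimal openstacksdk sketch of the reproduction is below. The cloud name, subnet ID and member address are placeholders, and the equivalent openstack CLI commands work just as well; this is an assumption-laden illustration of the scenario, not the exact commands used when the bug was hit. In a real run, wait for the load balancer to return to ACTIVE after each call.

import openstack

# Placeholders: adjust the cloud name, IPv6 subnet ID and member address.
conn = openstack.connect(cloud='overcloud')

lb = conn.load_balancer.create_load_balancer(
    name='lb-ipv6', provider='ovn',
    vip_subnet_id='<ipv6-subnet-id>')

# Two listener/pool/member sets on the same IPv6 LB; the member added to
# the second pool is the one that ends up in ERROR.
# NOTE: each create below requires the LB to be ACTIVE again; waiting is
# omitted here for brevity.
for port in (80, 8080):
    listener = conn.load_balancer.create_listener(
        name='listener-%d' % port, load_balancer_id=lb.id,
        protocol='TCP', protocol_port=port)
    pool = conn.load_balancer.create_pool(
        name='pool-%d' % port, listener_id=listener.id,
        protocol='TCP', lb_algorithm='SOURCE_IP_PORT')
    conn.load_balancer.create_member(
        pool, address='fd2e:6f44:5dd8:c956::100', protocol_port=port)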

Actual results:
Second pool remains in an ERROR state

Expected results:
LB is created correctly and traffic is load-balanced to the members of both pools

