Bug 1582145 - Listener's "operating status" is not transitioning to ONLINE even when pool and members are configured for it.
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat OpenStack
Classification: Red Hat
Component: openstack-octavia
Version: 13.0 (Queens)
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: high
Target Milestone: ---
Target Release: 13.0 (Queens)
Assignee: Nir Magnezi
QA Contact: Alexander Stafeyev
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-05-24 10:47 UTC by Alexander Stafeyev
Modified: 2022-07-09 10:57 UTC
CC List: 7 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-03-14 13:33:12 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
OpenStack Storyboard 2002114 0 None None None 2018-05-28 11:06:37 UTC
Red Hat Issue Tracker OSP-17080 0 None None None 2022-07-09 10:57:26 UTC
Red Hat Product Errata RHSA-2019:0567 0 None None None 2019-03-14 13:33:19 UTC

Description Alexander Stafeyev 2018-05-24 10:47:41 UTC
Description of problem:
The listener's operating_status does not transition to ONLINE even after a pool and members are configured for it.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Create a load balancer and a listener (check the listener's operating_status), then create a pool and a member (a CLI sketch of these steps follows below).
2. Check the listener's operating_status again.
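
A minimal CLI sketch of the reproduction, assuming python-octaviaclient is available and waiting for the load balancer to go ACTIVE between steps; the subnet ID, member address, and resource names are placeholders, not values taken from this report:

$ openstack loadbalancer create --name lb1 --vip-subnet-id <private-subnet>
$ openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
$ openstack loadbalancer listener show listener1   # OFFLINE is expected here, no pool yet
$ openstack loadbalancer pool create --name pool1 --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP
$ openstack loadbalancer member create --subnet-id <private-subnet> --address 192.0.2.10 --protocol-port 80 pool1
$ openstack loadbalancer listener show listener1   # operating_status should now be ONLINE; on the affected build it stays OFFLINE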

Actual results:
The listener's operating_status remains OFFLINE.

Expected results:
The listener's operating_status should transition to ONLINE.

Additional info:
Found with the tempest test:
octavia_tempest_plugin.tests.api.v2.test_listener.ListenerAPITest.test_listener_create
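
For reference, that single test can be run with something like the following (a sketch, assuming the octavia-tempest-plugin is installed in a configured tempest workspace):

$ tempest run --regex octavia_tempest_plugin.tests.api.v2.test_listener.ListenerAPITest.test_listener_create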

Comment 1 Nir Magnezi 2018-05-28 13:13:32 UTC
We need to fix this and backport it to the OSP13 z-stream.
Testing shows that on upstream master (Ubuntu-based amphora), the operating_status is ONLINE.


$ openstack loadbalancer listener show c0b3a275-1827-44ca-a0e7-1e14478dd5c4
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| connection_limit          | -1                                   |
| created_at                | 2018-05-28T12:30:31                  |
| default_pool_id           | None                                 |
| default_tls_container_ref | None                                 |
| description               |                                      |
| id                        | c0b3a275-1827-44ca-a0e7-1e14478dd5c4 |
| insert_headers            | None                                 |
| l7policies                |                                      |
| loadbalancers             | c3c82ea7-3bbf-43d2-a8d5-aa97053bf8e0 |
| name                      | listener1                            |
| operating_status          | ONLINE                               |
| project_id                | 4bd6048798264d5c8b28442443ed35d1     |
| protocol                  | HTTP                                 |
| protocol_port             | 80                                   |
| provisioning_status       | ACTIVE                               |
| sni_container_refs        | []                                   |
| updated_at                | 2018-05-28T12:30:36                  |
+---------------------------+--------------------------------------+


Yet, when I used a RHEL-based amphora, I got operating_status OFFLINE:

$ openstack loadbalancer listener show 1e79e879-61cb-4c77-a323-083719794eea
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| connection_limit          | -1                                   |
| created_at                | 2018-05-27T09:46:00                  |
| default_pool_id           | None                                 |
| default_tls_container_ref | None                                 |
| description               |                                      |
| id                        | 1e79e879-61cb-4c77-a323-083719794eea |
| insert_headers            | None                                 |
| l7policies                |                                      |
| loadbalancers             | f4b1b7ef-e46f-40e7-94dc-6e84bedc9fc4 |
| name                      | listener1                            |
| operating_status          | OFFLINE                              |
| project_id                | e1cf0785eacb4ab29d047f2d5d5c2e33     |
| protocol                  | HTTP                                 |
| protocol_port             | 80                                   |
| provisioning_status       | ACTIVE                               |
| sni_container_refs        | []                                   |
| timeout_client_data       | 1234                                 |
| timeout_member_connect    | 5678                                 |
| timeout_member_data       | 1111                                 |
| timeout_tcp_inspect       | 0                                    |
| updated_at                | 2018-05-27T09:46:23                  |
+---------------------------+--------------------------------------+
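
As a side note, the whole status tree (the operating_status of the listener, pool, and members in one view) can be dumped with the status subcommand, assuming a python-octaviaclient version that ships it:

$ openstack loadbalancer status show f4b1b7ef-e46f-40e7-94dc-6e84bedc9fc4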

Comment 8 errata-xmlrpc 2019-03-14 13:33:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:0567

