Bug 1582145

Summary: Listener's "operating_status" does not transition to ONLINE even when a pool and members are configured for it.
Product: Red Hat OpenStack
Component: openstack-octavia
Version: 13.0 (Queens)
Target Release: 13.0 (Queens)
Status: CLOSED ERRATA
Reporter: Alexander Stafeyev <astafeye>
Assignee: Nir Magnezi <nmagnezi>
QA Contact: Alexander Stafeyev <astafeye>
Severity: high
Priority: medium
Keywords: TestOnly, Triaged, ZStream
CC: amuller, astafeye, cgoncalves, ihrachys, lmiccini, lpeer, majopela
Hardware: Unspecified
OS: Unspecified
Type: Bug
Last Closed: 2019-03-14 13:33:12 UTC

Description Alexander Stafeyev 2018-05-24 10:47:41 UTC
Description of problem:
The listener's operating_status does not transition to ONLINE.

Version-Release number of selected component (if applicable):


How reproducible:
100%

Steps to Reproduce:
1. Create a load balancer and a listener (check the listener's operating status), then create a pool and a member (see the command sketch below).
2. Check the listener's operating status again.
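
For reference, a minimal reproduction along these lines works (a sketch; the resource names and the <subnet-id>/<member-ip> placeholders are illustrative):

$ openstack loadbalancer create --name lb1 --vip-subnet-id <subnet-id>
$ openstack loadbalancer listener create --name listener1 --protocol HTTP --protocol-port 80 lb1
$ openstack loadbalancer listener show listener1 -c operating_status -f value
$ openstack loadbalancer pool create --name pool1 --listener listener1 --protocol HTTP --lb-algorithm ROUND_ROBIN
$ openstack loadbalancer member create --subnet-id <subnet-id> --address <member-ip> --protocol-port 80 pool1
$ openstack loadbalancer listener show listener1 -c operating_status -f value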

Actual results:
The listener's operating_status stays OFFLINE.

Expected results:
The listener's operating_status should transition to ONLINE.

Additional info:
Found with the tempest test:
octavia_tempest_plugin.tests.api.v2.test_listener.ListenerAPITest.test_listener_create
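
The test can be invoked directly with the tempest runner, e.g. (assuming octavia-tempest-plugin is installed in the tempest environment):

$ tempest run --regex octavia_tempest_plugin.tests.api.v2.test_listener.ListenerAPITest.test_listener_create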

Comment 1 Nir Magnezi 2018-05-28 13:13:32 UTC
We need to fix this and backport it to the OSP13 z-stream.
Testing shows that with upstream master (an Ubuntu-based amphora), the operating_status transitions to ONLINE:


$ openstack loadbalancer listener show c0b3a275-1827-44ca-a0e7-1e14478dd5c4
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| connection_limit          | -1                                   |
| created_at                | 2018-05-28T12:30:31                  |
| default_pool_id           | None                                 |
| default_tls_container_ref | None                                 |
| description               |                                      |
| id                        | c0b3a275-1827-44ca-a0e7-1e14478dd5c4 |
| insert_headers            | None                                 |
| l7policies                |                                      |
| loadbalancers             | c3c82ea7-3bbf-43d2-a8d5-aa97053bf8e0 |
| name                      | listener1                            |
| operating_status          | ONLINE                               |
| project_id                | 4bd6048798264d5c8b28442443ed35d1     |
| protocol                  | HTTP                                 |
| protocol_port             | 80                                   |
| provisioning_status       | ACTIVE                               |
| sni_container_refs        | []                                   |
| updated_at                | 2018-05-28T12:30:36                  |
+---------------------------+--------------------------------------+


Yet when I used a RHEL-based amphora, the operating_status remained OFFLINE:

$ openstack loadbalancer listener show 1e79e879-61cb-4c77-a323-083719794eea
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| connection_limit          | -1                                   |
| created_at                | 2018-05-27T09:46:00                  |
| default_pool_id           | None                                 |
| default_tls_container_ref | None                                 |
| description               |                                      |
| id                        | 1e79e879-61cb-4c77-a323-083719794eea |
| insert_headers            | None                                 |
| l7policies                |                                      |
| loadbalancers             | f4b1b7ef-e46f-40e7-94dc-6e84bedc9fc4 |
| name                      | listener1                            |
| operating_status          | OFFLINE                              |
| project_id                | e1cf0785eacb4ab29d047f2d5d5c2e33     |
| protocol                  | HTTP                                 |
| protocol_port             | 80                                   |
| provisioning_status       | ACTIVE                               |
| sni_container_refs        | []                                   |
| timeout_client_data       | 1234                                 |
| timeout_member_connect    | 5678                                 |
| timeout_member_data       | 1111                                 |
| timeout_tcp_inspect       | 0                                    |
| updated_at                | 2018-05-27T09:46:23                  |
+---------------------------+--------------------------------------+
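
For comparing the two amphora images, one can poll just the affected field (a sketch; the UUID is the RHEL-based listener above):

$ watch -n 5 'openstack loadbalancer listener show 1e79e879-61cb-4c77-a323-083719794eea -c operating_status -f value'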

Comment 8 errata-xmlrpc 2019-03-14 13:33:12 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2019:0567