Description of problem:
After an IPI Kuryr install of OCP on OSP, the *-kuryr-api-loadbalancer-pool load balancer pool reports a DEGRADED status:

$ openstack loadbalancer pool show ocpra-z5ng9-kuryr-api-loadbalancer-pool
+---------------------+-----------------------------------------+
| Field               | Value                                   |
+---------------------+-----------------------------------------+
| admin_state_up      | True                                    |
| created_at          | 2020-04-30T07:15:04                     |
| description         |                                         |
| healthmonitor_id    | b1e3c31c-f96f-4e9e-968f-b9a719c98ff4    |
| id                  | a0955a1f-5460-4421-8513-9db6f8ba4fce    |
| lb_algorithm        | ROUND_ROBIN                             |
| listeners           | 71fb6a88-005a-4620-b77b-e06fb388511f    |
| loadbalancers       | dc316505-2e3b-4dcc-b07b-0269c894017f    |
| members             | e4515b67-86ee-4528-913c-3d8abdd09bdd    |
|                     | 3c097ee3-b38c-48c9-868d-57d2489f9196    |
|                     | 115d33b7-8435-45e1-9538-b8b48b250efd    |
|                     | 112d026e-c15d-46cc-a41b-cd1ba9fe8edf    |
| name                | ocpra-z5ng9-kuryr-api-loadbalancer-pool |
| operating_status    | DEGRADED                                |
| project_id          | e04621aad22840e4915c2cb37c11301d        |
| protocol            | HTTPS                                   |
| provisioning_status | ACTIVE                                  |
| session_persistence | None                                    |
| updated_at          | 2020-04-30T07:40:59                     |
+---------------------+-----------------------------------------+

And the *-bootstrap-port member of that pool reports an ERROR status:

$ openstack loadbalancer member list ocpra-z5ng9-kuryr-api-loadbalancer-pool
+--------------------------------------+----------------------------+----------------------------------+---------------------+-----------+---------------+------------------+--------+
| id                                   | name                       | project_id                       | provisioning_status | address   | protocol_port | operating_status | weight |
+--------------------------------------+----------------------------+----------------------------------+---------------------+-----------+---------------+------------------+--------+
| e4515b67-86ee-4528-913c-3d8abdd09bdd | ocpra-z5ng9-master-port-1  | e04621aad22840e4915c2cb37c11301d | ACTIVE              | 10.0.0.17 | 6443          | ONLINE           | 100    |
| 3c097ee3-b38c-48c9-868d-57d2489f9196 | ocpra-z5ng9-bootstrap-port | e04621aad22840e4915c2cb37c11301d | ACTIVE              | 10.0.0.18 | 6443          | ERROR            | 1      |
| 115d33b7-8435-45e1-9538-b8b48b250efd | ocpra-z5ng9-master-port-0  | e04621aad22840e4915c2cb37c11301d | ACTIVE              | 10.0.0.23 | 6443          | ONLINE           | 100    |
| 112d026e-c15d-46cc-a41b-cd1ba9fe8edf | ocpra-z5ng9-master-port-2  | e04621aad22840e4915c2cb37c11301d | ACTIVE              | 10.0.0.37 | 6443          | ONLINE           | 100    |
+--------------------------------------+----------------------------+----------------------------------+---------------------+-----------+---------------+------------------+--------+

The API VIP functions correctly and traffic is properly routed to the production cluster. However, the ERROR and DEGRADED statuses persist.

Version-Release number of the following components:
$ ./openshift-install version
./openshift-install 4.4.0-rc.13
built from commit 78b817ceb7657f81176bbe182cc6efc73004c841
release image quay.io/openshift-release-dev/ocp-release@sha256:a3fe9de9b338abc80e2afafdf38dff0c2de3efb61c6896e8e16495f59e717f53

$ cat /etc/rhosp-release
Red Hat OpenStack Platform release 13.0.11 (Queens)

How reproducible:

Steps to Reproduce:
1. Install OSP with Octavia
2. Install OCP onto OSP with networkType: Kuryr
3.
View Loadbalancer status in OpenStack

Actual results:
$ openstack loadbalancer pool show ocpra-z5ng9-kuryr-api-loadbalancer-pool
+---------------------+-----------------------------------------+
| Field               | Value                                   |
+---------------------+-----------------------------------------+
| admin_state_up      | True                                    |
| created_at          | 2020-04-30T07:15:04                     |
| description         |                                         |
| healthmonitor_id    | b1e3c31c-f96f-4e9e-968f-b9a719c98ff4    |
| id                  | a0955a1f-5460-4421-8513-9db6f8ba4fce    |
| lb_algorithm        | ROUND_ROBIN                             |
| listeners           | 71fb6a88-005a-4620-b77b-e06fb388511f    |
| loadbalancers       | dc316505-2e3b-4dcc-b07b-0269c894017f    |
| members             | e4515b67-86ee-4528-913c-3d8abdd09bdd    |
|                     | 3c097ee3-b38c-48c9-868d-57d2489f9196    |
|                     | 115d33b7-8435-45e1-9538-b8b48b250efd    |
|                     | 112d026e-c15d-46cc-a41b-cd1ba9fe8edf    |
| name                | ocpra-z5ng9-kuryr-api-loadbalancer-pool |
| operating_status    | DEGRADED                                |
| project_id          | e04621aad22840e4915c2cb37c11301d        |
| protocol            | HTTPS                                   |
| provisioning_status | ACTIVE                                  |
| session_persistence | None                                    |
| updated_at          | 2020-04-30T07:40:59                     |
+---------------------+-----------------------------------------+

$ openstack loadbalancer member list ocpra-z5ng9-kuryr-api-loadbalancer-pool
+--------------------------------------+----------------------------+----------------------------------+---------------------+-----------+---------------+------------------+--------+
| id                                   | name                       | project_id                       | provisioning_status | address   | protocol_port | operating_status | weight |
+--------------------------------------+----------------------------+----------------------------------+---------------------+-----------+---------------+------------------+--------+
| e4515b67-86ee-4528-913c-3d8abdd09bdd | ocpra-z5ng9-master-port-1  | e04621aad22840e4915c2cb37c11301d | ACTIVE              | 10.0.0.17 | 6443          | ONLINE           | 100    |
| 3c097ee3-b38c-48c9-868d-57d2489f9196 | ocpra-z5ng9-bootstrap-port | e04621aad22840e4915c2cb37c11301d | ACTIVE              | 10.0.0.18 | 6443          | ERROR            | 1      |
| 115d33b7-8435-45e1-9538-b8b48b250efd | ocpra-z5ng9-master-port-0  | e04621aad22840e4915c2cb37c11301d | ACTIVE              | 10.0.0.23 | 6443          | ONLINE           | 100    |
| 112d026e-c15d-46cc-a41b-cd1ba9fe8edf | ocpra-z5ng9-master-port-2  | e04621aad22840e4915c2cb37c11301d | ACTIVE              | 10.0.0.37 | 6443          | ONLINE           | 100    |
+--------------------------------------+----------------------------+----------------------------------+---------------------+-----------+---------------+------------------+--------+

$ openstack loadbalancer member show ocpra-z5ng9-kuryr-api-loadbalancer-pool 3c097ee3-b38c-48c9-868d-57d2489f9196
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| address             | 10.0.0.18                            |
| admin_state_up      | True                                 |
| created_at          | 2020-04-30T07:15:12                  |
| id                  | 3c097ee3-b38c-48c9-868d-57d2489f9196 |
| name                | ocpra-z5ng9-bootstrap-port           |
| operating_status    | ERROR                                |
| project_id          | e04621aad22840e4915c2cb37c11301d     |
| protocol_port       | 6443                                 |
| provisioning_status | ACTIVE                               |
| subnet_id           | 28b09695-dfc9-47e3-9b13-f515063e8272 |
| updated_at          | 2020-04-30T07:40:59                  |
| weight              | 1                                    |
| monitor_port        | None                                 |
| monitor_address     | None                                 |
+---------------------+--------------------------------------+

Expected results:
The load balancer pool should not report a DEGRADED status when everything is functioning correctly.

Additional info:
Since everything is working, looking deeper shows the issue is due to the fact that the *-bootstrap-port member of the *-kuryr-api-loadbalancer-pool pool remains after an install even though the bootstrap instance has been terminated and the cluster is running on the masters. Since the instance is gone, the member is no longer needed.
Manually removing it returns the operating_status of the pool to ONLINE:

$ openstack loadbalancer member delete ocpra-z5ng9-kuryr-api-loadbalancer-pool 3c097ee3-b38c-48c9-868d-57d2489f9196

$ openstack loadbalancer member list ocpra-z5ng9-kuryr-api-loadbalancer-pool
+--------------------------------------+---------------------------+----------------------------------+---------------------+-----------+---------------+------------------+--------+
| id                                   | name                      | project_id                       | provisioning_status | address   | protocol_port | operating_status | weight |
+--------------------------------------+---------------------------+----------------------------------+---------------------+-----------+---------------+------------------+--------+
| e4515b67-86ee-4528-913c-3d8abdd09bdd | ocpra-z5ng9-master-port-1 | e04621aad22840e4915c2cb37c11301d | ACTIVE              | 10.0.0.17 | 6443          | ONLINE           | 100    |
| 115d33b7-8435-45e1-9538-b8b48b250efd | ocpra-z5ng9-master-port-0 | e04621aad22840e4915c2cb37c11301d | ACTIVE              | 10.0.0.23 | 6443          | ONLINE           | 100    |
| 112d026e-c15d-46cc-a41b-cd1ba9fe8edf | ocpra-z5ng9-master-port-2 | e04621aad22840e4915c2cb37c11301d | ACTIVE              | 10.0.0.37 | 6443          | ONLINE           | 100    |
+--------------------------------------+---------------------------+----------------------------------+---------------------+-----------+---------------+------------------+--------+

$ openstack loadbalancer pool show ocpra-z5ng9-kuryr-api-loadbalancer-pool
+---------------------+-----------------------------------------+
| Field               | Value                                   |
+---------------------+-----------------------------------------+
| admin_state_up      | True                                    |
| created_at          | 2020-04-30T07:15:04                     |
| description         |                                         |
| healthmonitor_id    | b1e3c31c-f96f-4e9e-968f-b9a719c98ff4    |
| id                  | a0955a1f-5460-4421-8513-9db6f8ba4fce    |
| lb_algorithm        | ROUND_ROBIN                             |
| listeners           | 71fb6a88-005a-4620-b77b-e06fb388511f    |
| loadbalancers       | dc316505-2e3b-4dcc-b07b-0269c894017f    |
| members             | e4515b67-86ee-4528-913c-3d8abdd09bdd    |
|                     | 115d33b7-8435-45e1-9538-b8b48b250efd    |
|                     | 112d026e-c15d-46cc-a41b-cd1ba9fe8edf    |
| name                | ocpra-z5ng9-kuryr-api-loadbalancer-pool |
| operating_status    | ONLINE                                  |
| project_id          | e04621aad22840e4915c2cb37c11301d        |
| protocol            | HTTPS                                   |
| provisioning_status | ACTIVE                                  |
| session_persistence | None                                    |
| updated_at          | 2020-04-30T09:42:26                     |
+---------------------+-----------------------------------------+

Leaving the member in place makes the install appear to have left an error when there is none. It would be great if the installer could remove the member once the bootstrap instance is gone and the cluster is running on the masters.
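Until the installer does this automatically, the manual cleanup above can be scripted. The sketch below is a hypothetical helper (function and variable names are mine, not from the installer): it takes the JSON form of the member list (`openstack loadbalancer member list <pool> -f json`, assuming the keys match the lowercase column headers shown above), picks out members that are in ERROR state and named like a bootstrap port, and prints the corresponding `member delete` commands rather than running anything:

```python
import json

def stale_bootstrap_members(member_list_json: str) -> list:
    """Return members that look like leftover bootstrap entries:
    operating_status ERROR and a name ending in '-bootstrap-port'."""
    members = json.loads(member_list_json)
    return [
        m for m in members
        if m.get("operating_status") == "ERROR"
        and m.get("name", "").endswith("-bootstrap-port")
    ]

# Sample data shaped like `openstack loadbalancer member list -f json`
# output, using the IDs from this report.
sample = json.dumps([
    {"id": "e4515b67-86ee-4528-913c-3d8abdd09bdd",
     "name": "ocpra-z5ng9-master-port-1", "operating_status": "ONLINE"},
    {"id": "3c097ee3-b38c-48c9-868d-57d2489f9196",
     "name": "ocpra-z5ng9-bootstrap-port", "operating_status": "ERROR"},
])

for m in stale_bootstrap_members(sample):
    # Print the cleanup command for review instead of executing it.
    print("openstack loadbalancer member delete "
          "ocpra-z5ng9-kuryr-api-loadbalancer-pool " + m["id"])
```

Matching on both the ERROR status and the `-bootstrap-port` name suffix avoids deleting a master member that is merely flapping.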
Verified on:
openshift_puddle: 4.5.0-0.nightly-2020-05-08-015855
core_puddle: RHOS_TRUNK-16.0-RHEL-8-20200506.n.2

The bootstrap port member was deleted after the bootstrap server was destroyed:

(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer member list ostest-6hh74-kuryr-api-loadbalancer-pool
+--------------------------------------+----------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+
| id                                   | name                       | project_id                       | provisioning_status | address      | protocol_port | operating_status | weight |
+--------------------------------------+----------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+
| 2f69539e-7f9b-4eb4-87f3-bb10faf147b1 | ostest-6hh74-master-port-0 | bd75aa0bb30d44b08428ec24d08600fb | ACTIVE              | 10.196.2.121 | 6443          | ONLINE           | 100    |
| 30bec695-52e0-4724-89c1-4649e33fb324 | ostest-6hh74-master-port-2 | bd75aa0bb30d44b08428ec24d08600fb | ACTIVE              | 10.196.2.73  | 6443          | ONLINE           | 100    |
| 5b7b8dec-b815-4e35-b110-081be1f07265 | ostest-6hh74-master-port-1 | bd75aa0bb30d44b08428ec24d08600fb | ACTIVE              | 10.196.2.29  | 6443          | ONLINE           | 100    |
+--------------------------------------+----------------------------+----------------------------------+---------------------+--------------+---------------+------------------+--------+

So now the loadbalancer pool shows operating_status=ONLINE:

(overcloud) [stack@undercloud-0 ~]$ openstack loadbalancer pool show ostest-6hh74-kuryr-api-loadbalancer-pool
+----------------------+------------------------------------------+
| Field                | Value                                    |
+----------------------+------------------------------------------+
| admin_state_up       | True                                     |
| created_at           | 2020-05-08T05:55:37                      |
| description          |                                          |
| healthmonitor_id     | d6186d83-51bf-4b4b-b810-118cff52fdd4     |
| id                   | 0d908d6d-f6d6-4db9-b938-c6344aa42f56     |
| lb_algorithm         | ROUND_ROBIN                              |
| listeners            | 03f2d413-df2e-4cd9-8e81-ecfa23ca9fb2     |
| loadbalancers        | c54041a4-2760-4184-b05d-7a6b367c1d3e     |
| members              | 2f69539e-7f9b-4eb4-87f3-bb10faf147b1     |
|                      | 5b7b8dec-b815-4e35-b110-081be1f07265     |
|                      | 30bec695-52e0-4724-89c1-4649e33fb324     |
| name                 | ostest-6hh74-kuryr-api-loadbalancer-pool |
| operating_status     | ONLINE                                   |
| project_id           | bd75aa0bb30d44b08428ec24d08600fb         |
| protocol             | HTTPS                                    |
| provisioning_status  | ACTIVE                                   |
| session_persistence  | None                                     |
| updated_at           | 2020-05-08T08:35:09                      |
| tls_container_ref    | None                                     |
| ca_tls_container_ref | None                                     |
| crl_container_ref    | None                                     |
| tls_enabled          | False                                    |
+----------------------+------------------------------------------+
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2020:2409