
Bug 1504122

Summary: [RFE] Octavia Load balancing for OpenShift APIs when provisioned by OpenStack
Product: OpenShift Container Platform
Component: RFE
Version: unspecified
Target Release: 3.10.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED CURRENTRELEASE
Severity: high
Priority: high
Keywords: Triaged
Reporter: Antoni Segura Puimedon <asegurap>
Assignee: Tomas Sedovic <tsedovic>
QA Contact: Jon Uriarte <juriarte>
CC: aos-bugs, asimonel, jokerman, juriarte, lpeer, mmccomas, myllynen, tsedovic, tzumainn
Type: Bug
Last Closed: 2018-12-20 21:41:53 UTC
Bug Depends On: 1503667

Description Antoni Segura Puimedon 2017-10-19 14:06:33 UTC
Description of problem:

OpenStack has Octavia as its native LBaaS service. This feature request is about leveraging the native load balancer to provide access to the OpenShift API masters. In order to do that, the OpenShift Ansible templates need to create (see the sketch after this list):
- A load balancer (LB)
- A listener
- A pool
- A pool member for each master
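
For reference, a minimal sketch of the equivalent Octavia CLI calls (the names, subnet, and member address are placeholders; the playbooks create these entities through Ansible rather than by hand):

    # Load balancer fronting the master API
    openstack loadbalancer create --name api-lb --vip-subnet-id <master-subnet>
    # Listener on the API port
    openstack loadbalancer listener create --name api-listener \
      --protocol HTTPS --protocol-port 443 api-lb
    # Pool behind the listener
    openstack loadbalancer pool create --name api-pool \
      --listener api-listener --protocol HTTPS --lb-algorithm ROUND_ROBIN
    # One member per master node, e.g. for a master at 192.168.99.9:
    openstack loadbalancer member create --subnet-id <master-subnet> \
      --address 192.168.99.9 --protocol-port 443 api-pool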

Steps to Reproduce:
1. Deploy OpenShift on OpenStack
2. neutron lbaas-loadbalancer-list
3. neutron lbaas-listener-list
4. neutron lbaas-pool-list
5. curl the OpenShift API server VIP (for example, as shown below)
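
For example, probing the standard Kubernetes `/healthz` endpoint (the VIP is a placeholder; `-k` skips certificate verification, since the VIP may not be in the master certificate):

    curl -k https://<API-VIP>:443/healthz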

Actual results:
No Neutron entities exist for load balancing the OpenShift master service.

Expected results:
There's a load balancer, a listener, and a pool for the master service, as well as a pool member for each master node.

Comment 2 Tomas Sedovic 2018-05-04 16:45:41 UTC
How to test:

Prerequisites:

1. Tenant access to an OpenStack cloud with Octavia enabled


Steps:
1. Configure the inventory as described in: https://bugzilla.redhat.com/show_bug.cgi?id=1503667#c2
2. Add `openshift_openstack_use_lbaas_load_balancer: true` to your inventory/group_vars/all.yml (one way to do this is sketched after these steps)
3. Run the provision_install playbook
   * The playbook will print out `openshift_openstack_public_api_ip` at the end
   * Note the IP address
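
One way to apply step 2 from a shell (a sketch; assumes the inventory directory sits in the current working directory):

    echo 'openshift_openstack_use_lbaas_load_balancer: true' >> inventory/group_vars/all.yml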


Validation:
1. The playbook must finish without any errors
2. The `api_lb` load balancer was created: `openstack loadbalancer list`
3. The `openshift_openstack_public_api_ip` is NOT an IP address of any of the servers in `openstack server list`; instead, it corresponds to a floating IP address attached to a port of the load balancer (a shell check is sketched after this list)
4. `oc login <openshift_openstack_public_api_ip>` succeeds
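
A quick shell check for points 2 and 3 (a sketch; `<public-api-ip>` stands for the address the playbook printed, and the deployed LB name ends in `api-lb` in this setup):

    # The load balancer should exist
    openstack loadbalancer list | grep api-lb
    # The API IP should NOT belong to any server...
    openstack server list -f value -c Networks | grep <public-api-ip> && echo "unexpected: this is a server IP"
    # ...but should appear as a floating IP
    openstack floating ip list | grep <public-api-ip>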

Comment 3 Jon Uriarte 2018-06-25 12:24:56 UTC
Verified in openshift-ansible-3.10.0-0.67.0 over OSP 13 2018-05-23.1 puddle with Octavia.

Verification steps:
1. Deploy OpenStack (OSP13) with Octavia
2. Deploy an Ansible-host and a DNS server on the overcloud
3. Get OCP openshift-ansible downstream rpm
4. Configure OSP (all.yml) and OCP (OSEv3.yml) inventory files
   - Set 'openshift_openstack_use_lbaas_load_balancer: true' in inventory/group_vars/all.yml
5. Run from the Ansible-host:
ansible-playbook --user openshift -i /usr/share/ansible/openshift-ansible/playbooks/openstack/inventory.py -i inventory /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/prerequisites.yml

ansible-playbook --user openshift -i /usr/share/ansible/openshift-ansible/playbooks/openstack/inventory.py -i inventory /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/provision.yml

ansible-playbook --user openshift -i /usr/share/ansible/openshift-ansible/playbooks/openstack/inventory.py -i inventory red-hat-ca.yml

ansible-playbook --user openshift -i /usr/share/ansible/openshift-ansible/playbooks/openstack/inventory.py -i inventory /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/repos.yml

ansible-playbook --user openshift -i /usr/share/ansible/openshift-ansible/playbooks/openstack/inventory.py -i inventory /usr/share/ansible/openshift-ansible/playbooks/openstack/openshift-cluster/install.yml

6. Check the installer finishes without errors, and note the `openshift_openstack_public_api_ip` at the end of the playbook print-out

TASK [Print the API / UI Public IP Address] ***********************************************************************************************************************************************************************
ok: [localhost] => {
    "openshift_openstack_public_api_ip": "172.20.0.219"
}

7. Check the VMs deployed in the overcloud
(shiftstack) [cloud-user@ansible-host ~]$ openstack server list
+--------------------------------------+------------------------------------+--------+-------------------------------------------------------------------------+--------+-----------+
| ID                                   | Name                               | Status | Networks                                                                | Image  | Flavor    |
+--------------------------------------+------------------------------------+--------+-------------------------------------------------------------------------+--------+-----------+
| 13a916fa-3648-4dd6-a67c-1615fdfb2256 | infra-node-0.openshift.example.com | ACTIVE | openshift-ansible-openshift.example.com-net=192.168.99.6, 172.20.0.222  | rhel75 | m1.node   |
| eeb506d4-67d5-4514-a8b0-f204b9927fad | master-0.openshift.example.com     | ACTIVE | openshift-ansible-openshift.example.com-net=192.168.99.9, 172.20.0.237  | rhel75 | m1.master |
| 26bcf59f-c511-4299-b593-2b88bb3edeec | app-node-1.openshift.example.com   | ACTIVE | openshift-ansible-openshift.example.com-net=192.168.99.14, 172.20.0.235 | rhel75 | m1.node   |
| baf06e87-14e4-4a89-b9ff-055b7781e3e8 | app-node-0.openshift.example.com   | ACTIVE | openshift-ansible-openshift.example.com-net=192.168.99.8, 172.20.0.223  | rhel75 | m1.node   |
+--------------------------------------+------------------------------------+--------+-------------------------------------------------------------------------+--------+-----------+

8. Check the `api_lb` load balancer was created (`openstack loadbalancer list`)
(shiftstack) [cloud-user@ansible-host ~]$ openstack loadbalancer list
+--------------------------------------+------------------------------------------------+----------------------------------+--------------+---------------------+----------+
| id                                   | name                                           | project_id                       | vip_address  | provisioning_status | provider |
+--------------------------------------+------------------------------------------------+----------------------------------+--------------+---------------------+----------+
| 2dbab69d-3e70-4e87-9924-e3de7281da79 | openshift-cluster-router_lb-aflibr67snng       | a02185177ac246529e69bb252f021683 | 192.168.99.7 | ACTIVE              | octavia  |
| b33ee86b-0c04-4f20-8361-23cfdd7d5c56 | openshift-ansible-openshift.example.com-api-lb | a02185177ac246529e69bb252f021683 | 172.30.0.1   | ACTIVE              | octavia  |
+--------------------------------------+------------------------------------------------+----------------------------------+--------------+---------------------+----------+

9. Check the `openshift_openstack_public_api_ip` is NOT an IP address of any of the servers in `openstack server list`, but corresponds to a floating IP address attached to a port of the load balancer
(shiftstack) [cloud-user@ansible-host ~]$ openstack floating ip list | grep 172.30.0.1
| ff2690f6-4745-4ad0-b4bb-e0c563480290 | 172.20.0.219        | 172.30.0.1       | 13fb29a4-b24e-43f9-a680-5f0f72cc3d30 | dd5a700a-a0bf-4e18-b6db-a59f4063f7b4 | a02185177ac246529e69bb252f021683 |

The LB's fixed IP is 172.30.0.1 and its floating IP is 172.20.0.219. The floating IP is the one shown in the playbook output.

10. Login from the Ansible-host to the deployed OpenShift (`oc login <openshift_openstack_public_api_ip>`):
(shiftstack) [cloud-user@ansible-host ~]$ oc login 172.20.0.219
The server is using a certificate that does not match its hostname: x509: certificate is valid for 172.20.0.237, 172.30.0.1, 192.168.99.9, not 172.20.0.219
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y

Authentication required for https://172.20.0.219:443 (openshift)
Username: <kerberos id>
Password: <kerberos password>
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>

(shiftstack) [cloud-user@ansible-host ~]$