Bug 1503667 - [RFE] Integrate Openshift-on-OpenStack Heat templates into Openshift-Ansible
Summary: [RFE] Integrate Openshift-on-OpenStack Heat templates into Openshift-Ansible
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: RFE
Version: unspecified
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.10.0
Assignee: Tomas Sedovic
QA Contact: Jon Uriarte
URL:
Whiteboard: DFG:OpenShiftonOpenStack
Depends On:
Blocks: 1503708 1504122
 
Reported: 2017-10-18 14:26 UTC by Tzu-Mainn Chen
Modified: 2018-12-20 21:41 UTC
CC: 5 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
Feature: Allows openshift-ansible to automate an OpenShift deployment on top of an existing OpenStack cloud. It creates the necessary resources (servers, networks, storage, etc.) by talking to the OpenStack APIs and prepares them for the OpenShift installation.
Reason: openshift-ansible has recently started adding direct support for various cloud providers (AWS, GCE, etc.); this adds OpenStack to the mix.
Result:
Clone Of:
Environment:
Last Closed: 2018-12-20 21:41:26 UTC
Target Upstream Version:
Embargoed:




Links
GitHub: openshift/openshift-ansible pull 6039 (last updated 2018-01-08 11:05:09 UTC)

Description Tzu-Mainn Chen 2017-10-18 14:26:18 UTC
Description of problem:

The current OpenShift-on-OpenStack installation procedure is Heat-driven. We'd like to integrate it with openshift-ansible for a more native installation procedure.

Expected results:

* Code ported to the openshift-ansible installer
* Ability to deploy an OCP-on-OSP environment equivalent to the one produced by the Heat templates.

Comment 1 Tzu-Mainn Chen 2018-02-16 21:17:40 UTC
Steps to Reproduce:

1. Configure the inventory as documented in https://github.com/openshift/openshift-ansible/blob/master/playbooks/openstack/advanced-configuration.md
2. Run the playbooks as documented in https://github.com/openshift/openshift-ansible/blob/master/playbooks/openstack/README.md

Comment 2 Tomas Sedovic 2018-05-04 13:12:04 UTC
The advanced-configuration.md link is now dead; the README is a good starting point:

https://github.com/openshift/openshift-ansible/blob/master/playbooks/openstack/README.md

Steps to install

1. Have an OpenStack (preferably OSP 13) deployment ready with tenant credentials
2. Source the credentials (usually from a file called keystonerc or overcloudrc) into your shell environment
3. Copy the inventory from openshift-ansible/playbooks/openstack/sample-inventory
4. Add your OpenStack configuration in the inventory/group_vars/all.yml file
5. Add your OpenShift configuration in the inventory/group_vars/OSEv3.yml file (an illustrative snippet for both files follows these steps)
6. Install OpenShift by running the provision_install playbook:

ansible-playbook -i openshift-ansible/playbooks/openstack/inventory.py -i inventory openshift-ansible/playbooks/openstack/openshift-cluster/provision_install.yml
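
For reference, a minimal inventory could look like the sketch below. The variable names come from the sample inventory shipped with the playbooks, but the values are placeholders for illustration only; adjust them to your cloud and desired cluster size.

# inventory/group_vars/all.yml (illustrative values)
openshift_openstack_keypair_name: openshift
openshift_openstack_external_network_name: public
openshift_openstack_default_image_name: rhel75
openshift_openstack_default_flavor: m1.node
openshift_openstack_num_masters: 1
openshift_openstack_num_infra: 1
openshift_openstack_num_nodes: 2
openshift_openstack_dns_nameservers: ["10.0.0.2"]

# inventory/group_vars/OSEv3.yml (illustrative values)
openshift_deployment_type: openshift-enterprise
openshift_master_default_subdomain: apps.openshift.example.com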


Validate:
1. Verify that the install succeeds without any errors
2. Verify that new VMs were created in the OpenStack tenant by running `openstack server list`. There should be at least 1 master, 1 infra and 1 app node
3. Verify that you can log in to the OpenShift cluster by running `oc login <master ip>` (a short command sketch follows)
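
A quick post-install sanity check might look like the following; the master hostname is a placeholder, and listing nodes requires cluster-admin privileges:

$ openstack server list                            # at least 1 master, 1 infra and 1 app node, all ACTIVE
$ oc login master-0.openshift.example.com:8443
$ oc get nodes                                     # as a cluster admin: every node should report Ready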

Comment 3 Jon Uriarte 2018-06-06 13:40:05 UTC
Verified in openshift-ansible-3.10.0-0.58.0.git.0.d8f6377.el7.noarch.

Verification steps:

1. Deploy OSP 13 with Octavia in a hybrid environment (the compute node must be bare metal)
2. Deploy a DNS server and the Ansible host in the overcloud
3. Download OCP rpm and configure:
   - OpenStack (inventory/group_vars/all.yml)
       . Configure Kuryr SDN
   - OpenShift (inventory/group_vars/OSEv3.yml)
       . Configure the Red Hat LDAP identity provider
4. Install OpenShift by running the playbooks for OpenStack (deployed 1 master, 1 infra and 2 app nodes) and verify the installer succeeds without any errors
5. Verify that new VMs for the OpenShift nodes were created in the OpenStack tenant:
$ openstack server list
+--------------------------------------+------------------------------------+--------+-------------------------------------------------------------------------+---------+-----------+
| ID                                   | Name                               | Status | Networks                                                                | Image   | Flavor    |
+--------------------------------------+------------------------------------+--------+-------------------------------------------------------------------------+---------+-----------+
| 96b76b8e-c15e-4ef2-86de-2df51ec5299e | master-0.openshift.example.com     | ACTIVE | openshift-ansible-openshift.example.com-net=192.168.99.8, 172.20.0.228  | rhel75  | m1.master |
| eab99b8c-cd89-4d81-903d-31c9082b4607 | app-node-0.openshift.example.com   | ACTIVE | openshift-ansible-openshift.example.com-net=192.168.99.14, 172.20.0.214 | rhel75  | m1.node   |
| 00c5eea2-8bf4-492b-96ef-40b6b9001d83 | app-node-1.openshift.example.com   | ACTIVE | openshift-ansible-openshift.example.com-net=192.168.99.6, 172.20.0.222  | rhel75  | m1.node   |
| 98e2b8b5-e851-4499-a366-89b8bae070f6 | infra-node-0.openshift.example.com | ACTIVE | openshift-ansible-openshift.example.com-net=192.168.99.7, 172.20.0.223  | rhel75  | m1.node   |
| 19ea6449-8fe3-41b7-bff8-ce973357ccfe | openshift-dns                      | ACTIVE | openshift-dns=192.168.23.3, 172.20.0.218                                | centos7 | m1.small  |
| 490056e5-a0b0-4af8-8cb2-f7b7321dd604 | ansible-host                       | ACTIVE | ansible-host=172.16.0.6, 172.20.0.212                                   | rhel75  | m1.small  |
+--------------------------------------+------------------------------------+--------+-------------------------------------------------------------------------+---------+-----------+

6. Verify from the master node that the Kuryr controller and CNI pods are ready and running:
  [openshift@master-0 ~]$ oc get pod -n openshift-infra
  NAME                                READY     STATUS    RESTARTS   AGE
  bootstrap-autoapprover-0            1/1       Running   0          16h
  kuryr-cni-ds-bcvrp                  1/1       Running   0          16h
  kuryr-cni-ds-hnqw2                  1/1       Running   0          16h
  kuryr-cni-ds-jvn5x                  1/1       Running   0          16h
  kuryr-cni-ds-kjxnv                  1/1       Running   0          16h
  kuryr-controller-65c98f7444-vv5l8   1/1       Running   0          16h

7. Log in to the OpenShift cluster:
$ oc login master-0.openshift.example.com:8443
(enter LDAP user credentials)

8. Create a new project/deployment/service and test its functionality:
$ oc new-project test
$ oc run --image kuryr/demo demo
$ oc scale dc/demo --replicas=2
$ oc get pods -o wide
NAME           READY     STATUS    RESTARTS   AGE       IP           NODE
demo-1-74h2k   1/1       Running   0          36s       10.11.0.8    app-node-0.openshift.example.com
demo-1-lhbkn   1/1       Running   0          24s       10.11.0.12   app-node-1.openshift.example.com

$ curl 10.11.0.8:8080
demo-1-74h2k: HELLO! I AM ALIVE!!!

$ curl 10.11.0.12:8080                                                                                                                                                                   
demo-1-lhbkn: HELLO! I AM ALIVE!!!

$ oc expose dc/demo --port 80 --target-port 8080 --type LoadBalancer

$ oc get svc
NAME      TYPE           CLUSTER-IP      EXTERNAL-IP                     PORT(S)        AGE
demo      LoadBalancer   172.30.208.50   172.29.195.107,172.29.195.107   80:31750/TCP   9s

$ curl 172.30.208.50
demo-1-74h2k: HELLO! I AM ALIVE!!!

$ curl 172.30.208.50
demo-1-lhbkn: HELLO! I AM ALIVE!!!

$ openstack loadbalancer list
+--------------------------------------+------------------------------------------------+----------------------------------+----------------+---------------------+----------+
| id                                   | name                                           | project_id                       | vip_address    | provisioning_status | provider |
+--------------------------------------+------------------------------------------------+----------------------------------+----------------+---------------------+----------+
| c0be5523-6e6a-4d5f-99ff-6dcca3c28cf5 | openshift-ansible-openshift.example.com-api-lb | e0a285d3e26f42d4b8945f0f21aaf107 | 172.30.0.1     | ACTIVE              | octavia  |
| a2ba4c57-4d25-41a2-910a-ccec029e0883 | default/router                                 | e0a285d3e26f42d4b8945f0f21aaf107 | 172.30.240.127 | ACTIVE              | octavia  |
| 16b4147b-6a19-4669-9ddb-3e9d7a363fb4 | test/demo                                      | e0a285d3e26f42d4b8945f0f21aaf107 | 172.30.208.50  | ACTIVE              | octavia  |
+--------------------------------------+------------------------------------------------+----------------------------------+----------------+---------------------+----------+

$ openstack floating ip list
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
| ID                                   | Floating IP Address | Fixed IP Address | Port                                 | Floating Network                     | Project                          |
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+
...
| 93477ad1-0276-4e41-86ad-b03dce7d6df1 | 172.20.0.221        | 172.30.208.50    | 3b7f65eb-a326-45f7-99dd-fdd8f229374b | dd5a700a-a0bf-4e18-b6db-a59f4063f7b4 | e0a285d3e26f42d4b8945f0f21aaf107 |
...
+--------------------------------------+---------------------+------------------+--------------------------------------+--------------------------------------+----------------------------------+

$ curl 172.20.0.221
demo-1-lhbkn: HELLO! I AM ALIVE!!!
$ curl 172.20.0.221
demo-1-74h2k: HELLO! I AM ALIVE!!!

