Bug 1553070 - [RFE] Support multi driver pools for hybrid deployments
Summary: [RFE] Support multi driver pools for hybrid deployments
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.10.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: 3.11.z
Assignee: Luis Tomas Bolivar
QA Contact: Jon Uriarte
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-03-08 08:43 UTC by Luis Tomas Bolivar
Modified: 2019-01-10 09:04 UTC
CC List: 3 users

Fixed In Version:
Doc Type: Release Note
Doc Text:
A new multi pool driver has been added to Kuryr-Kubernetes to support hybrid environments where some nodes are Bare Metal while others are running inside VMs, therefore having different pod VIF drivers (e.g., neutron and nested-vlan). To make use of this new feature, the available configuration mappings for the different pools and pod_vif drivers need to be specified in the kuryr.conf configmap. In addition, the nodes need to be annotated with the right information about the pod_vif driver to be used; otherwise the default one is used.
Clone Of:
Environment:
Last Closed: 2019-01-10 09:03:57 UTC
Target Upstream Version:
Embargoed:




Links
System                  ID                                       Last Updated
Github                  openshift openshift-ansible pull 7441    2018-03-08 10:30:14 UTC
Launchpad               1747406                                  2018-03-08 08:48:02 UTC
OpenStack gerrit        528345                                   2018-03-08 08:47:35 UTC
Red Hat Product Errata  RHBA-2019:0024                           2019-01-10 09:04:04 UTC

Description Luis Tomas Bolivar 2018-03-08 08:43:37 UTC
A new multi pool driver has been added to Kuryr-Kubernetes to support hybrid environments where some nodes are Bare Metal while others are running inside VMs, therefore having different pod VIF drivers (e.g., neutron and nested-vlan).

To make use of this new feature, the available configuration mappings for the different pools and pod_vif drivers need to be specified in the kuryr.conf configmap. In addition, as this patch depends on the pod_vif driver type being specified as a label on the nodes, we need to ensure they are tagged with the right labels.
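
For illustration, such a hybrid setup would map each pool driver to the pod VIF driver it serves in the kuryr.conf carried by the configmap. The snippet below is only a sketch: the pools_vif_drivers option name and the driver names are assumptions based on the upstream kuryr-kubernetes ports-pool support and may differ between releases.

      [kubernetes]
      # dispatch to a per-pod_vif pool driver instead of a single one
      vif_pool_driver = multi

      [vif_pool]
      # assumed mapping of pool driver -> pod VIF driver: VM (nested) nodes
      # get nested-vlan VIFs, Bare Metal nodes get plain neutron ports
      pools_vif_drivers = nested:nested-vlan,neutron:neutron-vif

Each node then advertises which pod_vif driver applies to it via a label (pod_vif=nested-vlan in the verification steps below); nodes without the label fall back to the default driver.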

Comment 1 Scott Dodson 2018-04-16 14:01:56 UTC
The referenced PR has merged, moving ON_QA.

Comment 2 Scott Dodson 2018-04-16 14:02:20 UTC
Need to know if this is a 3.10 blocker or not; if not, please move the target release to 3.11.

Comment 3 Luis Tomas Bolivar 2018-04-17 07:20:14 UTC
(In reply to Scott Dodson from comment #2)
> Need to know if this is a 3.10 blocker or not, if not please move the target
> release to 3.11

This is not a blocker for 3.10, but note that the PR (and the related kuryr-kubernetes code) was already merged more than a month ago.

Comment 4 N. Harrison Ripps 2018-09-21 20:16:50 UTC
Per the OCP program call on 21-SEP-2018, we are deferring Kuryr-related bugs to 3.11.z.

Comment 5 Jon Uriarte 2018-12-20 14:33:11 UTC
Verified in openshift-ansible-3.11.59-1.git.0.ba8e948.el7.noarch on top of
OSP 13 2018-12-13.4 puddle.

Verification steps:
- Deploy OCP 3.11 on OSP 13 (with Ansible 2.5) without enabling the kuryr multi-pool driver support in OSEv3.yml:

      ## Kuryr label configuration
      #kuryr_openstack_pool_driver: multi
      #
      #openshift_node_groups:
      #  - name: node-config-master
      #    labels:
      #      - 'node-role.kubernetes.io/master=true'
      #      - 'pod_vif=nested-vlan'
      #    edits: []
      #  - name: node-config-infra
      #    labels:
      #      - 'node-role.kubernetes.io/infra=true'
      #      - 'pod_vif=nested-vlan'
      #    edits: []
      #  - name: node-config-compute
      #    labels:
      #      - 'node-role.kubernetes.io/compute=true'
      #      - 'pod_vif=nested-vlan'
      #    edits: []

- Check the kuryr.conf configmap (example check commands are sketched after the verification output below):
    · vif_pool_driver = nested

- Check that the OpenShift nodes are not tagged with pod_vif=nested-vlan:

   [openshift@master-0 ~]$ ocs get nodes --show-labels
   NAME                                 STATUS    ROLES     AGE       VERSION           LABELS
   app-node-0.openshift.example.com     Ready     compute   18h       v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=app-node-0.openshift.example.com,node-role.kubernetes.io/compute=true
   app-node-1.openshift.example.com     Ready     compute   18h       v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=app-node-1.openshift.example.com,node-role.kubernetes.io/compute=true
   infra-node-0.openshift.example.com   Ready     infra     18h       v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=infra-node-0.openshift.example.com,node-role.kubernetes.io/infra=true
   master-0.openshift.example.com       Ready     master    18h       v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master-0.openshift.example.com,node-role.kubernetes.io/master=true

- Deploy OCP 3.11 on OSP 13 (with Ansible 2.5), this time enabling the kuryr multi-pool driver support in OSEv3.yml:

      kuryr_openstack_pool_driver: multi

      openshift_node_groups:
        - name: node-config-master
          labels:
            - 'node-role.kubernetes.io/master=true'
            - 'pod_vif=nested-vlan'
          edits: []
        - name: node-config-infra
          labels:
            - 'node-role.kubernetes.io/infra=true'
            - 'pod_vif=nested-vlan'
          edits: []
        - name: node-config-compute
          labels:
            - 'node-role.kubernetes.io/compute=true'
            - 'pod_vif=nested-vlan'
          edits: []

- Check the kuryr.conf configmap:
    · vif_pool_driver = multi

- Check that the OpenShift nodes are tagged with pod_vif=nested-vlan:

   [openshift@master-0 ~]$ ocs get nodes --show-labels
   NAME                                 STATUS    ROLES     AGE       VERSION           LABELS
   app-node-0.openshift.example.com     Ready     compute   20h       v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=app-node-0.openshift.example.com,node-role.kubernetes.io/compute=true,pod_vif=nested-vlan
   app-node-1.openshift.example.com     Ready     compute   20h       v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=app-node-1.openshift.example.com,node-role.kubernetes.io/compute=true,pod_vif=nested-vlan
   infra-node-0.openshift.example.com   Ready     infra     20h       v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=infra-node-0.openshift.example.com,node-role.kubernetes.io/infra=true,pod_vif=nested-vlan
   master-0.openshift.example.com       Ready     master    20h       v1.11.0+d4cacc0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master-0.openshift.example.com,node-role.kubernetes.io/master=true,pod_vif=nested-vlan
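
The configmap and label checks in the steps above can be reproduced with commands along these lines (sketch only; the kuryr namespace and kuryr-config configmap names are assumptions and may differ in the actual deployment):

   # dump the rendered kuryr.conf and look for the selected pool driver
   oc -n kuryr get configmap kuryr-config -o yaml | grep vif_pool_driver

   # confirm which nodes carry the pod_vif label
   oc get nodes --show-labels | grep pod_vif

   # a node missing the label could be tagged manually, for example:
   oc label node app-node-0.openshift.example.com pod_vif=nested-vlan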

Comment 7 errata-xmlrpc 2019-01-10 09:03:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0024

