Bug 1553070
| Summary: | [RFE] Support multi driver pools for hybrid deployments | | |
|---|---|---|---|
| Product: | OpenShift Container Platform | Reporter: | Luis Tomas Bolivar <ltomasbo> |
| Component: | Installer | Assignee: | Luis Tomas Bolivar <ltomasbo> |
| Status: | CLOSED ERRATA | QA Contact: | Jon Uriarte <juriarte> |
| Severity: | unspecified | Docs Contact: | |
| Priority: | unspecified | | |
| Version: | 3.10.0 | CC: | aos-bugs, jokerman, mmccomas |
| Target Milestone: | --- | | |
| Target Release: | 3.11.z | | |
| Hardware: | Unspecified | | |
| OS: | Unspecified | | |
| Whiteboard: | | | |
| Fixed In Version: | | Doc Type: | Release Note |
| Doc Text: | A new multi pool driver has been added to Kuryr-Kubernetes to support hybrid environments in which some nodes are bare metal while others run inside VMs and therefore need different pod VIF drivers (e.g., neutron and nested-vlan). To use this feature, the configuration mappings between the available pools and pod_vif drivers must be specified in the kuryr.conf configmap. In addition, each node must be annotated with the pod_vif driver it should use; otherwise the default one is applied. (A configuration sketch follows this table.) | Story Points: | --- |
| Clone Of: | | Environment: | |
| Last Closed: | 2019-01-10 09:03:57 UTC | Type: | Bug |
| Regression: | --- | Mount Type: | --- |
| Documentation: | --- | CRM: | |
| Verified Versions: | | Category: | --- |
| oVirt Team: | --- | RHEL 7.3 requirements from Atomic Host: | |
| Cloudforms Team: | --- | Target Upstream Version: | |
| Embargoed: | | | |
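The Doc Text above references a pool-to-driver mapping in the kuryr.conf configmap. A minimal sketch of what that fragment could look like is below; the option name pools_vif_drivers and the driver names are assumptions based on the upstream kuryr-kubernetes multi-pool driver of this era, not taken from this bug, so verify them against the deployed version:

# Hypothetical kuryr.conf excerpt for a hybrid (bare metal + VM) deployment
[vif_pool]
# Select the multi pool driver instead of a single per-deployment driver
vif_pool_driver = multi
# Assumed upstream option: map each pool to the pod VIF driver it serves,
# e.g. neutron ports for bare metal nodes, nested (trunk) ports for VM nodes
pools_vif_drivers = neutron:neutron-vif,nested:nested-vlan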
Description
Luis Tomas Bolivar
2018-03-08 08:43:37 UTC
The referenced PR has merged, moving ON_QA.

Need to know if this is a 3.10 blocker or not, if not please move the target release to 3.11.

(In reply to Scott Dodson from comment #2)
> Need to know if this is a 3.10 blocker or not, if not please move the target
> release to 3.11

This is not a blocker for 3.10, but also note the PR (and the related kuryr-kubernetes code) was already merged -- more than a month ago.

Per OCP program call on 21-SEP-2018 we are deferring Kuryr-related bugs to 3.11.z.

Verified in openshift-ansible-3.11.59-1.git.0.ba8e948.el7.noarch on top of OSP 13 2018-12-13.4 puddle.
Verification steps:
- Deploy OCP 3.11 on OSP 13 (with Ansible 2.5), without enabling the kuryr multi-pool driver support in OSEv3.yml (the relevant settings stay commented out):
## Kuryr label configuration
#kuryr_openstack_pool_driver: multi
#
#openshift_node_groups:
# - name: node-config-master
# labels:
# - 'node-role.kubernetes.io/master=true'
# - 'pod_vif=nested-vlan'
# edits: []
# - name: node-config-infra
# labels:
# - 'node-role.kubernetes.io/infra=true'
# - 'pod_vif=nested-vlan'
# edits: []
# - name: node-config-compute
# labels:
# - 'node-role.kubernetes.io/compute=true'
# - 'pod_vif=nested-vlan'
# edits: []
- Check the kuryr.conf configmap:
· vif_pool_driver = nested
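One hedged way to perform this check, assuming openshift-ansible places the configmap in the kuryr namespace under the name kuryr-config (both names are assumptions, adjust to your deployment):

[openshift@master-0 ~]$ oc -n kuryr get cm kuryr-config -o yaml | grep vif_pool_driver
# Expected with the multi-pool support disabled:
#   vif_pool_driver = nested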
- Check OpenShift nodes are not tagged with pod_vif=nested-vlan:
[openshift@master-0 ~]$ oc get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
app-node-0.openshift.example.com Ready compute 18h v1.11.0+d4cacc0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=app-node-0.openshift.example.com,node-role.kubernetes.io/compute=true
app-node-1.openshift.example.com Ready compute 18h v1.11.0+d4cacc0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=app-node-1.openshift.example.com,node-role.kubernetes.io/compute=true
infra-node-0.openshift.example.com Ready infra 18h v1.11.0+d4cacc0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=infra-node-0.openshift.example.com,node-role.kubernetes.io/infra=true
master-0.openshift.example.com Ready master 18h v1.11.0+d4cacc0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master-0.openshift.example.com,node-role.kubernetes.io/master=true
- Deploy OCP 3.11 on OSP 13 (with Ansible 2.5), enabling the kuryr multi-pool driver support in OSEv3.yml:
kuryr_openstack_pool_driver: multi
openshift_node_groups:
- name: node-config-master
labels:
- 'node-role.kubernetes.io/master=true'
- 'pod_vif=nested-vlan'
edits: []
- name: node-config-infra
labels:
- 'node-role.kubernetes.io/infra=true'
- 'pod_vif=nested-vlan'
edits: []
- name: node-config-compute
labels:
- 'node-role.kubernetes.io/compute=true'
- 'pod_vif=nested-vlan'
edits: []
- Check the kuryr.conf configmap:
· vif_pool_driver = multi
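The same assumed check for the multi-pool case (the kuryr namespace and kuryr-config configmap names are assumptions, as is the pools_vif_drivers option):

[openshift@master-0 ~]$ oc -n kuryr get cm kuryr-config -o yaml | grep -E 'vif_pool_driver|pools_vif_drivers'
# Expected with the multi-pool support enabled:
#   vif_pool_driver = multi
#   pools_vif_drivers = neutron:neutron-vif,nested:nested-vlan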
- Check OpenShift nodes are tagged with pod_vif=nested-vlan:
[openshift@master-0 ~]$ oc get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
app-node-0.openshift.example.com Ready compute 20h v1.11.0+d4cacc0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=app-node-0.openshift.example.com,node-role.kubernetes.io/compute=true,pod_vif=nested-vlan
app-node-1.openshift.example.com Ready compute 20h v1.11.0+d4cacc0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=app-node-1.openshift.example.com,node-role.kubernetes.io/compute=true,pod_vif=nested-vlan
infra-node-0.openshift.example.com Ready infra 20h v1.11.0+d4cacc0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=infra-node-0.openshift.example.com,node-role.kubernetes.io/infra=true,pod_vif=nested-vlan
master-0.openshift.example.com Ready master 20h v1.11.0+d4cacc0 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master-0.openshift.example.com,node-role.kubernetes.io/master=true,pod_vif=nested-vlan
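Beyond the labels applied through openshift_node_groups, the pod_vif label can also be set on an individual node by hand; a minimal sketch, using a node name from the output above and the pod_vif key shown there:

# Tag one VM-based node so the multi pool driver selects the nested-vlan VIF driver;
# unlabeled nodes fall back to the default pod VIF driver, per the Doc Text.
[openshift@master-0 ~]$ oc label node app-node-0.openshift.example.com pod_vif=nested-vlan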
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0024