Bug 1631792 - OpenShift ansible installer doesn't work with Calico network plugin - OpenShift 3.10
Summary: OpenShift ansible installer doesn't work with Calico network plugin - OpenShift 3.10
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.10.0
Hardware: All
OS: Linux
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.10.z
Assignee: Scott Dodson
QA Contact: Meng Bo
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-09-21 15:04 UTC by mikolaj.ciecierski
Modified: 2019-01-10 09:27 UTC
CC: 4 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
The calico network plugin playbooks have been updated to work properly in 3.10 environments.
Clone Of:
Environment:
Last Closed: 2019-01-10 09:27:09 UTC
Target Upstream Version:
Embargoed:




Links
GitHub: https://github.com/openshift/openshift-ansible/pull/9435 (last updated 2020-08-10 04:29:45 UTC)
Red Hat Product Errata: RHBA-2019:0026 (last updated 2019-01-10 09:27:16 UTC)

Description mikolaj.ciecierski 2018-09-21 15:04:55 UTC
Description of problem:
OpenShift ansible installer doesn't work with Calico network plugin in release 3.10.

Version-Release number of the following components:
rpm -q openshift-ansible: openshift-ansible-3.10.47-1.git.0.95bc2d2.el7_5.noarch
rpm -q ansible: ansible-2.4.3.0-1.el7ae.noarch
ansible --version: ansible 2.4.3.0

How reproducible:

Steps to Reproduce:
1. Set openshift_common_release: "3.10" to deploy OpenShift version 3.10.
2. Configure Calico as the OpenShift overlay network (see the inventory sketch after these steps).
3. Run the playbook responsible for deploying OpenShift.
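
For reference, a minimal sketch of the Calico-related inventory settings, following the openshift-ansible calico role documentation (values here are illustrative; openshift_common_release above comes from the reporter's tooling, while stock openshift-ansible uses openshift_release):

[OSEv3:vars]
# Deploy the 3.10 release and hand pod networking to Calico via CNI
openshift_release="3.10"
openshift_use_openshift_sdn=false
os_sdn_network_plugin_name=cni
openshift_use_calico=true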


Actual results:
Ansible fails on the task "Wait for node to be ready".
Nodes don't come up, reporting the error "NetworkPlugin cni failed to set up".
calico-node pods don't spawn because the nodes are missing the projectcalico.org/ds-ready=true label required by the DaemonSet's nodeSelector (see the label check below).
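
To confirm this failure mode, the label can be checked directly; as a manual stopgap it can also be applied by hand (a sketch; <node-name> is a placeholder, and hand-labeling only works around the installer bug rather than fixing it):

# Check which nodes carry the label the calico-node DaemonSet selects on
oc get nodes --show-labels | grep projectcalico.org/ds-ready
# Apply the label manually to an affected node
oc label node <node-name> projectcalico.org/ds-ready=true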


Expected results:
All nodes should be running calico-node pods.
Nodes should be in Ready status.
Installation should proceed to the next steps.

Additional info:

Comment 2 Scott Dodson 2018-10-12 18:11:46 UTC
The correct set of patches is:

https://github.com/openshift/openshift-ansible/pull/9621
https://github.com/openshift/openshift-ansible/pull/9868
https://github.com/openshift/openshift-ansible/pull/10198

All of these changes are merged into openshift-ansible-3.10.51-1, which is going through QE now.
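
For anyone verifying a local environment, the installed installer package can be checked against that build (the exact git hash and dist tag will vary):

rpm -q openshift-ansible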

Comment 3 Meng Bo 2018-10-16 08:08:28 UTC
Tested with openshift-ansible-3.10.57-1.git.0.787bf7c.el7.noarch

A cluster with the calico network plugin can be set up without issues.

# oc get po -n kube-system -o wide 
NAME                                              READY     STATUS      RESTARTS   AGE       IP              NODE
calico-kube-controllers-868f5c896d-v7t9l          1/1       Running     0          2h        172.18.4.13     ip-172-18-4-13.ec2.internal
calico-node-csgtd                                 2/2       Running     0          2h        172.18.4.13     ip-172-18-4-13.ec2.internal
calico-node-s44sq                                 2/2       Running     0          2h        172.18.15.188   ip-172-18-15-188.ec2.internal
calico-node-tmp5m                                 2/2       Running     0          2h        172.18.10.41    ip-172-18-10-41.ec2.internal

# oc get po -n kube-proxy-and-dns -o wide 
NAME                  READY     STATUS    RESTARTS   AGE       IP              NODE
proxy-and-dns-vzwlg   1/1       Running   0          2h        172.18.4.13     ip-172-18-4-13.ec2.internal
proxy-and-dns-x7ds2   1/1       Running   0          2h        172.18.10.41    ip-172-18-10-41.ec2.internal
proxy-and-dns-xvndl   1/1       Running   0          2h        172.18.15.188   ip-172-18-15-188.ec2.internal

# oc get node --show-labels 
NAME                            STATUS    ROLES     AGE       VERSION           LABELS
ip-172-18-10-41.ec2.internal    Ready     master    2h        v1.10.0+b81c8f8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.large,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-172-18-10-41.ec2.internal,node-role.kubernetes.io/master=true,projectcalico.org/ds-ready=true
ip-172-18-15-188.ec2.internal   Ready     <none>    2h        v1.10.0+b81c8f8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.large,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-172-18-15-188.ec2.internal,projectcalico.org/ds-ready=true,registry=enabled,role=node,router=enabled
ip-172-18-4-13.ec2.internal     Ready     compute   2h        v1.10.0+b81c8f8   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=m3.large,beta.kubernetes.io/os=linux,kubernetes.io/hostname=ip-172-18-4-13.ec2.internal,node-role.kubernetes.io/compute=true,projectcalico.org/ds-ready=true,role=node

Comment 5 errata-xmlrpc 2019-01-10 09:27:09 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0026

