Bug 1427789 - [3.5] Pod may get the duplicate IP if it is created after the node service restarted on containerized env
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.5.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Assignee: Dan Williams
QA Contact: Meng Bo
URL:
Whiteboard:
Duplicates: 1429029
Depends On:
Blocks: 1429029 1429030
 
Reported: 2017-03-01 08:47 UTC by Meng Bo
Modified: 2017-07-24 14:11 UTC
CC List: 8 users

Fixed In Version:
Doc Type: Bug Fix
Doc Text:
In containerized environments, the CNI data directory at /var/lib/cni was not configured to persist on the node host, so pod IP allocation data was lost whenever the node container was recreated. The installer has been updated to ensure that pod IP allocation data is persisted when restarting containerized nodes.
Clone Of:
Clones: 1429029 1429030
Environment:
Last Closed: 2017-04-12 19:02:51 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2017:0903 (SHIPPED_LIVE): OpenShift Container Platform atomic-openshift-utils bug fix and enhancement - 2017-04-12 22:45:42 UTC

Description Meng Bo 2017-03-01 08:47:13 UTC
Description of problem:
Set up a multi-node environment with a containerized installation. Create some pods, then restart the node service on a node. Create some more pods and check their IP addresses; different pods may get the same IP.

Version-Release number of selected component (if applicable):
openshift v3.5.0.35
kubernetes v1.5.2+43a9be4
etcd 3.1.0


How reproducible:
always

Steps to Reproduce:
1. Set up a multi-node env with a containerized installation
2. Create some pods on the node
$ oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/networking/list_for_pods.json
3. Restart the node service on the node
4. Create some more pods
$ oc scale rc test-rc --replicas=4
5. Check the pod IPs.

Actual results:
The new pods may get duplicate IPs.
# oc get po -o wide
NAME            READY     STATUS    RESTARTS   AGE       IP           NODE
hello-pod       1/1       Running   0          3m        10.129.0.2   host-8-174-57.host.centralci.eng.rdu2.redhat.com
test-rc-291zd   1/1       Running   0          2m        10.129.0.3   host-8-174-57.host.centralci.eng.rdu2.redhat.com
test-rc-32p8d   1/1       Running   0          24s       10.129.0.2   host-8-174-57.host.centralci.eng.rdu2.redhat.com
test-rc-8349g   1/1       Running   0          24s       10.129.0.3   host-8-174-57.host.centralci.eng.rdu2.redhat.com
test-rc-xhkl8   1/1       Running   0          2m        10.129.0.4   host-8-174-57.host.centralci.eng.rdu2.redhat.com


Expected results:
The IP should be unique for each pod.

Additional info:
Cause:
Restarting the node service destroys the atomic-openshift-node container and creates a new one, but the IPAM data is not recovered once the new container is created.

# systemctl restart atomic-openshift-node
# docker exec atomic-openshift-node ls /var/lib/cni/networks/openshift-sdn/
ls: cannot access /var/lib/cni/networks/openshift-sdn/: No such file or directory

So this does not happen in an RPM installation environment.
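
For context, the IPAM info in question is the CNI host-local allocation store under /var/lib/cni/networks/openshift-sdn/. On a healthy node it looks roughly like the following (file names and contents are illustrative; each allocated IP gets a file holding the ID of the container that owns it):

# docker exec atomic-openshift-node ls /var/lib/cni/networks/openshift-sdn/
10.129.0.2  10.129.0.3  10.129.0.4  last_reserved_ip
# docker exec atomic-openshift-node cat /var/lib/cni/networks/openshift-sdn/10.129.0.2
<ID of the container holding 10.129.0.2>

Because this directory lives only inside the atomic-openshift-node container, recreating the container wipes these records and the allocator starts handing out IPs from the beginning again.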

Comment 1 Dan Williams 2017-03-02 21:57:42 UTC
The data directory must be persistent, so we'll need to map that directory into the container so it sticks around across node process restarts. This is similar to how docker handles IPAM; it also keeps an on-disk database of IPAM allocations.
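
A minimal sketch of that idea, assuming the node container is started with docker run (the real flags belong in the systemd unit template, not a hand-typed command):

# hypothetical invocation; only the -v option is the point here
docker run --name atomic-openshift-node ... \
    -v /var/lib/cni:/var/lib/cni:rw \
    ... <node image>

With that bind mount in place, /var/lib/cni lives on the host and survives the container being destroyed and recreated, so the IPAM allocations are still there when the new node container starts.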

Comment 2 Eric Paris 2017-03-03 02:24:16 UTC
What directory? I'm assuming this will be as easy as another -v volume option in the definitions of how we launch the node container?

Comment 3 Dan Williams 2017-03-03 03:15:45 UTC
(In reply to Eric Paris from comment #2)
> What directory? I'm assuming this will be as easy as another -v volume
> option in the definitions of how we launch the node container?

/var/lib/cni/ should probably get persisted.  Does that only need to happen in ansible, like in roles/openshift_node/templates/openshift.docker.node.service ?

eg, is this just an ansible issue, or does something in origin itself need updating?

Comment 4 Eric Paris 2017-03-03 14:41:57 UTC
sdodson can either tell us everywhere we need the -v or he can tell us who knows...

Comment 5 Scott Dodson 2017-03-03 14:56:52 UTC
openshift-ansible's roles/openshift_node/templates/openshift.docker.node.service is what's actually used by the installer

origin's /contrib/systemd/containerized/ has some reference systemd units too


Giuseppe can explain how this needs to be done for system containers, which we'll be switching to in the future.
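
Concretely, for the docker-based node the change would be an extra volume on the docker run line in roles/openshift_node/templates/openshift.docker.node.service, along these lines (a sketch, not the exact diff that eventually merged):

ExecStart=/usr/bin/docker run --name atomic-openshift-node ... \
    -v /var/lib/cni:/var/lib/cni:rw \
    ...

The reference units under origin's /contrib/systemd/containerized/ would need the same volume added.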

Comment 6 Giuseppe Scrivano 2017-03-03 15:09:50 UTC
so the issue is that /var/lib/cni is recreated each time the node container restarts?

I think this should work with system containers, as there is already a binding from the host `/var/lib/cni` so that it is persisted across restarts of the node container:

https://github.com/openshift/origin/blob/master/images/node/system-container/config.json.template#L238

With system containers we enforce a read-only image, which helps ensure no state is left in the container itself.
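
For reference, the bind mount in that template is an OCI runtime config entry of roughly this shape (values illustrative; the linked config.json.template is authoritative):

{
    "type": "bind",
    "source": "/var/lib/cni",
    "destination": "/var/lib/cni",
    "options": ["rbind", "rw"]
}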

Comment 7 Dan Williams 2017-03-03 15:33:38 UTC
(In reply to Giuseppe Scrivano from comment #6)
> so the issue is that /var/lib/cni is recreated each time the node container
> restarts?
> 
> I think this should work with system containers, as there is already a
> binding from the host `/var/lib/cni` so that it is persisted across restarts
> of the node container:
> 
> https://github.com/openshift/origin/blob/master/images/node/system-container/
> config.json.template#L238
> 
> With system containers we enforce the image to be read only, that helps to
> ensure no state is left into the container itself.

OK, so that looks like it would do the right thing. But are our customers using that right now when they set up a "containerized env", or is that happening via the ansible installer, or in some other way?

Comment 8 Scott Dodson 2017-03-03 15:37:53 UTC
(In reply to Dan Williams from comment #7)
> (In reply to Giuseppe Scrivano from comment #6)
> > so the issue is that /var/lib/cni is recreated each time the node container
> > restarts?
> > 
> > I think this should work with system containers, as there is already a
> > binding from the host `/var/lib/cni` so that it is persisted across restarts
> > of the node container:
> > 
> > https://github.com/openshift/origin/blob/master/images/node/system-container/
> > config.json.template#L238
> > 
> > With system containers we enforce the image to be read only, that helps to
> > ensure no state is left into the container itself.
> 
> Ok, so that looks like it would do the right thing.  But are our customers
> using that right now when they set up a "containerized env" or is that
> happening via the ansible installer, or somehow else?

No, that's future state. I just wanted to make sure we accounted for that so we don't regress as soon as we move to system containers.

Comment 9 Dan Williams 2017-03-03 20:56:16 UTC
Origin: https://github.com/openshift/origin/pull/13231
Ansible: https://github.com/openshift/openshift-ansible/pull/3556

Tested with containerized node in a docker-in-docker instance.  /var/lib/cni is preserved across "docker restart origin/node" invocations.

Comment 10 Eric Paris 2017-03-03 22:03:01 UTC
Since the ansible code merged for 3.5/master, marking as MODIFIED.

Comment 12 Meng Bo 2017-03-06 08:58:16 UTC
Tested with OCP build 3.5.0.39 and openshift-ansible-3.5.23-1; the issue has been fixed.
The /var/lib/cni directory persists when the node container restarts.
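
With the fix in place, repeating the check from the original report should show the allocation store surviving the restart, e.g. (output illustrative):

# systemctl restart atomic-openshift-node
# docker exec atomic-openshift-node ls /var/lib/cni/networks/openshift-sdn/
10.129.0.2  10.129.0.3  10.129.0.4  last_reserved_ip
# ls /var/lib/cni/networks/openshift-sdn/
10.129.0.2  10.129.0.3  10.129.0.4  last_reserved_ip

The second listing runs on the host; both views show the same files because the directory is now bind-mounted from the host.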

Comment 13 Scott Dodson 2017-03-08 14:16:00 UTC
*** Bug 1429029 has been marked as a duplicate of this bug. ***

Comment 15 errata-xmlrpc 2017-04-12 19:02:51 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0903

