Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1429029

Summary: [3.5] Pod may get the duplicate IP if it is created after the node service restarted on containerized env
Product: OpenShift Container Platform
Reporter: Eric Paris <eparis>
Component: Networking
Assignee: Ben Bennett <bbennett>
Status: CLOSED DUPLICATE
QA Contact: Meng Bo <bmeng>
Severity: high
Docs Contact:
Priority: high
Version: 3.5.0
CC: aos-bugs, bbennett, bmeng, dcbw, eparis, gscrivan, sdodson, wmeng
Target Milestone: ---
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of: 1427789
Environment:
Last Closed: 2017-03-03 21:58:33 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Bug Depends On: 1427789, 1429030
Bug Blocks:

Description Eric Paris 2017-03-03 21:53:10 UTC
+++ This bug was initially created as a clone of Bug #1427789 +++

Description of problem:
Set up a multi-node env with a containerized installation. Create some pods, then restart the node service on a node. Create some more pods and check their IP addresses: different pods may end up with the same IP.

Version-Release number of selected component (if applicable):
openshift v3.5.0.35
kubernetes v1.5.2+43a9be4
etcd 3.1.0


How reproducible:
always

Steps to Reproduce:
1. Set up a multi-node env with a containerized installation
2. Create some pods on the node
$ oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/networking/list_for_pods.json
3. Restart the node service on the node
4. Create some more pods
$ oc scale rc test-rc --replicas=4
5. Check the pod IPs (a quick duplicate check is sketched right after this list).
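(Sketch for step 5: assuming the IP is the 6th column of the wide output shown under "Actual results" below, this prints any address assigned to more than one pod.)
$ oc get po -o wide --no-headers | awk '{print $6}' | sort | uniq -d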

Actual results:
The new pods may get a duplicate IP.
# oc get po -o wide
NAME            READY     STATUS    RESTARTS   AGE       IP           NODE
hello-pod       1/1       Running   0          3m        10.129.0.2   host-8-174-57.host.centralci.eng.rdu2.redhat.com
test-rc-291zd   1/1       Running   0          2m        10.129.0.3   host-8-174-57.host.centralci.eng.rdu2.redhat.com
test-rc-32p8d   1/1       Running   0          24s       10.129.0.2   host-8-174-57.host.centralci.eng.rdu2.redhat.com
test-rc-8349g   1/1       Running   0          24s       10.129.0.3   host-8-174-57.host.centralci.eng.rdu2.redhat.com
test-rc-xhkl8   1/1       Running   0          2m        10.129.0.4   host-8-174-57.host.centralci.eng.rdu2.redhat.com


Expected results:
The IP should be unique for each pod.

Additional info:
Cause:
Restarting the node service destroys the atomic-openshift-node container and creates a new one, but the IPAM info is not recovered once the new container is created.

# systemctl restart atomic-openshift-node
# docker exec atomic-openshift-node ls /var/lib/cni/networks/openshift-sdn/
ls: cannot access /var/lib/cni/networks/openshift-sdn/: No such file or directory

So this will not happen on an RPM installation env.
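(For reference, a sketch of what gets lost: the host-local IPAM plugin keeps one file per allocated IP, named after the address, under that directory. Illustrative listing, not captured from this environment:)
# docker exec atomic-openshift-node ls /var/lib/cni/networks/openshift-sdn/
10.129.0.2  10.129.0.3  10.129.0.4
Once the directory is recreated empty, the allocator starts handing those addresses out again.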

--- Additional comment from Dan Williams on 2017-03-02 16:57:42 EST ---

The data directory must be persistent, so we'll need to map that directory into the container so it sticks around across node process restarts.  This is similar to how docker handles IPAM; it also keeps an on-disk database of IPAM allocations.

--- Additional comment from Eric Paris on 2017-03-02 21:24:16 EST ---

What directory? I'm assuming this will be as easy as another -v volume option in the definitions of how we launch the node container?

--- Additional comment from Dan Williams on 2017-03-02 22:15:45 EST ---

(In reply to Eric Paris from comment #2)
> What directory? I'm assuming this will be as easy as another -v volume
> option in the definitions of how we launch the node container?

/var/lib/cni/ should probably get persisted.  Does that only need to happen in ansible, like in roles/openshift_node/templates/openshift.docker.node.service?

E.g., is this just an ansible issue, or does something in origin itself need updating?
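(For illustration only, the change being discussed amounts to one extra host bind mount on the node container's docker run line in that template, along the lines of:
    -v /var/lib/cni:/var/lib/cni
so that the host-local IPAM state written under /var/lib/cni/networks/ survives "systemctl restart atomic-openshift-node". The actual diffs are in the PRs linked at the end of this bug; this is only a sketch.)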

--- Additional comment from Eric Paris on 2017-03-03 09:41:57 EST ---

sdodson can either tell us everywhere we need the -v or he can tell us who knows...

--- Additional comment from Scott Dodson on 2017-03-03 09:56:52 EST ---

openshift-ansible's roles/openshift_node/templates/openshift.docker.node.service is what's actually used by the installer

origin's /contrib/systemd/containerized/ has some reference systemd units too


Giuseppe can explain how this needs to be done for system containers, which we'll be switching to in the future.

--- Additional comment from Giuseppe Scrivano on 2017-03-03 10:09:50 EST ---

so the issue is that /var/lib/cni is recreated each time the node container restarts?

I think this should work with system containers, as `/var/lib/cni` is already bind-mounted from the host so that it is persisted across restarts of the node container:

https://github.com/openshift/origin/blob/master/images/node/system-container/config.json.template#L238

With system containers we enforce that the image is read-only, which helps ensure no state is left in the container itself.
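(For reference, the binding referenced above is an OCI mounts entry roughly along these lines; this is a sketch, and the exact fields and options are in the linked config.json.template:)
    {
        "type": "bind",
        "source": "/var/lib/cni",
        "destination": "/var/lib/cni",
        "options": ["rbind", "rw"]
    }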

--- Additional comment from Dan Williams on 2017-03-03 10:33:38 EST ---

(In reply to Giuseppe Scrivano from comment #6)
> so the issue is that /var/lib/cni is recreated each time the node container
> restarts?
> 
> I think this should work with system containers, as there is already a
> binding from the host `/var/lib/cni` so that it is persisted across restarts
> of the node container:
> 
> https://github.com/openshift/origin/blob/master/images/node/system-container/
> config.json.template#L238
> 
> With system containers we enforce the image to be read only, that helps to
> ensure no state is left into the container itself.

Ok, so that looks like it would do the right thing.  But are our customers using that right now when they set up a "containerized env" or is that happening via the ansible installer, or somehow else?

--- Additional comment from Scott Dodson on 2017-03-03 10:37:53 EST ---

(In reply to Dan Williams from comment #7)
> (In reply to Giuseppe Scrivano from comment #6)
> > so the issue is that /var/lib/cni is recreated each time the node container
> > restarts?
> > 
> > I think this should work with system containers, as there is already a
> > binding from the host `/var/lib/cni` so that it is persisted across restarts
> > of the node container:
> > 
> > https://github.com/openshift/origin/blob/master/images/node/system-container/
> > config.json.template#L238
> > 
> > With system containers we enforce the image to be read only, that helps to
> > ensure no state is left into the container itself.
> 
> Ok, so that looks like it would do the right thing.  But are our customers
> using that right now when they set up a "containerized env" or is that
> happening via the ansible installer, or somehow else?

No, that's future state. I just wanted to make sure we accounted for that so we don't regress as soon as we move to system containers.

--- Additional comment from Dan Williams on 2017-03-03 15:56:16 EST ---

Origin: https://github.com/openshift/origin/pull/13231
Ansible: https://github.com/openshift/openshift-ansible/pull/3556

Tested with a containerized node in a docker-in-docker instance.  /var/lib/cni is preserved across "docker restart origin/node" invocations.
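(Roughly, that check amounts to the following sketch; the node container name is a placeholder and /var/lib/cni is listed on the host, assuming it is bind-mounted there as in the PRs above:)
# ls /var/lib/cni/networks/openshift-sdn/     <- per-IP allocation files present
# docker restart <node-container>
# ls /var/lib/cni/networks/openshift-sdn/     <- same allocation files still present
# oc get po -o wide                           <- newly created pods get IPs that are not already in use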

Comment 1 Scott Dodson 2017-03-08 14:16:00 UTC

*** This bug has been marked as a duplicate of bug 1427789 ***