Description of problem:
Set up a multi-node env with a containerized installation. Create some pods on it, restart the node service on the node, then create some more pods and check the pod IP addresses. Different pods may get the same IP.

Version-Release number of selected component (if applicable):
openshift v3.5.0.35
kubernetes v1.5.2+43a9be4
etcd 3.1.0

How reproducible:
always

Steps to Reproduce:
1. Set up a multi-node env with a containerized installation
2. Create some pods on the node
   $ oc create -f https://raw.githubusercontent.com/openshift-qe/v3-testfiles/master/networking/list_for_pods.json
3. Restart the node service on the node
4. Create some more pods
   $ oc scale rc test-rc --replicas=4
5. Check the pod IPs.

Actual results:
The new pods may get a duplicated IP.

# oc get po -o wide
NAME            READY     STATUS    RESTARTS   AGE   IP           NODE
hello-pod       1/1       Running   0          3m    10.129.0.2   host-8-174-57.host.centralci.eng.rdu2.redhat.com
test-rc-291zd   1/1       Running   0          2m    10.129.0.3   host-8-174-57.host.centralci.eng.rdu2.redhat.com
test-rc-32p8d   1/1       Running   0          24s   10.129.0.2   host-8-174-57.host.centralci.eng.rdu2.redhat.com
test-rc-8349g   1/1       Running   0          24s   10.129.0.3   host-8-174-57.host.centralci.eng.rdu2.redhat.com
test-rc-xhkl8   1/1       Running   0          2m    10.129.0.4   host-8-174-57.host.centralci.eng.rdu2.redhat.com

Expected results:
The IP should be unique for each pod.

Additional info:
Cause: Restarting the node service destroys the atomic-openshift-node container and re-creates a new one, but the IPAM info is not recovered once the new container is created.

# systemctl restart atomic-openshift-node
# docker exec atomic-openshift-node ls /var/lib/cni/networks/openshift-sdn/
ls: cannot access /var/lib/cni/networks/openshift-sdn/: No such file or directory

So this does not happen on an RPM installation env.
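For context: the duplicate IPs come from losing the SDN's host-local IPAM state along with the container. On a healthy node the allocations are tracked as one file per reserved IP under /var/lib/cni/networks/openshift-sdn/, each recording which container owns that address. The listing below is only illustrative (exact file names and bookkeeping files depend on the CNI host-local plugin version):

# docker exec atomic-openshift-node ls /var/lib/cni/networks/openshift-sdn/
10.129.0.2  10.129.0.3  10.129.0.4  last_reserved_ip

Once that directory disappears with the container, the plugin starts handing out addresses from the beginning of the range again, which is why new pods collide with existing ones.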
The data directory must be persistent, so we'll need to map that directory into the container so it sticks around across node process restarts. Similar to how docker handles IPAM, CNI keeps an on-disk database of IPAM allocations.
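A quick illustration of the idea: anything written under a host path that is bind-mounted with -v survives the container being destroyed and recreated. The image and container names below are placeholders, not the real node unit:

# docker run -d --name node-test -v /var/lib/cni:/var/lib/cni registry.example.com/node:test
# docker rm -f node-test
# docker run -d --name node-test -v /var/lib/cni:/var/lib/cni registry.example.com/node:test
# docker exec node-test ls /var/lib/cni/networks/openshift-sdn/
10.129.0.2  10.129.0.3      <-- earlier allocations are still there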
What directory? I'm assuming this will be as easy as another -v volume option in the definitions of how we launch the node container?
(In reply to Eric Paris from comment #2)
> What directory? I'm assuming this will be as easy as another -v volume
> option in the definitions of how we launch the node container?

/var/lib/cni/ should probably get persisted.

Does that only need to happen in ansible, like in roles/openshift_node/templates/openshift.docker.node.service? eg, is this just an ansible issue, or does something in origin itself need updating?
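A rough sketch of what that would look like in the systemd template, only the added -v is the point; the surrounding ExecStart line is heavily abbreviated and the variable names are approximations, not the real template:

ExecStart=/usr/bin/docker run --name {{ openshift.common.service_type }}-node \
  ... \
  -v /var/lib/cni:/var/lib/cni \
  ...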
sdodson can either tell us everywhere we need the -v or he can tell us who knows...
openshift-ansible's roles/openshift_node/templates/openshift.docker.node.service is what's actually used by the installer.

origin's /contrib/systemd/containerized/ has some reference systemd units too.

Giuseppe can explain how this needs to be done for system containers, which we'll be switching to in the future.
so the issue is that /var/lib/cni is recreated each time the node container restarts?

I think this should work with system containers, as there is already a binding from the host `/var/lib/cni` so that it is persisted across restarts of the node container:

https://github.com/openshift/origin/blob/master/images/node/system-container/config.json.template#L238

With system containers we enforce that the image is read-only, which helps ensure no state is left in the container itself.
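For reference, the binding in that template is an ordinary OCI runtime mount entry along these lines (paraphrased from the linked config.json.template; exact options may differ):

{
    "type": "bind",
    "source": "/var/lib/cni",
    "destination": "/var/lib/cni",
    "options": ["rbind", "rw"]
}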
(In reply to Giuseppe Scrivano from comment #6)
> so the issue is that /var/lib/cni is recreated each time the node container
> restarts?
>
> I think this should work with system containers, as there is already a
> binding from the host `/var/lib/cni` so that it is persisted across restarts
> of the node container:
>
> https://github.com/openshift/origin/blob/master/images/node/system-container/config.json.template#L238
>
> With system containers we enforce the image to be read only, that helps to
> ensure no state is left into the container itself.

Ok, so that looks like it would do the right thing. But are our customers using that right now when they set up a "containerized env" or is that happening via the ansible installer, or somehow else?
(In reply to Dan Williams from comment #7)
> (In reply to Giuseppe Scrivano from comment #6)
> > so the issue is that /var/lib/cni is recreated each time the node container
> > restarts?
> >
> > I think this should work with system containers, as there is already a
> > binding from the host `/var/lib/cni` so that it is persisted across restarts
> > of the node container:
> >
> > https://github.com/openshift/origin/blob/master/images/node/system-container/config.json.template#L238
> >
> > With system containers we enforce the image to be read only, that helps to
> > ensure no state is left into the container itself.
>
> Ok, so that looks like it would do the right thing. But are our customers
> using that right now when they set up a "containerized env" or is that
> happening via the ansible installer, or somehow else?

No, that's future state. I just wanted to make sure we accounted for that so we don't regress as soon as we move to system containers.
Origin: https://github.com/openshift/origin/pull/13231
Ansible: https://github.com/openshift/openshift-ansible/pull/3556

Tested with containerized node in a docker-in-docker instance. /var/lib/cni is preserved across "docker restart origin/node" invocations.
Since the ansible code merged for 3.5/master, marking as MODIFIED.
Tested with OCP build 3.5.0.39 and openshift-ansible-3.5.23-1; the issue has been fixed. The /var/lib/cni directory is persistent when the node container restarts.
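For anyone re-verifying, a minimal check along the lines of the original reproducer (the output below is illustrative, not from the verification run):

# docker exec atomic-openshift-node ls /var/lib/cni/networks/openshift-sdn/
10.129.0.2  10.129.0.3
# systemctl restart atomic-openshift-node
# docker exec atomic-openshift-node ls /var/lib/cni/networks/openshift-sdn/
10.129.0.2  10.129.0.3

The directory and the existing reservations should survive the restart, and newly scaled pods should pick up fresh, non-conflicting addresses.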
*** Bug 1429029 has been marked as a duplicate of this bug. ***
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2017:0903