Description of problem:

When the VMI is connected to the pod network via the `bridge` binding with IP delegation, the CNI status checks fail, and as a consequence the kubelet stops all containers of that pod when the kubelet is restarted. I verified this on k8s 1.10.4 and k8s 1.13.3.

I *think* the hint in the logs is this:

```
Mar 04 09:28:43 node01 kubelet[29530]: W0304 09:28:43.167534 29530 docker_sandbox.go:384] failed to read pod IP from plugin/docker: NetworkPlugin cni failed on the status hook for pod "virt-launcher-vmi-nocloud-v4j2m_default": Unexpected address output
```

I did not try it with ovn-cni or crio, only flannel.

Version-Release number of selected component (if applicable):

How reproducible:

See the steps below.

Steps to Reproduce:

```bash
$ cluster/kubectl.sh create -f cluster/examples/vmi-nocloud.yaml
$ cluster/kubectl.sh create -f cluster/examples/vmi-slirp.yaml
$ cluster/kubectl.sh get vmis
NAME          AGE   PHASE     IP            NODENAME
vmi-nocloud   15s   Running   10.244.0.32   node01
vmi-slirp     1m    Running   10.244.0.31   node01
$ cluster/cli.sh ssh node01
$ sudo systemctl restart kubelet
$ cluster/kubectl.sh get vmis
NAME          AGE   PHASE     IP            NODENAME
vmi-nocloud   34s   Failed    10.244.0.32   node01
vmi-slirp     1m    Running   10.244.0.31   node01
```

Actual results:

The VMI with the default `bridge` interface binding gets stopped.

Expected results:

No VMI gets restarted if the kubelet restarts.

Additional info:

https://github.com/kubevirt/kubevirt/issues/2076
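For what it's worth, the failing status hook appears to correspond to the kubelet (dockershim) reading the pod IP from the pod interface inside the sandbox's network namespace. Below is a minimal sketch of that check, assuming docker as the runtime; the container filter string and the `eth0` device name are illustrative assumptions, not taken from the report.

```bash
# Sketch: approximate the kubelet's pod-IP lookup for the virt-launcher sandbox.
# The name filter and interface name are assumptions for illustration only.
CID=$(docker ps --filter "name=k8s_POD_virt-launcher-vmi-nocloud" --format '{{.ID}}' | head -n1)
PID=$(docker inspect --format '{{.State.Pid}}' "$CID")
sudo nsenter --net=/proc/"$PID"/ns/net -F -- ip -o -4 addr show dev eth0 scope global
# With the bridge binding the pod IP has been delegated to the VM, so no global
# IPv4 address is reported here, the output cannot be parsed, and the hook fails
# with "Unexpected address output".
```

The slirp VMI keeps its IP on the pod interface, which would explain why it survives the kubelet restart in the reproduction above while the bridge-bound VMI does not.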
Given our plans to eliminate the "bridge" binding mechanism for the pod network (https://jira.coreos.com/browse/KNIP-570), there is no reason to track this bug further.