Bug 1820118 - Kuryr-cni restarts during conformance tests due to namespace not found
Summary: Kuryr-cni restarts during conformance tests due to namespace not found
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 4.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.5.0
Assignee: Peter Hunt
QA Contact: Sunil Choudhary
URL:
Whiteboard:
Depends On:
Blocks: 1825339
 
Reported: 2020-04-02 10:10 UTC by Jon Uriarte
Modified: 2020-07-13 17:25 UTC
CC: 8 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1825339, 1838116
Environment:
Last Closed: 2020-07-13 17:25:01 UTC
Target Upstream Version:
Embargoed:




Links
Red Hat Product Errata RHBA-2020:2409 - Last Updated: 2020-07-13 17:25:22 UTC

Description Jon Uriarte 2020-04-02 10:10:05 UTC
Description of problem:

Kuryr-cni pods are restarted during network policy (NP) tests:

$ oc -n openshift-kuryr get pods
NAME                                   READY   STATUS    RESTARTS   AGE
kuryr-cni-7gtds                        1/1     Running   11         24h
kuryr-cni-h7lqz                        1/1     Running   13         24h
kuryr-cni-hpbrf                        1/1     Running   1          25h
kuryr-cni-smnzg                        1/1     Running   3          25h
kuryr-cni-ttr24                        1/1     Running   1          25h
kuryr-cni-zm5x5                        1/1     Running   15         24h
kuryr-controller-6945fccc87-q5dz9      1/1     Running   15         25h
kuryr-dns-admission-controller-4kxwc   1/1     Running   1          25h
kuryr-dns-admission-controller-6xj2n   1/1     Running   0          25h
kuryr-dns-admission-controller-v2ftz   1/1     Running   0          25h

kuryr-cni logs:
ERROR kuryr_kubernetes.cni.daemon.service [-] Error when processing addNetwork request. CNI Params: {'CNI_IFNAME': 'eth0', 'CNI_NETNS': '/proc/2612326/ns/net', 'CNI_PATH': '/opt/multu[...]', 'CNI_COMMAND': 'ADD', 'CNI_CONTAINERID': 'b3f52d4b77c4b58f112bcfd14dd3f267c6b6456b23a9304f4fa9e01e511b825d', 'CNI_ARGS': 'IgnoreUnknown=true;K8S_POD_NAMESPACE=e2e-daemonsets-2649;K8S_POD_NAME=[...];K8S_POD_INFRA_CONTAINER_ID=b3f52d4b77c4b58f112bcfd14dd3f267c6b6456b23a9304f4fa9e01e511b825d'}: FileNotFoundError: [Errno 2] No such file or directory: b'/host_proc/2612326/ns'
ERROR kuryr_kubernetes.cni.daemon.service Traceback (most recent call last):
ERROR kuryr_kubernetes.cni.daemon.service   File "/usr/lib/python3.6/site-packages/kuryr_kubernetes/cni/daemon/service.py", line 81, in add
ERROR kuryr_kubernetes.cni.daemon.service     vif = self.plugin.add(params)
ERROR kuryr_kubernetes.cni.daemon.service   File "/usr/lib/python3.6/site-packages/kuryr_kubernetes/cni/plugins/k8s_cni_registry.py", line 52, in add
ERROR kuryr_kubernetes.cni.daemon.service     vifs = self._do_work(params, b_base.connect)
ERROR kuryr_kubernetes.cni.daemon.service   File "/usr/lib/python3.6/site-packages/kuryr_kubernetes/cni/plugins/k8s_cni_registry.py", line 160, in _do_work
ERROR kuryr_kubernetes.cni.daemon.service     container_id=params.CNI_CONTAINERID)
ERROR kuryr_kubernetes.cni.daemon.service   File "/usr/lib/python3.6/site-packages/kuryr_kubernetes/cni/binding/base.py", line 151, in connect
ERROR kuryr_kubernetes.cni.daemon.service     driver.connect(vif, ifname, netns, container_id)
ERROR kuryr_kubernetes.cni.daemon.service   File "/usr/lib/python3.6/site-packages/kuryr_kubernetes/cni/binding/nested.py", line 56, in connect
ERROR kuryr_kubernetes.cni.daemon.service     with b_base.get_ipdb(netns) as c_ipdb:
ERROR kuryr_kubernetes.cni.daemon.service   File "/usr/lib/python3.6/site-packages/kuryr_kubernetes/cni/binding/base.py", line 72, in get_ipdb
ERROR kuryr_kubernetes.cni.daemon.service     ipdb = pyroute2.IPDB(nl=pyroute2.NetNS(netns))
ERROR kuryr_kubernetes.cni.daemon.service   File "/usr/lib/python3.6/site-packages/pyroute2/netns/nslink.py", line 172, in __init__
ERROR kuryr_kubernetes.cni.daemon.service     super(NetNS, self).__init__(trnsp_in, trnsp_out)
ERROR kuryr_kubernetes.cni.daemon.service   File "/usr/lib/python3.6/site-packages/pyroute2/iproute/linux.py", line 119, in __init__
ERROR kuryr_kubernetes.cni.daemon.service     super(RTNL_API, self).__init__(*argv, **kwarg)
ERROR kuryr_kubernetes.cni.daemon.service   File "/usr/lib/python3.6/site-packages/pyroute2/remote/__init__.py", line 225, in __init__
ERROR kuryr_kubernetes.cni.daemon.service     raise init['error']
ERROR kuryr_kubernetes.cni.daemon.service FileNotFoundError: [Errno 2] No such file or directory: b'/host_proc/2612326/ns'
ERROR kuryr_kubernetes.cni.daemon.service 
INFO werkzeug [-] 127.0.0.1 - - [01/Apr/2020 20:51:41] "POST /addNetwork HTTP/1.1" 500 - 
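
For illustration, the failing call path can be exercised in isolation. A minimal sketch (assuming only pyroute2 is installed; the pid-based path is the one from the log above), mirroring what get_ipdb() in kuryr_kubernetes/cni/binding/base.py does:

import pyroute2

def get_ipdb(netns_path):
    # Open the pod's network namespace and wrap it in an IPDB handle,
    # as kuryr-daemon does when wiring up a pod interface.
    return pyroute2.IPDB(nl=pyroute2.NetNS(netns_path))

try:
    # A pid-based path like this stops existing the moment the infra
    # container's process is reaped, even if the container is still
    # present in CRI-O's state.
    ipdb = get_ipdb('/host_proc/2612326/ns/net')
except FileNotFoundError as exc:
    print('netns vanished before the CNI ADD was handled:', exc)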

Version-Release number of selected component (if applicable):
OCP 4.4.0-0.nightly-2020-03-31-053841
OSP 13 2020-03-25.1


How reproducible: consistently, when running K8s NP tests

Steps to Reproduce:
1. Deploy OSP 13 or OSP 16
2. Install OCP 4.4 with Kuryr
3. Run the NP tests (see the example invocation just below)
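
A hedged example of such a run from the origin repo (the kubernetes/conformance suite name is the one reported in comment 4; the exact invocation may vary by release):

$ openshift-tests run kubernetes/conformance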

Actual results: kuryr-cni pods restart and show the error in the description

Expected results: kuryr-cni pods shouldn't restart

Comment 1 Jon Uriarte 2020-04-02 10:27:51 UTC
One correction: it's reproduced when running conformance tests (not NP tests).

Comment 2 Michał Dulko 2020-04-02 15:12:07 UTC
I believe this is a problem with cri-o. As you can see in the CNI specification [1], the namespace must exist when the CNI plugin is called. Kuryr is merely an implementation of a CNI plugin; it relies on the guarantees the CNI spec provides being upheld. Analyzing the full log [2], we can see that the namespace no longer existed when the CNI requests were handled.

[1] https://github.com/containernetworking/cni/blob/master/SPEC.md#general-considerations
[2] http://pastebin.test.redhat.com/851168
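
To make the spec's expectation concrete, here is a hypothetical guard (not Kuryr's actual code) that a CNI daemon could apply before touching the namespace:

import os

def assert_netns_exists(netns_path):
    # The CNI spec's general considerations require the container runtime
    # to hand the plugin a network namespace that already exists; if that
    # contract is broken, all the plugin can do is fail loudly.
    if not os.path.exists(netns_path):
        raise FileNotFoundError(
            'netns %s does not exist; the runtime violated the CNI '
            'contract' % netns_path)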

Comment 3 Peter Hunt 2020-04-02 15:19:35 UTC
This is part of what https://github.com/openshift/machine-config-operator/pull/1568 was trying to solve. Currently, CRI-O references the network namespace of a pod by the pid of its infra container. This is inherently racy, and prone to issues where the process has been cleaned up, but the container remains in CRI-O's state. A more reliable way is for CRI-O to manage the namespace lifecycle. 

This capability is in 4.4, but as found, it currently doesn't play well with third party networking plugins, so further work is needed.
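
Roughly, the difference looks like this (both identifiers below are hypothetical, for illustration only):

infra_pid = 2612326            # pid of the pod's infra container
pod_ns_id = 'example-pod-id'   # id CRI-O would use for a managed netns

# Pid-based reference: disappears as soon as the pid is reaped, even though
# the container may still exist in CRI-O's state - the race hit here.
pid_based = '/proc/%d/ns/net' % infra_pid

# Runtime-managed reference: a bind-mounted namespace file whose lifetime
# CRI-O controls explicitly, independent of any process.
managed = '/var/run/crio/ns/%s/net' % pod_ns_id   # layout is illustrative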

Does this happen every test run, or is it intermittent (indicative of a race)?

Comment 4 Jon Uriarte 2020-04-03 07:34:07 UTC
(In reply to Peter Hunt from comment #3)
> This is part of what
> https://github.com/openshift/machine-config-operator/pull/1568 was trying to
> solve. Currently, CRI-O references the network namespace of a pod by the pid
> of its infra container. This is inherently racy, and prone to issues where
> the process has been cleaned up, but the container remains in CRI-O's state.
> A more reliable way is for CRI-O to manage the namespace lifecycle. 
> 
> This capability is in 4.4, but as found, it currently doesn't play well with
> third party networking plugins, so further work is needed.
> 
> Does this happen every test run, or is it intermittent (indicative of a
> race)?

Hi Peter, I could reproduce it each time I ran the kubernetes/conformance tests (280 tests) from
the origin repo. I noticed it because the kuryr-cni pods restarted about 6-8 times during the tests.
I saw it in two different environments, so I believe it is reliably reproducible.

Jon

Comment 5 Michał Dulko 2020-04-03 16:34:25 UTC
(In reply to Peter Hunt from comment #3)
> This is part of what
> https://github.com/openshift/machine-config-operator/pull/1568 was trying to
> solve. Currently, CRI-O references the network namespace of a pod by the pid
> of its infra container. This is inherently racy, and prone to issues where
> the process has been cleaned up, but the container remains in CRI-O's state.
> A more reliable way is for CRI-O to manage the namespace lifecycle. 
> 
> This capability is in 4.4, but as found, it currently doesn't play well with
> third party networking plugins, so further work is needed.
> 
> Does this happen every test run, or is it intermittent (indicative of a
> race)?

Alright, so I guess we'll need to figure out how to make Kuryr work well with that change - we previously had issues getting the SDN pods to access network namespaces when they live in a different directory.

It's all interconnected, it seems!

Comment 6 Peter Hunt 2020-04-03 17:09:46 UTC
Note: I am working on a PR (https://github.com/cri-o/cri-o/pull/3509) that will use /var/run/netns for network namespaces instead of /var/run/crio/ns/. That *should* mean you won't need changes to Kuryr to accommodate CRI-O managing its namespace lifecycle.
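
If that lands, the namespaces would live in the standard iproute2 location, which pyroute2 already resolves bare names against - a sketch (the namespace name is hypothetical):

import pyroute2

# pyroute2 looks bare names up under /var/run/netns, the iproute2
# convention, so no extra host mount or path translation is needed.
ns = pyroute2.NetNS('cni-4f9a1b2c')  # hypothetical namespace name
ns.close()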

Comment 8 Michał Dulko 2020-04-06 15:42:01 UTC
(In reply to Peter Hunt from comment #6)
> Note: I am working on a PR (https://github.com/cri-o/cri-o/pull/3509) that
> will use /var/run/netns for network namespaces, instead of
> /var/run/crio/ns/. That *should* mean you won't need changes to Kuryr to
> accommodate CRI-O managing its namespace lifecycle

I don't think this will help with the crux of the issues we had with the patch putting namespaces into /var/run/crio/ns/. In general we can mount whatever we like from the host into the kuryr-cni containers, and I had a patch mounting /var/run/crio. The problem we hit was with permissions - somehow our code couldn't access the network namespaces due to file permission errors, even though kuryr-daemon runs as root. The SELinux logs also showed no issues (a rough sketch of the kind of check involved is below). Any idea why that could happen?
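
A hypothetical sketch of such a check, run from inside the kuryr-cni container (the namespace path is illustrative):

import os
import stat

NS_FILE = '/var/run/crio/ns/example-pod/net'  # hypothetical managed-netns path

st = os.stat(NS_FILE)
# Compare the mode and ownership the runtime gave the namespace file with
# what the (root) kuryr-daemon process can actually open.
print('mode:', oct(stat.S_IMODE(st.st_mode)), 'uid:', st.st_uid, 'gid:', st.st_gid)
print('readable by this process:', os.access(NS_FILE, os.R_OK))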

Comment 9 Peter Hunt 2020-04-06 17:14:28 UTC
Oop, I know what's happening there:
https://github.com/cri-o/cri-o/blob/33f0cafcd2e81eae9c1be723d6b1ccc44d70838b/pkg/config/config.go#L755
which will also be fixed by the PR changing the location for 1.17: https://github.com/cri-o/cri-o/pull/3530

Comment 10 Peter Hunt 2020-04-08 14:55:41 UTC
So the above PR merged. The status of this bug is as follows:

We are working on getting CRI-O to manage the namespace lifecycle in 4.5. There's one known blocking bug there, which some version of https://github.com/openshift/cluster-network-operator/pull/573 will fix. Once that gets in, and we switch CRI-O over, we're going to let it sit for a while to make sure it does not break anything else.

Once we know we didn't break anyone, we will also make the switch in 4.4.

I would estimate that all of this can be done in the next two weeks.

Comment 14 Peter Hunt 2020-04-17 18:22:59 UTC
Cloned this to 4.4.z; this bug can track the fix for 4.5.

Comment 15 Peter Hunt 2020-05-14 18:18:10 UTC
CRI-O is now managing the namespace lifecycle in 4.5 after https://github.com/openshift/machine-config-operator/pull/1689 merged. Moving this to MODIFIED.

Comment 25 errata-xmlrpc 2020-07-13 17:25:01 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:2409

