Bug 1906741 - KeyError: 'nodeName' on NP deletion
Summary: KeyError: 'nodeName' on NP deletion
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.6
Hardware: Unspecified
OS: Unspecified
Priority: medium
Severity: medium
Target Milestone: ---
Target Release: 4.6.z
Assignee: Michał Dulko
QA Contact: GenadiC
URL:
Whiteboard:
Depends On: 1904973
Blocks:
Reported: 2020-12-11 10:48 UTC by OpenShift BugZilla Robot
Modified: 2021-02-08 13:51 UTC
CC: 2 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-02-08 13:50:52 UTC
Target Upstream Version:


Attachments


Links
- Github: openshift/kuryr-kubernetes pull 428 (closed) - [release-4.6] Bug 1906741: Skip unscheduled pods when deleting NPs - Last Updated 2021-02-05 18:33:43 UTC
- Red Hat Product Errata: RHSA-2021:0308 - Last Updated 2021-02-08 13:51:10 UTC

Description OpenShift BugZilla Robot 2020-12-11 10:48:01 UTC
+++ This bug was initially created as a clone of Bug #1904973 +++

Description of problem:
The following traceback can occur when an NP is deleted while pods matching it are still unscheduled:

2020-11-04 13:41:33.559 19857 ERROR kuryr_kubernetes.handlers.logging Traceback (most recent call last):
2020-11-04 13:41:33.559 19857 ERROR kuryr_kubernetes.handlers.logging File "/opt/stack/kuryr-kubernetes/kuryr_kubernetes/handlers/logging.py", line 37, in __call__
2020-11-04 13:41:33.559 19857 ERROR kuryr_kubernetes.handlers.logging self._handler(event, *args, **kwargs)
2020-11-04 13:41:33.559 19857 ERROR kuryr_kubernetes.handlers.logging File "/opt/stack/kuryr-kubernetes/kuryr_kubernetes/handlers/retry.py", line 81, in __call__
2020-11-04 13:41:33.559 19857 ERROR kuryr_kubernetes.handlers.logging self._handler(event, *args, **kwargs)
2020-11-04 13:41:33.559 19857 ERROR kuryr_kubernetes.handlers.logging File "/opt/stack/kuryr-kubernetes/kuryr_kubernetes/handlers/k8s_base.py", line 81, in __call__
2020-11-04 13:41:33.559 19857 ERROR kuryr_kubernetes.handlers.logging self.on_finalize(obj)
2020-11-04 13:41:33.559 19857 ERROR kuryr_kubernetes.handlers.logging File "/opt/stack/kuryr-kubernetes/kuryr_kubernetes/controller/handlers/kuryrnetworkpolicy.py", line 288, in on_finalize
2020-11-04 13:41:33.559 19857 ERROR kuryr_kubernetes.handlers.logging self._drv_vif_pool.update_vif_sgs(pod, pod_sgs)
2020-11-04 13:41:33.559 19857 ERROR kuryr_kubernetes.handlers.logging File "/opt/stack/kuryr-kubernetes/kuryr_kubernetes/controller/drivers/vif_pool.py", line 1224, in update_vif_sgs
2020-11-04 13:41:33.559 19857 ERROR kuryr_kubernetes.handlers.logging pod_vif_type = self._get_pod_vif_type(pod)
2020-11-04 13:41:33.559 19857 ERROR kuryr_kubernetes.handlers.logging File "/opt/stack/kuryr-kubernetes/kuryr_kubernetes/controller/drivers/vif_pool.py", line 1244, in _get_pod_vif_type
2020-11-04 13:41:33.559 19857 ERROR kuryr_kubernetes.handlers.logging node_name = pod['spec']['nodeName']
2020-11-04 13:41:33.559 19857 ERROR kuryr_kubernetes.handlers.logging KeyError: 'nodeName'
2020-11-04 13:41:33.559 19857 ERROR kuryr_kubernetes.handlers.logging
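
The failing access is _get_pod_vif_type() indexing pod['spec']['nodeName'] directly; a pod that was never scheduled has no spec.nodeName at all. The linked PR ("Skip unscheduled pods when deleting NPs") makes the NP finalizer skip such pods. A minimal sketch of that kind of guard (the helper name is illustrative, not the actual kuryr-kubernetes code):

def is_pod_scheduled(pod):
    # A Pending pod the scheduler never placed has no spec.nodeName,
    # so pod['spec']['nodeName'] raises KeyError; .get() returns None.
    return pod.get('spec', {}).get('nodeName') is not None

# Example with pod dicts shaped like Kubernetes API objects:
pending_pod = {'metadata': {'name': 'demo'}, 'spec': {}}
scheduled_pod = {'metadata': {'name': 'demo'},
                 'spec': {'nodeName': 'worker-0'}}
assert not is_pod_scheduled(pending_pod)
assert is_pod_scheduled(scheduled_pod)

With a check like this in on_finalize(), update_vif_sgs() is simply not called for unscheduled pods, which have no VIF to update yet.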

Version-Release number of selected component (if applicable):


How reproducible:
Always, given the reproduction steps below.

Steps to Reproduce:
1. Disable kube-scheduler.
2. Create an NP.
3. Create a pod that matches this NP. Verify that it stays Pending (unscheduled) because kube-scheduler is disabled; a programmatic check is sketched after these steps.
4. Delete the NP from step 2.
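
For step 3, one way to confirm programmatically that the pod never got scheduled, sketched with the official kubernetes Python client (pod and namespace names are taken from the verification in comment 2; any Pending pod works):

from kubernetes import client, config

config.load_kube_config()  # assumes a working kubeconfig
v1 = client.CoreV1Api()

pod = v1.read_namespaced_pod(name='demo', namespace='test')
print(pod.status.phase)    # expected: Pending
print(pod.spec.node_name)  # expected: None, i.e. never scheduled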

Actual results:
Traceback in kuryr-controller logs.

Expected results:
The unscheduled pod is ignored and no traceback is logged.

Additional info:

Comment 2 rlobillo 2021-01-28 11:41:11 UTC
Verified on 4.6.0-0.nightly-2021-01-28-042639 over OSP16.1 (RHOS-16.1-RHEL-8-20201214.n.3) with OVN-Octavia.

#1. Disabling kube-scheduler (it runs as a static pod on each master, so moving its manifest out of /etc/kubernetes/manifests stops it):

$ openstack server list
+--------------------------------------+-----------------------------+--------+-------------------------------------+--------------------+--------+
| ID                                   | Name                        | Status | Networks                            | Image              | Flavor |
+--------------------------------------+-----------------------------+--------+-------------------------------------+--------------------+--------+
| 5e9b29f5-5ff1-48ae-bd3d-9bbf74841ac1 | ostest-mjts6-worker-0-tnz55 | ACTIVE | ostest-mjts6-openshift=10.196.0.216 | ostest-mjts6-rhcos |        |
| 30e46a58-9627-4258-b2b0-23f742323e56 | ostest-mjts6-worker-0-n9bbl | ACTIVE | ostest-mjts6-openshift=10.196.0.162 | ostest-mjts6-rhcos |        |
| 0c1058f1-f22b-4791-85b6-a6750a494d7d | ostest-mjts6-worker-0-2z569 | ACTIVE | ostest-mjts6-openshift=10.196.2.57  | ostest-mjts6-rhcos |        |
| 0b920580-74a0-4bfb-aec7-4eeb756e15c2 | ostest-mjts6-master-1       | ACTIVE | ostest-mjts6-openshift=10.196.0.57  | ostest-mjts6-rhcos |        |
| 1084b85d-6247-4997-914c-b1ef79b53a55 | ostest-mjts6-master-0       | ACTIVE | ostest-mjts6-openshift=10.196.2.163 | ostest-mjts6-rhcos |        |
| 8142f072-d805-460a-ab73-edd63b2fc9df | ostest-mjts6-master-2       | ACTIVE | ostest-mjts6-openshift=10.196.3.114 | ostest-mjts6-rhcos |        |
+--------------------------------------+-----------------------------+--------+-------------------------------------+--------------------+--------+

$ ssh -J core.22.103 core@10.196.2.163 sudo mv /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp/
$ ssh -J core.22.103 core@10.196.0.57 sudo mv /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp/
$ ssh -J core.22.103 core@10.196.3.114 sudo mv /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp/

#2. Create the NP and the pods:

$ oc new-project test
$ cat np_resource.yaml 
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: np
spec:
  podSelector:
    matchLabels:
      run: demo
  ingress:
  - from:
    - podSelector:
        matchLabels:
          run: demo-allowed-caller
$ oc apply -f np_resource.yaml
$ oc run --image kuryr/demo demo-allowed-caller
$ oc run --image kuryr/demo demo

#3. Pods stuck in Pending, but no new kuryr-controller restarts:

$ oc get pods -n openshift-kuryr
NAME                               READY   STATUS    RESTARTS   AGE
kuryr-cni-d8t7r                    1/1     Running   0          68m
kuryr-cni-fbwtb                    1/1     Running   0          48m
kuryr-cni-gf2td                    1/1     Running   0          68m
kuryr-cni-k9vfz                    1/1     Running   0          47m
kuryr-cni-l88nt                    1/1     Running   0          68m
kuryr-cni-nrwbs                    1/1     Running   0          47m
kuryr-controller-ddb697794-69b8l   1/1     Running   1          68m
(shiftstack) [stack@undercloud-0 ~]$ oc logs -n openshift-kuryr kuryr-controller-ddb697794-69b8l | grep -i KeyError
(shiftstack) [stack@undercloud-0 ~]$ oc get all
NAME                      READY   STATUS    RESTARTS   AGE
pod/demo                  0/1     Pending   0          35s
pod/demo-allowed-caller   0/1     Pending   0          41s


#4. Restoring kube-scheduler:

$ ssh -J core.22.103 core@10.196.2.163 sudo mv /tmp/kube-scheduler-pod.yaml /etc/kubernetes/manifests/
$ ssh -J core.22.103 core@10.196.0.57 sudo mv /tmp/kube-scheduler-pod.yaml /etc/kubernetes/manifests/
$ ssh -J core.22.103 core@10.196.3.114 sudo mv /tmp/kube-scheduler-pod.yaml /etc/kubernetes/manifests/

#5. Pods are deployed and behave normally; no new kuryr-controller restarts:

$ oc get all
NAME                      READY   STATUS    RESTARTS   AGE
pod/demo                  1/1     Running   0          2m21s
pod/demo-allowed-caller   1/1     Running   0          2m27s

$ oc expose pod/demo --port 80 --target-port 8080
service/demo exposed

$ oc get all
NAME                      READY   STATUS    RESTARTS   AGE
pod/demo                  1/1     Running   0          2m40s
pod/demo-allowed-caller   1/1     Running   0          2m46s

NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/demo   ClusterIP   172.30.59.184   <none>        80/TCP    7s
(shiftstack) [stack@undercloud-0 ~]$ oc rsh pod/demo-allowed-caller curl 172.30.59.184
^Ccommand terminated with exit code 130
(shiftstack) [stack@undercloud-0 ~]$ oc rsh pod/demo-allowed-caller curl 172.30.59.184
demo: HELLO! I AM ALIVE!!!

$ oc logs -n openshift-kuryr kuryr-controller-ddb697794-69b8l | grep -i KeyError
$ oc get pods -n openshift-kuryr
NAME                               READY   STATUS    RESTARTS   AGE
kuryr-cni-d8t7r                    1/1     Running   0          73m
kuryr-cni-fbwtb                    1/1     Running   0          52m
kuryr-cni-gf2td                    1/1     Running   0          73m
kuryr-cni-k9vfz                    1/1     Running   0          52m
kuryr-cni-l88nt                    1/1     Running   0          73m
kuryr-cni-nrwbs                    1/1     Running   0          51m
kuryr-controller-ddb697794-69b8l   1/1     Running   1          73m

Comment 5 errata-xmlrpc 2021-02-08 13:50:52 UTC
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.6.16 security and bug fix update), and where to find the updated files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2021:0308

