+++ This bug was initially created as a clone of Bug #1763278 +++

When a namespace is deleted, kuryr-controller is in charge of deleting its associated OpenStack resources (network, subnet and ports) as well as the associated KuryrNet CRD. As removing OpenStack resources may take some time, if kuryr-controller is restarted for any reason during that process, the resources are left behind, because no new events for the (already deleted) namespace arrive after the restart.
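The verification below suggests the fix reconciles on controller startup: any KuryrNet CR whose namespace no longer exists gets cleaned up even though no namespace event will ever arrive. The snippet below is only a minimal sketch of what such an on-start cleanup could look like, not the actual Kuryr implementation; the CRD group/version/plural values and the spec field holding the namespace name ("nsName") are assumptions for illustration.

# Sketch of an on-start reconciliation that removes orphaned KuryrNet CRs.
# Not the real kuryr-controller code; group/version/plural and the
# spec "nsName" field are assumptions.
from kubernetes import client, config

def cleanup_orphaned_kuryrnets():
    config.load_incluster_config()
    core = client.CoreV1Api()
    crd_api = client.CustomObjectsApi()

    # Namespaces that currently exist in the cluster.
    existing_ns = {ns.metadata.name for ns in core.list_namespace().items}

    # KuryrNet CRs are cluster-scoped; list them all.
    kuryrnets = crd_api.list_cluster_custom_object(
        group='openstack.org', version='v1', plural='kuryrnets')

    for kn in kuryrnets.get('items', []):
        ns_name = kn.get('spec', {}).get('nsName')  # assumed field name
        if ns_name and ns_name not in existing_ns:
            # The namespace was deleted while the controller was down:
            # remove the leftover OpenStack net/subnet/ports here (omitted),
            # then delete the orphaned CR itself.
            crd_api.delete_cluster_custom_object(
                group='openstack.org', version='v1', plural='kuryrnets',
                name=kn['metadata']['name'], body=client.V1DeleteOptions())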
Verified on OCP 4.2.0-0.nightly-2019-10-25-021846 build on top of OSP 13 2019-10-01.1 puddle.

release image: registry.svc.ci.openshift.org/ocp/release@sha256:8f97aa21e1c0b2815ec7c86e4138362940a5dcbc292840ab4d6d5b67fedb173f

There are no leftovers after running openshift/origin e2e kubernetes/conformance tests.

It has been manually verified as well - steps:

1. Create a project (namespace):

$ oc new-project e2e-test

2. Check Kuryr resources (kuryrnets and namespace):

$ oc get kuryrnets | grep e2e | wc -l
1
$ oc get namespace | grep e2e | wc -l
1

3. Delete the project (namespace):

$ oc delete project e2e-test

4. Delete the Kuryr controller pod:

$ oc -n openshift-kuryr delete pod kuryr-controller-6dfb5c77c9-gsx8m

5. Check Kuryr resources (kuryrnets and namespace):

$ oc get kuryrnets | grep e2e | wc -l
1
$ oc get namespace | grep e2e | wc -l
0

The kuryrnet was not deleted before the pod was deleted.

6. Check the pod is started:

(shiftstack) [stack@undercloud-0 ~]$ oc -n openshift-kuryr get pods
NAME                                   READY   STATUS    RESTARTS   AGE
kuryr-cni-7l2hg                        1/1     Running   1          5h24m
kuryr-cni-hfv6m                        1/1     Running   0          5h39m
kuryr-cni-ngwb2                        1/1     Running   3          5h24m
kuryr-cni-r6rc8                        1/1     Running   3          5h24m
kuryr-cni-wfhmv                        1/1     Running   0          5h39m
kuryr-cni-x9g4k                        1/1     Running   0          5h39m
kuryr-controller-6dfb5c77c9-w7hqx      0/1     Running   0          20s
kuryr-dns-admission-controller-hfxt6   1/1     Running   0          5h39m
kuryr-dns-admission-controller-p6qzz   1/1     Running   0          5h39m
kuryr-dns-admission-controller-x9qcj   1/1     Running   0          5h39m

7. Check the kuryrnet was deleted:

$ oc get kuryrnets | grep e2e | wc -l
0
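Besides checking the KuryrNet CRs, one can also confirm nothing was left behind on the OpenStack side (network, subnet, ports). The following is a minimal sketch using openstacksdk; the cloud name ("shiftstack") and the assumption that the namespace name appears in the resource names are illustrative only and not taken from the actual Kuryr naming scheme.

# Sketch of a leftover check with openstacksdk (not part of the
# verification above). Assumes a clouds.yaml entry named "shiftstack"
# and that the namespace name is embedded in the resource names;
# both are assumptions for illustration.
import openstack

def find_leftovers(namespace='e2e-test', cloud='shiftstack'):
    conn = openstack.connect(cloud=cloud)
    nets = [n.name for n in conn.network.networks() if namespace in (n.name or '')]
    subnets = [s.name for s in conn.network.subnets() if namespace in (s.name or '')]
    ports = [p.name for p in conn.network.ports() if namespace in (p.name or '')]
    return nets + subnets + ports

if __name__ == '__main__':
    leftovers = find_leftovers()
    print('leftovers:', leftovers or 'none')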
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA.

For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:3303