Description of problem:

Restarting sdn-controller with existing egress IP assignments causes the pod to crash with the following stack trace:

I0103 11:54:57.180079 1 egressip.go:314] CloudPrivateIPConfig: 10.0.150.25 is being moved and still exists, enqueuing its creation
E0103 11:54:57.180152 1 runtime.go:78] Observed a panic: "assignment to entry in nil map" (assignment to entry in nil map)
goroutine 111 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1756ce0, 0x1b987f0})
	k8s.io/apimachinery.0-rc.0/pkg/util/runtime/runtime.go:74 +0x85
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0001a80c0})
	k8s.io/apimachinery.0-rc.0/pkg/util/runtime/runtime.go:48 +0x75
panic({0x1756ce0, 0x1b987f0})
	runtime/panic.go:1038 +0x215
github.com/openshift/sdn/pkg/network/master.(*egressIPManager).ClaimEgressIP(0xc000118000, 0x2, {0xc0006b2910, 0xb}, {0xc0006b2920, 0x0}, {0x0, 0x0})
	github.com/openshift/sdn/pkg/network/master/egressip.go:315 +0x3bc
github.com/openshift/sdn/pkg/network/common.(*EgressIPTracker).syncEgressNodeState(0xc000560000, 0xc0005a61c0, 0x18)
	github.com/openshift/sdn/pkg/network/common/egressip.go:625 +0x471
github.com/openshift/sdn/pkg/network/common.(*EgressIPTracker).syncEgressIPs(0xc000560000)
	github.com/openshift/sdn/pkg/network/common/egressip.go:600 +0xeb
github.com/openshift/sdn/pkg/network/common.(*EgressIPTracker).UpdateNetNamespaceEgress(0xc000560000, 0xc000612f68)
	github.com/openshift/sdn/pkg/network/common/egressip.go:560 +0x426
github.com/openshift/sdn/pkg/network/common.(*EgressIPTracker).handleAddOrUpdateNetNamespace(0x0, {0x195b860, 0xc000612f68}, {0x100000000000000, 0x0}, {0x19841b8, 0x5})
	github.com/openshift/sdn/pkg/network/common/egressip.go:512 +0x1cc
github.com/openshift/sdn/pkg/network/common.InformerFuncs.func1({0x195b860, 0xc000612f68})
	github.com/openshift/sdn/pkg/network/common/informers.go:19 +0x39
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
	k8s.io/client-go.0-rc.0/tools/cache/controller.go:231
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	k8s.io/client-go.0-rc.0/tools/cache/shared_informer.go:777 +0x9f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fdc2c166ba8)
	k8s.io/apimachinery.0-rc.0/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00066d738, {0x1ba1cc0, 0xc00051c030}, 0x1, 0xc0001822a0)
	k8s.io/apimachinery.0-rc.0/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0, 0x3b9aca00, 0x0, 0x0, 0x0)
	k8s.io/apimachinery.0-rc.0/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
	k8s.io/apimachinery.0-rc.0/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc000178380)
	k8s.io/client-go.0-rc.0/tools/cache/shared_informer.go:771 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	k8s.io/apimachinery.0-rc.0/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	k8s.io/apimachinery.0-rc.0/pkg/util/wait/wait.go:71 +0x88
panic: assignment to entry in nil map [recovered]
	panic: assignment to entry in nil map

goroutine 111 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc0001a80c0})
	k8s.io/apimachinery.0-rc.0/pkg/util/runtime/runtime.go:55 +0xd8
panic({0x1756ce0, 0x1b987f0})
	runtime/panic.go:1038 +0x215
github.com/openshift/sdn/pkg/network/master.(*egressIPManager).ClaimEgressIP(0xc000118000, 0x2, {0xc0006b2910, 0xb}, {0xc0006b2920, 0x0}, {0x0, 0x0})
	github.com/openshift/sdn/pkg/network/master/egressip.go:315 +0x3bc
github.com/openshift/sdn/pkg/network/common.(*EgressIPTracker).syncEgressNodeState(0xc000560000, 0xc0005a61c0, 0x18)
	github.com/openshift/sdn/pkg/network/common/egressip.go:625 +0x471
github.com/openshift/sdn/pkg/network/common.(*EgressIPTracker).syncEgressIPs(0xc000560000)
	github.com/openshift/sdn/pkg/network/common/egressip.go:600 +0xeb
github.com/openshift/sdn/pkg/network/common.(*EgressIPTracker).UpdateNetNamespaceEgress(0xc000560000, 0xc000612f68)
	github.com/openshift/sdn/pkg/network/common/egressip.go:560 +0x426
github.com/openshift/sdn/pkg/network/common.(*EgressIPTracker).handleAddOrUpdateNetNamespace(0x0, {0x195b860, 0xc000612f68}, {0x100000000000000, 0x0}, {0x19841b8, 0x5})
	github.com/openshift/sdn/pkg/network/common/egressip.go:512 +0x1cc
github.com/openshift/sdn/pkg/network/common.InformerFuncs.func1({0x195b860, 0xc000612f68})
	github.com/openshift/sdn/pkg/network/common/informers.go:19 +0x39
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
	k8s.io/client-go.0-rc.0/tools/cache/controller.go:231
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	k8s.io/client-go.0-rc.0/tools/cache/shared_informer.go:777 +0x9f
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fdc2c166ba8)
	k8s.io/apimachinery.0-rc.0/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00066d738, {0x1ba1cc0, 0xc00051c030}, 0x1, 0xc0001822a0)
	k8s.io/apimachinery.0-rc.0/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0, 0x3b9aca00, 0x0, 0x0, 0x0)
	k8s.io/apimachinery.0-rc.0/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
	k8s.io/apimachinery.0-rc.0/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc000178380)
	k8s.io/client-go.0-rc.0/tools/cache/shared_informer.go:771 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	k8s.io/apimachinery.0-rc.0/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	k8s.io/apimachinery.0-rc.0/pkg/util/wait/wait.go:71 +0x88

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Perform egress IP assignments on a public cloud (ex: AWS)
2. Restart the leading sdn-controller pod (ex: by deleting it)
3.
Actual results:
The sdn-controller pod crashloops.

Expected results:
The sdn-controller pod restarts cleanly without crashlooping.

Additional info:
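For context on the failure mode: the panic message "assignment to entry in nil map" is what Go raises when a map field that was never initialized is written to. The sketch below is a hypothetical, minimal reproduction of that pattern (the type and field names are illustrative stand-ins, not the actual openshift/sdn code): a manager struct whose map is only populated on the normal startup path, so a restart-time resync that calls the claim path first hits a nil map. The guard shown is the generic defensive fix for this class of bug.

```go
package main

import "fmt"

// egressIPManager is a stand-in for a controller that tracks
// egress IP assignments; the real code and names differ.
type egressIPManager struct {
	// assignments is nil until explicitly initialized. Writing to it
	// while nil panics with "assignment to entry in nil map".
	assignments map[string]string
}

// claim records that ip is assigned to node. The nil check makes the
// write safe even when the resync path runs before any initializer.
func (m *egressIPManager) claim(node, ip string) {
	if m.assignments == nil {
		m.assignments = make(map[string]string)
	}
	m.assignments[ip] = node
}

func main() {
	// Simulates the post-restart state: the struct exists but the
	// map was never made, as when pre-existing egress IPs are resynced.
	m := &egressIPManager{}
	m.claim("node1", "10.0.150.25")
	fmt.Println(m.assignments["10.0.150.25"]) // prints "node1"
}
```

Without the `== nil` guard (or a `make` in a constructor), the first `m.assignments[ip] = node` on the resync path panics exactly as in the trace above; reading from a nil map is safe in Go, only writes panic.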
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory (Moderate: OpenShift Container Platform 4.10.3 security update), and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHSA-2022:0056