Bug 2084249 - panic in ovn pod from an e2e-aws-single-node-serial nightly run
Summary: panic in ovn pod from an e2e-aws-single-node-serial nightly run
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Networking
Version: 4.11
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.11.0
Assignee: Surya Seetharaman
QA Contact: Anurag saxena
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2022-05-11 18:14 UTC by Aniket Bhat
Modified: 2022-08-10 11:11 UTC (History)
CC List: 0 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2022-08-10 11:11:18 UTC
Target Upstream Version:
Embargoed:




Links
Github openshift/ovn-kubernetes pull 1090 (open): Bug 2084249: [DownstreamMerge] 5-12-22 (last updated 2022-05-12 19:29:21 UTC)
Github ovn-org/ovn-kubernetes pull 2980 (Merged): Fix a crash on service update check (last updated 2022-05-12 18:08:06 UTC)
Red Hat Product Errata RHSA-2022:5069 (last updated 2022-08-10 11:11:45 UTC)

Description Aniket Bhat 2022-05-11 18:14:26 UTC
Description of problem: panic in ovn pod from an e2e-aws-single-node-serial nightly run

https://prow.ci.openshift.org/view/gs/origin-ci-test/logs/openshift-kubernetes-1252-nightly-4.11-e2e-aws-single-node-serial/1523985587286052864

Crash-log trace: 

E0510 13:08:18.770517   41863 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 116 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x19c5ca0, 0x2e2b6e0})
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x7d
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x98})
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75
panic({0x19c5ca0, 0x2e2b6e0})
	/usr/lib/golang/src/runtime/panic.go:1038 +0x215
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/node.serviceUpdateNotNeeded(0xc000f8c7f0, 0xc0011be4c8)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/node/gateway_shared_intf.go:479 +0x2d9
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/node.(*nodePortWatcher).UpdateService(0xc00059b0e0, 0xc000f8c7f0, 0xc0011be4c8)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/node/gateway_shared_intf.go:513 +0x6d
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/node.(*gateway).UpdateService(0xc000d2bc20, 0x1ed2378, 0x0)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/node/gateway.go:69 +0x87
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/node.(*gateway).Init.func2({0x1c0ca60, 0xc000f8c7f0}, {0x1c0ca60, 0xc0011be4c8})
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/node/gateway.go:151 +0x48
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnUpdate(...)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/client-go/tools/cache/controller.go:238
k8s.io/client-go/tools/cache.FilteringResourceEventHandler.OnUpdate({0xc0009dc3c0, {0x1ee5fa0, 0xc0002078c0}}, {0x1c0ca60, 0xc000f8c7f0}, {0x1c0ca60, 0xc0011be4c8})
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/client-go/tools/cache/controller.go:273 +0xe2
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/factory.(*Handler).OnUpdate(...)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/factory/handler.go:47
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/factory.(*informer).newFederatedHandler.func2.1(0xc00001fc98)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/factory/handler.go:323 +0x3c
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/factory.(*informer).forEachHandler(0xc00002e1b0, {0x1c0ca60, 0xc0011be4c8}, 0xc00001fd60)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/factory/handler.go:110 +0x296
github.com/ovn-org/ovn-kubernetes/go-controller/pkg/factory.(*informer).newFederatedHandler.func2({0x1c0ca60, 0xc000f8c7f0}, {0x1c0ca60, 0xc0011be4c8})
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/pkg/factory/handler.go:322 +0x145
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnUpdate(...)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/client-go/tools/cache/controller.go:238
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/client-go/tools/cache/shared_informer.go:785 +0x127
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x7fe1902100e8)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x67
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00014ff38, {0x1eba860, 0xc000162030}, 0x1, 0xc000136060)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000118de0, 0x3b9aca00, 0x0, 0x80, 0xc00014ff88)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0001ae600)
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/client-go/tools/cache/shared_informer.go:781 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	/go/src/github.com/openshift/ovn-kubernetes/go-controller/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x88
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x16ea5d9]
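
The trace points at serviceUpdateNotNeeded (go-controller/pkg/node/gateway_shared_intf.go:479), which compares the old and new Service objects on each update event before deciding whether gateway flows need to be reprogrammed. Below is a minimal sketch of how comparing an optional pointer field on the Service spec can produce this class of nil pointer dereference; the field used here (InternalTrafficPolicy) is only illustrative, the exact expression that crashed is in the linked upstream fix.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// serviceUpdateNotNeededSketch mimics the shape of an old-vs-new Service
// comparison. Dereferencing an optional pointer field without a nil check
// panics as soon as one of the two objects has the field unset.
func serviceUpdateNotNeededSketch(oldSvc, newSvc *v1.Service) bool {
	// Illustrative bug: oldSvc.Spec.InternalTrafficPolicy is nil here, so this
	// dereference triggers the "invalid memory address or nil pointer
	// dereference" panic seen in the trace above.
	return *oldSvc.Spec.InternalTrafficPolicy == *newSvc.Spec.InternalTrafficPolicy
}

func main() {
	local := v1.ServiceInternalTrafficPolicyLocal
	oldSvc := &v1.Service{} // InternalTrafficPolicy left unset (nil)
	newSvc := &v1.Service{Spec: v1.ServiceSpec{InternalTrafficPolicy: &local}}
	fmt.Println(serviceUpdateNotNeededSketch(oldSvc, newSvc)) // panics
}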


Version-Release number of selected component (if applicable): 4.11 nightly with kube rebase.


How reproducible: Observed at least once in the e2e-aws-single-node-serial nightly job.


Steps to Reproduce:
1. Run the CI job for e2e-aws-single-node-serial.

Actual results:
The ovnkube-node pod crashed with a nil pointer dereference panic in the service update handler.

Expected results:
No crashes should be seen during the e2e tests.

Additional info:
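The linked upstream fix ("Fix a crash on service update check", ovn-org/ovn-kubernetes pull 2980) makes the update check tolerant of unset optional fields. The helper below shows the general shape such a nil-safe comparison takes; it is a sketch for illustration, not the actual patch.

package main

import "fmt"

// equalPtr compares two optional pointer fields without dereferencing nil:
// two unset values are equal, unset vs. set is unequal, and the values are
// only dereferenced once both pointers are known to be non-nil.
func equalPtr[T comparable](a, b *T) bool {
	if a == nil || b == nil {
		return a == b
	}
	return *a == *b
}

func main() {
	local := "Local"
	fmt.Println(equalPtr[string](nil, &local)) // false, no panic
	fmt.Println(equalPtr(&local, &local))      // true
	var unset *string
	fmt.Println(equalPtr(unset, nil))          // true
}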

Comment 5 errata-xmlrpc 2022-08-10 11:11:18 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory (Important: OpenShift Container Platform 4.11.0 bug fix and security update), and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2022:5069

