Note: This bug is displayed in read-only format because the product is no longer active in Red Hat Bugzilla.

Bug 1821352

Summary: Uncaught Ginkgo testsuite failure in ExpectNoDisruption()
Product: OpenShift Container Platform
Component: openshift-apiserver
Version: 4.5
Reporter: Dan Williams <dcbw>
Assignee: Clayton Coleman <ccoleman>
QA Contact: Xingxing Xia <xxia>
CC: aos-bugs, eparis, jokerman, mfojtik, mnewby, sttts
Status: CLOSED WORKSFORME
Severity: low
Priority: medium
Target Milestone: ---
Target Release: 4.5.0
Hardware: Unspecified
OS: Unspecified
Type: Bug
Doc Type: If docs needed, set a value
Last Closed: 2020-05-19 20:32:22 UTC

Description Dan Williams 2020-04-06 16:11:43 UTC
https://storage.googleapis.com/origin-ci-test/pr-logs/pull/24824/pull-ci-openshift-origin-master-e2e-gcp-upgrade/3501/build-log.txt

Apr 05 15:44:21.352 I kube-apiserver Kube API started responding to GET requests
Apr 05 16:04:43.766 E kube-apiserver Kube API started failing: Get https://api.ci-op-7h9hxwn4-db044.origin-ci-int-gce.dev.openshift.com:6443/api/v1/namespaces/kube-system?timeout=15s: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Apr 05 16:04:43.766 I openshift-apiserver OpenShift API stopped responding to GET requests: Get https://api.ci-op-7h9hxwn4-db044.origin-ci-int-gce.dev.openshift.com:6443/apis/image.openshift.io/v1/namespaces/openshift-apiserver/imagestreams/missing?timeout=15s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
Apr 05 16:04:44.766 - 194s  E kube-apiserver Kube API is not responding to GET requests
Apr 05 16:04:44.766 - 194s  E openshift-apiserver OpenShift API is not responding to GET requests
Apr 05 16:07:59.095 I kube-apiserver Kube API started responding to GET requests
Apr 05 16:07:59.095 I openshift-apiserver OpenShift API started responding to GET requests

Full Stack Trace
github.com/openshift/origin/test/extended/util/disruption.ExpectNoDisruption(0xc001217a20, 0x3fb47ae147ae147b, 0x22d04f532d0, 0xc001354000, 0xc, 0x1ca, 0x573b9df, 0x25)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/disruption/disruption.go:237 +0x68b
github.com/openshift/origin/test/extended/util/disruption/controlplane.(*AvailableTest).Test(0x9b2c7f8, 0xc001217a20, 0xc000c45c80, 0x2)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/disruption/controlplane/controlplane.go:45 +0x2e0
github.com/openshift/origin/test/extended/util/disruption.(*chaosMonkeyAdapter).Test(0xc001f517c0, 0xc00206e5a0)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/test/extended/util/disruption/disruption.go:145 +0x38b
github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc00206e5a0, 0xc00206c630)
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x93
created by github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xab
STEP: Wait for service to hasFinalizer=true
Apr  5 16:16:57.146: INFO: recover: 
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.

But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

	defer GinkgoRecover()

at the top of the goroutine that caused this panic.

Apr  5 16:16:57.146: INFO: "Kubernetes and OpenShift APIs remain available": panic: 
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.

But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call

	defer GinkgoRecover()

at the top of the goroutine that caused this panic.

STEP: Check that service can be deleted with finalizer
STEP: Delete service with finalizer
STEP: Wait for service to disappear

Comment 1 Steve Kuznetsov 2020-04-22 14:48:55 UTC
The test infrastructure team does not own what looks like core k8s tests. Please find an appropriate owner.

Comment 3 Maru Newby 2020-05-19 20:32:22 UTC
The ginkgo bump included in the 1.18.2 rebase [1] should resolve the reported issue.

Comment 4 Red Hat Bugzilla 2023-09-14 05:55:07 UTC
The needinfo request[s] on this closed bug have been removed as they have been unresolved for 1000 days.