Bug 1703881

Summary: [ci] Pod readiness gate test is failing in a few runs
Product: OpenShift Container Platform
Component: Node
Version: 4.1.0
Target Release: 4.1.0
Hardware: Unspecified
OS: Unspecified
Status: CLOSED ERRATA
Severity: high
Priority: unspecified
Reporter: Clayton Coleman <ccoleman>
Assignee: Seth Jennings <sjenning>
QA Contact: Sunil Choudhary <schoudha>
CC: aos-bugs, gblomqui, jokerman, mmccomas, sjenning
Doc Type: No Doc Update
Type: Bug
Last Closed: 2019-06-04 10:48:10 UTC

Description Clayton Coleman 2019-04-29 01:39:09 UTC
https://openshift-gce-devel.appspot.com/build/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-4.1/341#openshift-tests-k8sio-pods-should-support-pod-readiness-gates-nodefeaturepodreadinessgate-suiteopenshiftconformanceparallel-suitek8s

[k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate] [Suite:openshift/conformance/parallel] [Suite:k8s] 1m29s

fail [k8s.io/kubernetes/test/e2e/common/pods.go:750]: Expected error:
    <*errors.errorString | 0xc0002b03c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

This occurs in a few recent runs. It looks like a 10s wait isn't enough; please check what is happening on the node at the time the test fails, in case this is something more dramatic.
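
For reference, the failing test creates a pod whose readiness is gated on two custom conditions: the kubelet reports Ready=True only once the container is ready and both gate conditions have been set to True in the pod status. A minimal sketch of such a pod, using the k8s.io/api types and the names that appear in the logs below (package name and container command are illustrative, not the exact e2e test code):

    package readinessgate

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // readinessGatedPod mirrors the "pod-ready" pod from the e2e log below: a
    // single busybox container plus two custom readiness-gate conditions that
    // must be patched to True before the pod can become Ready.
    func readinessGatedPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-ready"},
            Spec: corev1.PodSpec{
                ReadinessGates: []corev1.PodReadinessGate{
                    {ConditionType: "k8s.io/test-condition1"},
                    {ConditionType: "k8s.io/test-condition2"},
                },
                Containers: []corev1.Container{{
                    Name:    "pod-readiness-gate",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"/bin/sh", "-c", "sleep 10000"},
                }},
            },
        }
    }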

Comment 2 Seth Jennings 2019-04-30 14:31:18 UTC
Full e2e.log of one failure:

[BeforeEach] [k8s.io] Pods
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:132
[It] should support pod readiness gates [NodeFeature:PodReadinessGate] [Suite:openshift/conformance/parallel] [Suite:k8s]
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/common/pods.go:717
STEP: submitting the pod to kubernetes
STEP: patching pod status with condition "k8s.io/test-condition1" to true
STEP: patching pod status with condition "k8s.io/test-condition2" to true
Apr 22 22:32:49.369: INFO: Expect the Ready condition of pod "pod-ready" to be true, but got false
Apr 22 22:32:50.357: INFO: Expect the Ready condition of pod "pod-ready" to be true, but got false
Apr 22 22:32:51.360: INFO: Expect the Ready condition of pod "pod-ready" to be true, but got false
Apr 22 22:32:52.361: INFO: Expect the Ready condition of pod "pod-ready" to be true, but got false
STEP: patching pod status with condition "k8s.io/test-condition1" to false
Apr 22 22:32:54.394: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true
Apr 22 22:32:55.394: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true
Apr 22 22:32:56.397: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true
Apr 22 22:32:57.394: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true
Apr 22 22:32:58.394: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true
Apr 22 22:32:59.402: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true
Apr 22 22:33:00.394: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true
Apr 22 22:33:01.396: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true
Apr 22 22:33:02.403: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true
Apr 22 22:33:03.400: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true
Apr 22 22:33:03.433: INFO: Expect the Ready condition of pod "pod-ready" to be false, but got true
[AfterEach] [k8s.io] Pods
  /go/src/github.com/openshift/origin/_output/local/go/src/github.com/openshift/origin/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:154
STEP: Collecting events from namespace "e2e-tests-pods-dxwt5".
STEP: Found 5 events.
Apr 22 22:33:03.456: INFO: At 2019-04-22 22:32:22 +0000 UTC - event for pod-ready: {default-scheduler } Scheduled: Successfully assigned e2e-tests-pods-dxwt5/pod-ready to ip-10-0-135-231.ec2.internal
Apr 22 22:33:03.456: INFO: At 2019-04-22 22:32:22 +0000 UTC - event for pod-ready: {default-scheduler } FailedScheduling: Binding rejected: Operation cannot be fulfilled on pods/binding "pod-ready": pod pod-ready is already assigned to node "ip-10-0-135-231.ec2.internal"
Apr 22 22:33:03.456: INFO: At 2019-04-22 22:32:31 +0000 UTC - event for pod-ready: {kubelet ip-10-0-135-231.ec2.internal} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
Apr 22 22:33:03.456: INFO: At 2019-04-22 22:32:31 +0000 UTC - event for pod-ready: {kubelet ip-10-0-135-231.ec2.internal} Created: Created container pod-readiness-gate
Apr 22 22:33:03.456: INFO: At 2019-04-22 22:32:31 +0000 UTC - event for pod-ready: {kubelet ip-10-0-135-231.ec2.internal} Started: Started container pod-readiness-gate
Apr 22 22:33:03.518: INFO: skipping dumping cluster info - cluster too large
Apr 22 22:33:03.518: INFO: Waiting up to 3m0s for all (but 100) nodes to be ready
STEP: Destroying namespace "e2e-tests-pods-dxwt5" for this suite.
Apr 22 22:33:49.620: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 22 22:33:51.390: INFO: namespace e2e-tests-pods-dxwt5 deletion completed in 47.836925791s
Apr 22 22:33:51.394: INFO: Running AfterSuite actions on all nodes
Apr 22 22:33:51.394: INFO: Running AfterSuite actions on node 1
fail [k8s.io/kubernetes/test/e2e/common/pods.go:750]: Expected error:
    <*errors.errorString | 0xc4202d4210>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
not to have occurred

Aligned with node logs:

Roughly when the first patch happens:
Apr 22 22:32:48 ip-10-0-135-231 hyperkube[999]: E0422 22:32:48.336909     999 status_manager.go:180] Kubelet is trying to update pod condition "k8s.io/test-condition2" for pod "pod-ready_e2e-tests-pods-dxwt5(7ed5346a-654e-11e9-89d5-121a2aa020c2)". But it is not owned by kubelet.
Apr 22 22:32:48 ip-10-0-135-231 hyperkube[999]: E0422 22:32:48.336952     999 status_manager.go:180] Kubelet is trying to update pod condition "k8s.io/test-condition1" for pod "pod-ready_e2e-tests-pods-dxwt5(7ed5346a-654e-11e9-89d5-121a2aa020c2)". But it is not owned by kubelet.

Roughly when the second patch happens:
Apr 22 22:32:53 ip-10-0-135-231 hyperkube[999]: E0422 22:32:53.375112     999 status_manager.go:180] Kubelet is trying to update pod condition "k8s.io/test-condition2" for pod "pod-ready_e2e-tests-pods-dxwt5(7ed5346a-654e-11e9-89d5-121a2aa020c2)". But it is not owned by kubelet.
Apr 22 22:32:53 ip-10-0-135-231 hyperkube[999]: E0422 22:32:53.375151     999 status_manager.go:180] Kubelet is trying to update pod condition "k8s.io/test-condition1" for pod "pod-ready_e2e-tests-pods-dxwt5(7ed5346a-654e-11e9-89d5-121a2aa020c2)". But it is not owned by kubelet.

After the test has already timed out:
Apr 22 22:33:08 ip-10-0-135-231 hyperkube[999]: E0422 22:33:08.941214     999 status_manager.go:180] Kubelet is trying to update pod condition "k8s.io/test-condition2" for pod "pod-ready_e2e-tests-pods-dxwt5(7ed5346a-654e-11e9-89d5-121a2aa020c2)". But it is not owned by kubelet.
Apr 22 22:33:08 ip-10-0-135-231 hyperkube[999]: E0422 22:33:08.941248     999 status_manager.go:180] Kubelet is trying to update pod condition "k8s.io/test-condition1" for pod "pod-ready_e2e-tests-pods-dxwt5(7ed5346a-654e-11e9-89d5-121a2aa020c2)". But it is not owned by kubelet.
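
Those status_manager messages are the kubelet noting pod conditions it does not own, i.e. the custom readiness-gate conditions, which the test sets from outside the kubelet by patching the pod's status subresource. Roughly, that test-side patch looks like the sketch below (pre-1.18 client-go signatures to match what 4.1 vendors; the helper name is illustrative, not the test's actual code):

    package readinessgate

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // patchReadinessGateCondition sets a custom readiness-gate condition on the
    // pod's status subresource, which is how the test flips
    // k8s.io/test-condition1 and k8s.io/test-condition2 between True and False.
    func patchReadinessGateCondition(c kubernetes.Interface, ns, pod, condType, status string) error {
        patch := fmt.Sprintf(`{"status":{"conditions":[{"type":%q,"status":%q}]}}`, condType, status)
        _, err := c.CoreV1().Pods(ns).Patch(pod, types.StrategicMergePatchType, []byte(patch), "status")
        return err
    }

For example, the step above that patches "k8s.io/test-condition1" to false corresponds to a call like patchReadinessGateCondition(client, "e2e-tests-pods-dxwt5", "pod-ready", "k8s.io/test-condition1", "False").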

Comment 3 Seth Jennings 2019-04-30 15:16:34 UTC
https://github.com/openshift/origin/blob/master/vendor/k8s.io/kubernetes/test/e2e/common/pods.go#L50-L52

It is acknowledged there that a heavier load on the kubelet can delay the status update.

Opening a carry to increase the test timeout.
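
The wait that times out is an ordinary poll on the pod's Ready condition, and the carry simply gives it more headroom. A sketch of that kind of wait, under the same assumptions as above (pre-1.18 client-go; the helper name and values are illustrative, not the constants changed in the PR):

    package readinessgate

    import (
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPodReadyStatus polls once a second until the pod's Ready condition
    // matches the expected value or the timeout expires. The "timed out waiting
    // for the condition" error seen above is the wait package's timeout error;
    // raising the timeout gives a loaded kubelet time to sync the status.
    func waitForPodReadyStatus(c kubernetes.Interface, ns, name string, expect corev1.ConditionStatus, timeout time.Duration) error {
        return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady {
                    return cond.Status == expect, nil
                }
            }
            return false, nil
        })
    }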

Comment 4 Seth Jennings 2019-04-30 15:21:01 UTC
Origin PR:
https://github.com/openshift/origin/pull/22723

Comment 7 Seth Jennings 2019-05-06 16:51:04 UTC
https://testgrid.k8s.io/redhat-openshift-release-blocking#redhat-release-openshift-origin-installer-e2e-aws-4.1&sort-by-flakiness=

This test has not flaked since the fix was merged about a week ago. I consider that verified.

Comment 9 errata-xmlrpc 2019-06-04 10:48:10 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:0758