Bug 1744422 - Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Node
Version: 4.2.0
Hardware: x86_64
OS: Unspecified
Priority: unspecified
Severity: medium
Target Milestone: ---
Target Release: 4.2.0
Assignee: Ryan Phillips
QA Contact: Sunil Choudhary
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-08-22 07:13 UTC by ge liu
Modified: 2019-10-16 06:37 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-10-16 06:37:03 UTC
Target Upstream Version:
Embargoed:




Links:
Github openshift/origin pull 23733 (last updated 2019-09-05 18:34:07 UTC)
Red Hat Product Errata RHBA-2019:2922 (last updated 2019-10-16 06:37:11 UTC)

Description ge liu 2019-08-22 07:13:53 UTC
Description of problem:
The test failed in this CI job:
https://prow.k8s.io/view/gcs/origin-ci-test/logs/canary-openshift-ocp-installer-e2e-azure-4.2/66

https://storage.googleapis.com/origin-ci-test/logs/canary-openshift-ocp-installer-e2e-azure-4.2/66/build-log.txt

Failed case:

failed: (1m36s) 2019-08-22T03:23:03 "[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] [Suite:openshift/conformance/parallel/minimal] [Suite:k8s]"


Failure output:

Aug 22 03:23:03.012: INFO: Running AfterSuite actions on node 1
fail [k8s.io/kubernetes/test/e2e/common/kubelet.go:123]: Timed out after 60.000s.
Expected
    <*errors.errorString | 0xc0040a2130>: {
        s: "expected state to be terminated. Got pod status: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-08-22 03:21:34 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-08-22 03:21:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsef181985d-c48b-11e9-adbb-0a58ac10d6ba]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-08-22 03:21:34 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsef181985d-c48b-11e9-adbb-0a58ac10d6ba]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-08-22 03:21:34 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.0.32.4 PodIP: StartTime:2019-08-22 03:21:34 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:bin-falsef181985d-c48b-11e9-adbb-0a58ac10d6ba State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:docker.io/library/busybox:1.29 ImageID: ContainerID:}] QOSClass:BestEffort}",
    }
to be nil
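
For context, the assertion that times out is essentially a poll of the pod's container statuses, waiting up to 60s for a Terminated state with a non-empty Reason. Below is a minimal client-go sketch of that check, not the e2e framework code itself: the pod name, namespace, kubeconfig path, and RestartPolicy: Never are illustrative choices; only the image and command mirror the test.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	ctx := context.Background()

	// Same image and command as the e2e test; name/namespace are illustrative.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false-demo",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/false"}, // always exits non-zero
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// Poll for up to 60s (the test's timeout) for a Terminated state with a
	// non-empty Reason. The flake in this bug is that the state never shows
	// up: the container sits in Waiting/ContainerCreating the whole time.
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		p, err := client.CoreV1().Pods("default").Get(ctx, "bin-false-demo", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cs := range p.Status.ContainerStatuses {
			if t := cs.State.Terminated; t != nil && t.Reason != "" {
				fmt.Printf("terminated: reason=%s exitCode=%d\n", t.Reason, t.ExitCode)
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for a terminated state")
}

On a healthy node this should print reason=Error with exit code 1 within a few seconds of the container starting; in the failing run above, the container never left Waiting/ContainerCreating inside the 60s window.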


How reproducible:
occasionally

Comment 1 Ryan Phillips 2019-09-04 15:05:28 UTC
Upstream PR: https://github.com/kubernetes/kubernetes/pull/82335

Comment 4 errata-xmlrpc 2019-10-16 06:37:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2019:2922

