Bug 1744909 - [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]
Summary: [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] [Suite:openshift/conformance/serial/minimal] [Suite:k8s]
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.2.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 4.3.0
Assignee: Jan Safranek
QA Contact: Liang Xia
URL:
Whiteboard:
Depends On:
Blocks: 1761391
 
Reported: 2019-08-23 08:26 UTC by scheng
Modified: 2020-01-23 11:05 UTC
CC: 9 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Clones: 1761391
Environment:
Last Closed: 2020-01-23 11:05:22 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Github openshift origin pull 23842 0 None closed Bug 1744909: UPSTREAM: 82245: Don't require any resources in race test 2020-04-29 16:05:16 UTC
Red Hat Product Errata RHBA-2020:0062 0 None None None 2020-01-23 11:05:41 UTC

Description scheng 2019-08-23 08:26:59 UTC
Description of problem:
Failed job:
https://prow.k8s.io/view/gcs/origin-ci-test/logs/canary-openshift-ocp-installer-e2e-azure-serial-4.2/32

Failed error:
STEP: Destroying namespace "e2e-emptydir-wrapper-6556" for this suite.
Aug 23 06:07:05.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 23 06:07:09.139: INFO: namespace e2e-emptydir-wrapper-6556 deletion completed in 10.061597118s
Aug 23 06:07:09.142: INFO: Running AfterSuite actions on all nodes
Aug 23 06:07:09.143: INFO: Running AfterSuite actions on node 1
fail [k8s.io/kubernetes/test/e2e/storage/empty_dir_wrapper.go:424]: Failed waiting for pod wrapped-volume-race-6c59a4f6-c56b-11e9-83b5-0a58ac10efb0-2rbsj to enter running state
Unexpected error:
    <*errors.errorString | 0xc0002713f0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
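
The "timed out waiting for the condition" message is the generic error the apimachinery wait helpers return when a polled condition never becomes true within the timeout. A minimal sketch of the kind of wait the e2e test performs at this point (assuming a recent client-go; pollPodRunning is a hypothetical helper, not the test's actual code):

package e2eutil

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// pollPodRunning is a hypothetical helper: it polls the pod until it
// reaches the Running phase or the timeout expires. On timeout,
// wait.PollImmediate returns wait.ErrWaitTimeout, whose message is
// exactly "timed out waiting for the condition" -- the error seen in
// the failure above.
func pollPodRunning(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // give up on API errors
		}
		return pod.Status.Phase == corev1.PodRunning, nil
	})
}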

Aug 23 06:01:26.017 I ns/openshift-machine-api machine/ci-op-4inqf4cw-3a8ca-666zl-worker-centralus2-hhwhq Deleted machine "ci-op-4inqf4cw-3a8ca-666zl-worker-centralus2-hhwhq"
Aug 23 06:02:20.386 W clusteroperator/monitoring changed Upgradeable to False: RollOutInProgress: Rollout of the monitoring stack is in progress. Please wait until it finishes.
Aug 23 06:02:46.002 W clusteroperator/monitoring changed Upgradeable to True
Aug 23 06:05:04.446 W clusteroperator/monitoring changed Upgradeable to False: RollOutInProgress: Rollout of the monitoring stack is in progress. Please wait until it finishes.
Aug 23 06:05:30.306 W clusteroperator/monitoring changed Upgradeable to True
Aug 23 06:05:30.343 W clusteroperator/monitoring changed Upgradeable to False: RollOutInProgress: Rollout of the monitoring stack is in progress. Please wait until it finishes.
Aug 23 06:05:56.132 W clusteroperator/monitoring changed Upgradeable to True


Version-Release number of selected component (if applicable):
redhat-canary-openshift-ocp-installer-e2e-azure-serial-4.2

How reproducible:
Sometimes

Comment 1 Jan Safranek 2019-09-02 16:00:41 UTC
I noticed some test pods are not scheduled:

Aug 23 06:06:58.901: INFO: At 2019-08-23 06:01:18 +0000 UTC - event for wrapped-volume-race-6c59a4f6-c56b-11e9-83b5-0a58ac10efb0-2rbsj: {default-scheduler } FailedScheduling: 0/6 nodes are available: 1 Insufficient cpu, 5 node(s) didn't match node selector.
Aug 23 06:06:58.901: INFO: At 2019-08-23 06:01:18 +0000 UTC - event for wrapped-volume-race-6c59a4f6-c56b-11e9-83b5-0a58ac10efb0-8kh48: {default-scheduler } Scheduled: Successfully assigned e2e-emptydir-wrapper-6556/wrapped-volume-race-6c59a4f6-c56b-11e9-83b5-0a58ac10efb0-8kh48 to ci-op-4inqf4cw-3a8ca-666zl-worker-centralus1-jgqd7
Aug 23 06:06:58.901: INFO: At 2019-08-23 06:01:18 +0000 UTC - event for wrapped-volume-race-6c59a4f6-c56b-11e9-83b5-0a58ac10efb0-bcjcv: {default-scheduler } FailedScheduling: 0/6 nodes are available: 1 Insufficient cpu, 5 node(s) didn't match node selector.
Aug 23 06:06:58.901: INFO: At 2019-08-23 06:01:18 +0000 UTC - event for wrapped-volume-race-6c59a4f6-c56b-11e9-83b5-0a58ac10efb0-lnfmv: {default-scheduler } FailedScheduling: 0/6 nodes are available: 1 Insufficient cpu, 5 node(s) didn't match node selector.
Aug 23 06:06:58.901: INFO: At 2019-08-23 06:01:18 +0000 UTC - event for wrapped-volume-race-6c59a4f6-c56b-11e9-83b5-0a58ac10efb0-zxg92: {default-scheduler } FailedScheduling: 0/6 nodes are available: 1 Insufficient cpu, 5 node(s) didn't match node selector.


The test uses a busybox container requesting "10m" CPU (i.e. 10 milliCPUs), which SHOULD fit on any node in an idle OCP cluster (note this is a serial test and nothing else should be running).

I filed https://github.com/kubernetes/kubernetes/pull/82245 to remove any CPU requirements; maybe it helps.
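
For context: the race test pins all of its pods to a single node via a node selector, which matches the "5 node(s) didn't match node selector" part of the events above, so the one candidate node must have 10 free milliCPUs for every pod. Below is a minimal sketch, using the client-go corev1 types, of what dropping the request from the pod template amounts to (illustrative only, not the actual diff from PR 82245; the node-selector label and container values are assumptions):

package e2eutil

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// racePod is an illustrative version of one of the many identical pods
// the wrapped-volume-race test creates, all pinned to the same node.
func racePod(name, node string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/hostname": node},
			Containers: []corev1.Container{
				{
					Name:    "test-container",
					Image:   "busybox",
					Command: []string{"sleep", "10000"},
					// Before the fix the container carried a CPU request:
					//   Resources: corev1.ResourceRequirements{
					//       Requests: corev1.ResourceList{
					//           corev1.ResourceCPU: resource.MustParse("10m"),
					//       },
					//   }
					// Leaving Resources empty means the scheduler needs no
					// spare CPU on the node, so "Insufficient cpu" cannot
					// occur.
				},
			},
		},
	}
}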

Comment 2 Jan Safranek 2019-09-02 16:02:15 UTC
This is not a release blocker.

Comment 7 Chao Yang 2019-10-10 06:54:40 UTC
It passed in https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gce-serial/1182066659620294656
Updating the status to Verified.

Comment 8 Michal Fojtik 2019-10-14 09:07:42 UTC
This job failed with the same problem today (Oct 14): https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-ocp-installer-e2e-azure-serial-4.2/86

Comment 14 errata-xmlrpc 2020-01-23 11:05:22 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2020:0062

