Bug 1878891 - [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Summary: [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Keywords:
Status: CLOSED DUPLICATE of bug 1867929
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.5
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 4.7.0
Assignee: Hemant Kumar
QA Contact: Qin Ping
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2020-09-14 19:39 UTC by Lalatendu Mohanty
Modified: 2020-09-22 19:00 UTC
CC: 3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Last Closed: 2020-09-22 19:00:53 UTC
Target Upstream Version:
Embargoed:



Description Lalatendu Mohanty 2020-09-14 19:39:44 UTC
test:
[sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works 

is failing frequently in CI, see search results:
https://search.ci.openshift.org/?maxAge=168h&context=1&type=bug%2Bjunit&name=&maxMatches=5&maxBytes=20971520&groupBy=job&search=%5C%5Bsig-storage%5C%5D+In-tree+Volumes+%5C%5BDriver%3A+aws%5C%5D+%5C%5BTestpattern%3A+Dynamic+PV+%5C%28block+volmode%5C%29%5C%28allowExpansion%5C%29%5C%5D+volume-expand+Verify+if+offline+PVC+expansion+works



https://prow.ci.openshift.org/view/gs/origin-ci-test/pr-logs/pull/openshift_machine-config-operator/2082/pull-ci-openshift-machine-config-operator-release-4.5-e2e-aws/1305554751395467264



STEP: Destroying namespace "e2e-volume-expand-558" for this suite.
Sep 14 18:11:01.442: INFO: Running AfterSuite actions on all nodes
Sep 14 18:11:01.442: INFO: Running AfterSuite actions on node 1
fail [k8s.io/kubernetes/test/e2e/storage/testsuites/volume_expand.go:211]: while recreating pod for resizing
Unexpected error:
    <*errors.errorString | 0xc001129420>: {
        s: "pod \"security-context-3fb87bb8-7cce-44d2-b592-98abe2ad4c82\" is not Running: timed out waiting for the condition",
    }
    pod "security-context-3fb87bb8-7cce-44d2-b592-98abe2ad4c82" is not Running: timed out waiting for the condition
occurred
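
For context on the error above: the test re-creates the pod after the offline resize and waits for it to reach the Running phase, and the failure is that wait timing out. A minimal sketch of that kind of wait, using client-go rather than the actual helper in volume_expand.go (the function name, the 2-second poll interval, and the timeout parameter are illustrative assumptions), could look like:

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunning is a hypothetical helper that polls a pod until it is
// Running or the timeout expires. On timeout, PollImmediate returns
// wait.ErrWaitTimeout, which is where the "is not Running: timed out
// waiting for the condition" message in the failure above comes from.
func waitForPodRunning(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	err := wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // stop polling on API errors
		}
		switch pod.Status.Phase {
		case corev1.PodRunning:
			return true, nil // done
		case corev1.PodFailed, corev1.PodSucceeded:
			return false, fmt.Errorf("pod %q terminated with phase %s", name, pod.Status.Phase)
		default:
			return false, nil // still Pending, keep polling
		}
	})
	if err != nil {
		return fmt.Errorf("pod %q is not Running: %v", name, err)
	}
	return nil
}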

Comment 2 Hemant Kumar 2020-09-14 20:48:18 UTC
This has the same root cause as https://bugzilla.redhat.com/show_bug.cgi?id=1867929; the detach operation from the previous node is taking longer than expected:

12793:2020-09-14T18:05:44.603553177Z I0914 18:05:44.603491       1 aws.go:2230] Waiting for volume "vol-05d3419109556a8bf" state: actual=detaching, desired=detached
12815:2020-09-14T18:05:46.689160833Z I0914 18:05:46.689088       1 aws.go:2230] Waiting for volume "vol-05d3419109556a8bf" state: actual=detaching, desired=detached
12859:2020-09-14T18:05:50.797136658Z I0914 18:05:50.797085       1 aws.go:2230] Waiting for volume "vol-05d3419109556a8bf" state: actual=detaching, desired=detached
12951:2020-09-14T18:05:58.900854439Z I0914 18:05:58.900783       1 aws.go:2230] Waiting for volume "vol-05d3419109556a8bf" state: actual=detaching, desired=detached
13176:2020-09-14T18:06:14.995626393Z I0914 18:06:14.995567       1 aws.go:2230] Waiting for volume "vol-05d3419109556a8bf" state: actual=detaching, desired=detached
13539:2020-09-14T18:06:47.11220893Z I0914 18:06:47.112151       1 aws.go:2230] Waiting for volume "vol-05d3419109556a8bf" state: actual=detaching, desired=detached
14190:2020-09-14T18:07:51.217544864Z I0914 18:07:51.215053       1 aws.go:2230] Waiting for volume "vol-05d3419109556a8bf" state: actual=detaching, desired=detached
15593:2020-09-14T18:09:18.001862519Z I0914 18:09:18.001622       1 operation_generator.go:225] VerifyVolumesAreAttached determined volume "kubernetes.io/aws-ebs/aws://us-west-1b/vol-05d3419109556a8bf" (spec.Name: "pvc-283f313f-f07b-4fa7-8b2e-ab31d66c9c68") is no longer attached to node "ip-10-0-225-178.us-west-1.compute.internal", therefore it was marked as detached.
16143:2020-09-14T18:09:59.291095034Z   VolumeId: "vol-05d3419109556a8bf"
16145:2020-09-14T18:09:59.291095034Z I0914 18:09:59.291080       1 operation_generator.go:472] DetachVolume.Detach succeeded for volume "pvc-283f313f-f07b-4fa7-8b2e-ab31d66c9c68" (UniqueName: "kubernetes.io/aws-ebs/aws://us-west-1b/vol-05d3419109556a8bf") on node "ip-10-0-225-178.us-west-1.compute.internal" 


As is evident, the detach from the old node took more than 4 minutes, which results in a timeout for the new pod.
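
To illustrate what those log lines correspond to: the in-tree AWS provider polls the volume's attachment state and backs off between checks until it sees the desired state, which is why the "Waiting for volume ... actual=detaching, desired=detached" message repeats at roughly doubling intervals in the log (2s, 4s, 8s, ...). A rough sketch of such a wait, where describeAttachmentState is a hypothetical stand-in for an EC2 DescribeVolumes lookup and the backoff parameters are illustrative rather than the real values in aws.go:

package volwait

import (
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/klog/v2"
)

// describeAttachmentState is a hypothetical stand-in for an EC2
// DescribeVolumes call; a real implementation would return the volume's
// current attachment state ("attaching", "attached", "detaching", "detached").
func describeAttachmentState(volumeID string) (string, error) {
	return "detaching", nil // placeholder value only
}

// waitForVolumeState polls until the volume reaches desiredState, doubling
// the interval between attempts, similar in spirit to the loop that logs
// "Waiting for volume ... state: actual=..., desired=..." above.
func waitForVolumeState(volumeID, desiredState string) error {
	backoff := wait.Backoff{
		Duration: 2 * time.Second, // first interval (assumed)
		Factor:   2,               // 2s, 4s, 8s, 16s, ...
		Steps:    10,              // give up after 10 attempts (assumed)
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		actual, err := describeAttachmentState(volumeID)
		if err != nil {
			return false, err
		}
		if actual == desiredState {
			return true, nil
		}
		klog.Infof("Waiting for volume %q state: actual=%s, desired=%s", volumeID, actual, desiredState)
		return false, nil
	})
}

When this wait runs long, as in the log above, the attach to the new node is delayed and the test's pod-running wait eventually times out.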

Comment 3 Hemant Kumar 2020-09-22 19:00:53 UTC

*** This bug has been marked as a duplicate of bug 1867929 ***

