Bug 1948535 - External Storage [Driver: disk.csi.azure.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Keywords:
Status: CLOSED DUPLICATE of bug 1948603
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.8
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: aos-storage-staff@redhat.com
QA Contact: Qin Ping
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2021-04-12 11:51 UTC by Qin Ping
Modified: 2021-04-16 14:41 UTC (History)
3 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2021-04-16 14:41:45 UTC
Target Upstream Version:
Embargoed:



Description Qin Ping 2021-04-12 11:51:32 UTC
Description of problem:
STEP: Destroying namespace "e2e-volume-expand-2690" for this suite.
fail [k8s.io/kubernetes.0/test/e2e/storage/testsuites/volume_expand.go:279]: While waiting for pvc resize to finish
Unexpected error:
    <*errors.errorString | 0xc0022009b0>: {
        s: "error while waiting for controller resize to finish: timed out waiting for the condition",
    }
    error while waiting for controller resize to finish: timed out waiting for the condition
occurred

failed: (11m2s) 2021-04-12T10:29:34 "External Storage [Driver: disk.csi.azure.com] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it"



Version-Release number of selected component (if applicable):
4.8.0-0.nightly-2021-04-09-222447


How reproducible:
Always

Steps to Reproduce:
1. Install an OCP 4.8 cluster on Azure with the azure-disk CSI driver.
2. Run the external storage e2e test "volume-expand should resize volume when PVC is edited while pod is using it" against the disk.csi.azure.com driver.
3. Observe that the PVC resize times out.

Actual results:
Checking the PVC shows the following event:
Warning  ExternalExpanding      6m4s   volume_expand                                                                      Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.

Checking the csi-resizer sidecar container logs shows many messages like:
E0412 09:33:21.176207       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:openshift-cluster-csi-drivers:azure-disk-csi-driver-controller-sa" cannot list resource "pods" in API group "" at the cluster scope
(the same message repeats continuously with later timestamps)
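The "forbidden" errors indicate the driver controller's service account lacks cluster-scope list/watch permission on pods, which the csi-resizer sidecar's informer needs. A minimal sketch of an RBAC rule that would grant it (the ClusterRole/ClusterRoleBinding names here are hypothetical; in OpenShift this RBAC is managed by the CSI driver operator, not applied by hand):

```yaml
# Hypothetical ClusterRole/ClusterRoleBinding names, shown only to
# illustrate the permission the logs say is missing.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: azure-disk-csi-resizer-pods
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: azure-disk-csi-resizer-pods
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: azure-disk-csi-resizer-pods
subjects:
- kind: ServiceAccount
  name: azure-disk-csi-driver-controller-sa
  namespace: openshift-cluster-csi-drivers
```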


Expected results:
Expand PVC successfully.
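For reference, the e2e test requests expansion by increasing spec.resources.requests.storage on a bound PVC while a pod is using it; a sketch of such a claim (the PVC name, StorageClass name, and sizes below are illustrative, not taken from the test):

```yaml
# Illustrative PVC; expansion is requested by editing
# spec.resources.requests.storage upward (e.g. 1Gi -> 2Gi) on the
# bound claim. The StorageClass must set allowVolumeExpansion: true.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc               # hypothetical name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: managed-csi   # assumed azure-disk CSI StorageClass
  resources:
    requests:
      storage: 2Gi                # edited from 1Gi to trigger resize
```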

Master Log:

Node Log (of failed PODs):

PV Dump:

PVC Dump:

StorageClass Dump (if StorageClass used by PV/PVC):

Additional info:

Comment 1 Fabio Bertinatto 2021-04-15 15:26:24 UTC
Closing as a duplicate; we'll aggregate all failing CSI certification tests in bug 1948603.

*** This bug has been marked as a duplicate of bug 1948603 ***

Comment 2 Qin Ping 2021-04-16 02:35:22 UTC
Reopening per: https://bugzilla.redhat.com/show_bug.cgi?id=1948603#c4

Comment 3 Jan Safranek 2021-04-16 14:41:45 UTC
I am going to close this bug; we may open a new generic "azure CI is still broken" bug to track the remaining issues.

*** This bug has been marked as a duplicate of bug 1948603 ***

