Bug 1769695 - Could not attach CSI volume to instance.
Summary: Could not attach CSI volume to instance.
Keywords:
Status: CLOSED DUPLICATE of bug 1769693
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 4.3.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: aos-storage-staff@redhat.com
QA Contact: Liang Xia
URL:
Whiteboard:
Depends On:
Blocks:
Reported: 2019-11-07 08:45 UTC by Chao Yang
Modified: 2019-11-08 11:45 UTC
CC List: 2 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2019-11-08 11:45:59 UTC
Target Upstream Version:
Embargoed:



Description Chao Yang 2019-11-07 08:45:20 UTC
Description of problem:
Could not attach volume to instance.

Version-Release number of selected component (if applicable):
4.3.0-0.nightly-2019-11-02-092336
quay.io/openshift-release-dev/ocp-v4.0-art-dev:v4.3.0-201911061038-ose-csi-external-attacher

How reproducible:
Always.

Steps to Reproduce:
1. Create CSI controller
2. Create storageclass
3. Create PVC and pod
4. Dynamic PV is created, but could not attach to node
oc describe pod
Events:
  Type     Reason              Age                  From                                                 Message
  ----     ------              ----                 ----                                                 -------
  Normal   Scheduled           <unknown>            default-scheduler                                    Successfully assigned kube-system/podcsi to ip-10-0-161-241.us-east-2.compute.internal
  Warning  FailedMount         5m28s (x2 over 21m)  kubelet, ip-10-0-161-241.us-east-2.compute.internal  Unable to attach or mount volumes: unmounted volumes=[aws1], unattached volumes=[default-token-czv8x aws1]: timed out waiting for the condition
  Warning  FailedAttachVolume  76s (x10 over 23m)   attachdetach-controller                              AttachVolume.Attach failed for volume "pvc-a00356ed-042a-4fb8-8375-aa1bdd4a6e3a" : attachment timeout for volume vol-092066c92f3bd4579
  Warning  FailedMount         54s (x9 over 23m)    kubelet, ip-10-0-161-241.us-east-2.compute.internal  Unable to attach or mount volumes: unmounted volumes=[aws1], unattached volumes=[aws1 default-token-czv8x]: timed out waiting for the condition
5. oc logs ebs-csi-controller-0 -c csi-attacher 
I1107 08:12:40.805706       1 csi_handler.go:89] CSIHandler: processing VA "csi-642f558c7c0db7288cdf6bd130fc2b57ea31e1246f6a797293f8ec4f83a51dd3"
I1107 08:12:40.805719       1 csi_handler.go:116] Attaching "csi-642f558c7c0db7288cdf6bd130fc2b57ea31e1246f6a797293f8ec4f83a51dd3"
I1107 08:12:40.805730       1 csi_handler.go:249] Starting attach operation for "csi-642f558c7c0db7288cdf6bd130fc2b57ea31e1246f6a797293f8ec4f83a51dd3"
I1107 08:12:40.805824       1 csi_handler.go:215] Adding finalizer to PV "pvc-a00356ed-042a-4fb8-8375-aa1bdd4a6e3a"
I1107 08:12:40.814953       1 csi_handler.go:224] PV finalizer added to "pvc-a00356ed-042a-4fb8-8375-aa1bdd4a6e3a"
I1107 08:12:40.815068       1 csi_handler.go:542] Found NodeID i-0ba9e58ff102a8e70 in CSINode ip-10-0-161-241.us-east-2.compute.internal
I1107 08:12:40.815343       1 csi_handler.go:177] VA finalizer added to "csi-642f558c7c0db7288cdf6bd130fc2b57ea31e1246f6a797293f8ec4f83a51dd3"
I1107 08:12:40.815367       1 csi_handler.go:191] NodeID annotation added to "csi-642f558c7c0db7288cdf6bd130fc2b57ea31e1246f6a797293f8ec4f83a51dd3"
I1107 08:12:40.818197       1 csi_handler.go:412] Saving attach error to "csi-642f558c7c0db7288cdf6bd130fc2b57ea31e1246f6a797293f8ec4f83a51dd3"
I1107 08:12:40.819997       1 csi_handler.go:123] Failed to save attach error to "csi-642f558c7c0db7288cdf6bd130fc2b57ea31e1246f6a797293f8ec4f83a51dd3": volumeattachments.storage.k8s.io "csi-642f558c7c0db7288cdf6bd130fc2b57ea31e1246f6a797293f8ec4f83a51dd3" is forbidden: User "system:serviceaccount:kube-system:ebs-csi-controller-sa" cannot patch resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
I1107 08:12:40.820026       1 csi_handler.go:99] Error processing "csi-642f558c7c0db7288cdf6bd130fc2b57ea31e1246f6a797293f8ec4f83a51dd3": failed to attach: could not save VolumeAttachment: volumeattachments.storage.k8s.io "csi-642f558c7c0db7288cdf6bd130fc2b57ea31e1246f6a797293f8ec4f83a51dd3" is forbidden: User "system:serviceaccount:kube-system:ebs-csi-controller-sa" cannot patch resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope

6. oc get ClusterRoleBinding ebs-csi-attacher-binding -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: "2019-11-07T06:54:59Z"
  name: ebs-csi-attacher-binding
  resourceVersion: "99731"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/ebs-csi-attacher-binding
  uid: 9460c033-47a1-4c54-b3d9-b6fe2ec58221
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ebs-external-attacher-role
subjects:
- kind: ServiceAccount
  name: ebs-csi-controller-sa
  namespace: kube-system

7. oc get clusterrole system:csi-external-attacher  -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  creationTimestamp: "2019-11-07T01:30:32Z"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:csi-external-attacher
  resourceVersion: "80"
  selfLink: /apis/rbac.authorization.k8s.io/v1/clusterroles/system%3Acsi-external-attacher
  uid: 35b24a80-789c-4d29-9757-930f3b130129
rules:
- apiGroups:
  - ""
  resources:
  - persistentvolumes
  verbs:
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - storage.k8s.io
  resources:
  - volumeattachments
  verbs:
  - get
  - list
  - patch
  - update
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - get
  - list
  - patch
  - update
  - watch
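The logs in step 5 show that the `ebs-csi-controller-sa` service account is forbidden to patch `volumeattachments` in the `storage.k8s.io` group. The binding in step 6 grants it `ebs-external-attacher-role`, not the default `system:csi-external-attacher` role shown in step 7 (which does include the `patch` verb), so the bound role presumably lacks that verb. A hedged sketch of a rule that would grant the missing permission (the role name and the actual fix are assumptions; the duplicate bug 1769693 tracks the real change):

```yaml
# Hypothetical fragment only -- mirrors the volumeattachments rule from
# the default system:csi-external-attacher role shown in step 7.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ebs-external-attacher-role
rules:
- apiGroups:
  - storage.k8s.io
  resources:
  - volumeattachments
  verbs:
  - get
  - list
  - watch
  - patch
  - update
```

Whether the service account actually holds the `patch` verb can be checked with `oc auth can-i patch volumeattachments.storage.k8s.io --as=system:serviceaccount:kube-system:ebs-csi-controller-sa`.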

Actual results:
Pod could not reach the Running state.

Expected results:
Pod is running

Additional info:

Comment 1 Jan Safranek 2019-11-08 11:45:59 UTC

*** This bug has been marked as a duplicate of bug 1769693 ***

