Bug 1567805 - csi-attacher image for 3.10 uses API group storage.k8s.io/v1alpha1 to get VolumeAttachment resources
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Storage
Version: 3.10.0
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: 3.10.0
Assignee: Jan Safranek
QA Contact: Qin Ping
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-04-16 08:57 UTC by Qin Ping
Modified: 2018-07-30 19:13 UTC (History)
3 users

Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-07-30 19:13:03 UTC
Target Upstream Version:
Embargoed:




Links
System ID Private Priority Status Summary Last Updated
Red Hat Product Errata RHBA-2018:1816 0 None None None 2018-07-30 19:13:21 UTC

Description Qin Ping 2018-04-16 08:57:57 UTC
Description of problem:
The csi-attacher image for 3.10 uses API group storage.k8s.io/v1alpha1 to get VolumeAttachment resources, but the 3.10 API server serves VolumeAttachment only under storage.k8s.io/v1beta1.

Version-Release number of selected component (if applicable):
oc v3.10.0-0.21.0
openshift v3.10.0-0.21.0
kubernetes v1.10.0+b81c8f8
# oc exec csi-hostpath-plugin  -c external-attacher -- rpm -qa|grep csi
csi-attacher-0.2.0-1.el7.x86_64

How reproducible:
Always

Steps to Reproduce:
1. Create csi Pod using hostpath driver
2. Create sc for csi hostpath driver
3. Create a PVC to claim storage from csi hostpath driver
4. Create a Pod using the PVC
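For reference, step 4 can be reproduced with a Pod manifest along the following lines. The Pod name (`web-server`), volume name (`mypvc`), and claim name (`csi-pvc`) come from the events and dumps in this report; the container image and mount path are illustrative assumptions.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-server
  namespace: default
spec:
  containers:
  - name: web-server
    image: nginx              # illustrative image; not taken from the report
    volumeMounts:
    - name: mypvc
      mountPath: /usr/share/nginx/html
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: csi-pvc      # PVC created in step 3
```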

Actual results:
The Pod cannot become ready and reports:
Warning  FailedMount             22s (x12 over 8m)  kubelet, host-172-16-120-99  MountVolume.WaitForAttach failed for volume "kubernetes-dynamic-pv-e6375d88414d11e8" : watch error:unknown (get volumeattachments.storage.k8s.io) for volume e6377460-414d-11e8-8c49-0a580a81000f
  Warning  FailedMount             4s (x4 over 6m)    kubelet, host-172-16-120-99  Unable to mount volumes for pod "web-server_default(ee2d9b65-414d-11e8-b3df-fa163e37f1a9)": timeout expired waiting for volumes to attach or mount for pod "default"/"web-server". list of unmounted volumes=[mypvc]. list of unattached volumes=[mypvc default-token-lptzt]

Expected results:
The Pod runs successfully.

Master Log:

Node Log (of failed PODs):

PV Dump:
# oc export pv kubernetes-dynamic-pv-be8f6ddb414f11e8 
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: csi-hostpath
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pv-protection
  name: kubernetes-dynamic-pv-be8f6ddb414f11e8
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: csi-pvc
    namespace: default
    resourceVersion: "46973"
    uid: beb841da-414f-11e8-b3df-fa163e37f1a9
  csi:
    driver: csi-hostpath
    volumeAttributes:
      storage.kubernetes.io/csiProvisionerIdentity: 1523867113848-8081-csi-hostpath
    volumeHandle: be8f80e7-414f-11e8-bdb3-0a580a810010
  persistentVolumeReclaimPolicy: Delete
  storageClassName: csi-hostpath-sc
status: {}

PVC Dump:
# oc export pvc csi-pvc 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"b2a81c9b-414f-11e8-9d4e-0a580a810010","leaseDurationSeconds":15,"acquireTime":"2018-04-16T08:25:38Z","renewTime":"2018-04-16T08:25:41Z","leaderTransitions":0}'
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: csi-hostpath
  creationTimestamp: null
  finalizers:
  - kubernetes.io/pvc-protection
  name: csi-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-hostpath-sc
  volumeName: kubernetes-dynamic-pv-be8f6ddb414f11e8
status: {}

StorageClass Dump (if StorageClass used by PV/PVC):
# oc export sc csi-hostpath-sc 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: null
  name: csi-hostpath-sc
provisioner: csi-hostpath
reclaimPolicy: Delete
volumeBindingMode: Immediate

Additional info:
csi-attacher logs:
I0416 08:13:44.520373       1 reflector.go:240] Listing and watching *v1alpha1.VolumeAttachment from github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:86
E0416 08:13:44.521837       1 reflector.go:205] github.com/kubernetes-csi/external-attacher/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1alpha1.VolumeAttachment: the server could not find the requested resource
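The "Failed to list *v1alpha1.VolumeAttachment: the server could not find the requested resource" error is a group/version mismatch: the attacher's informer LISTs storage.k8s.io/v1alpha1, while the server only serves v1beta1 (as the VolumeAttachment dump below shows). One way to confirm such a mismatch is to check the API discovery document for the group (e.g. via `oc get --raw /apis/storage.k8s.io`). A minimal sketch of that check, using an illustrative discovery payload rather than a live cluster:

```python
import json

# Illustrative APIGroup discovery document for storage.k8s.io, shaped like
# the response of `oc get --raw /apis/storage.k8s.io`; the exact version
# list on a given 3.10 cluster is an assumption.
DISCOVERY_JSON = """
{
  "kind": "APIGroup",
  "name": "storage.k8s.io",
  "versions": [
    {"groupVersion": "storage.k8s.io/v1", "version": "v1"},
    {"groupVersion": "storage.k8s.io/v1beta1", "version": "v1beta1"}
  ],
  "preferredVersion": {"groupVersion": "storage.k8s.io/v1", "version": "v1"}
}
"""

def served_versions(discovery_json):
    """Return the versions the server actually serves for this API group."""
    group = json.loads(discovery_json)
    return [v["version"] for v in group["versions"]]

def explain_list_failure(requested, discovery_json):
    """Mimic the attacher's failure mode: LISTing an unserved version."""
    versions = served_versions(discovery_json)
    if requested in versions:
        return "ok"
    # This is the situation behind "the server could not find the requested
    # resource": the client asked for a group/version the server does not
    # expose.
    return "storage.k8s.io/%s not served; available: %s" % (
        requested, ", ".join(versions))

if __name__ == "__main__":
    print(explain_list_failure("v1alpha1", DISCOVERY_JSON))
    print(explain_list_failure("v1beta1", DISCOVERY_JSON))
```

The fixed attacher build simply requests v1beta1, which the server serves, so the informer's LIST/WATCH succeeds.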

VolumeAttachment Dump:
# oc export volumeattachment
apiVersion: v1
items:
- apiVersion: storage.k8s.io/v1beta1
  kind: VolumeAttachment
  metadata:
    creationTimestamp: null
    name: csi-547a93eceb638e073e648697dc933d35a260380c79faffbac1e2a4adf879088f
  spec:
    attacher: csi-hostpath
    nodeName: host-172-16-120-99
    source:
      persistentVolumeName: kubernetes-dynamic-pv-e6375d88414d11e8
  status:
    attached: false
kind: List
metadata: {}

Comment 3 Jan Safranek 2018-04-16 12:52:16 UTC
I built the csi-attacher-0.2.0-2.git27299be.el7 RPM last Friday. I am not sure how and when it gets propagated into a new image and pushed to our repositories.

It will use storage.k8s.io/v1beta1, so no changes in master-config.yaml should be necessary.
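For context, keeping the old v1alpha1-based attacher working would instead have required enabling the alpha API on the master. On OpenShift 3.x that would mean passing the kube-apiserver `--runtime-config` flag through master-config.yaml, roughly like the fragment below; the exact stanza is an assumption, not taken from this report.

```yaml
# master-config.yaml fragment (hypothetical workaround, NOT needed with
# the fixed v1beta1-based attacher build)
kubernetesMasterConfig:
  apiServerArguments:
    runtime-config:
    - "storage.k8s.io/v1alpha1=true"
```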

Comment 5 Jan Safranek 2018-04-17 09:53:47 UTC
This should be fixed in csi-attacher-container-v3.10.0-0.22.0.0 built yesterday.

Comment 7 Qin Ping 2018-05-17 05:39:10 UTC
Verified in openshift:
oc v3.10.0-0.47.0
openshift v3.10.0-0.47.0
kubernetes v1.10.0+b81c8f8

csi-attacher-0.2.0-2.git27299be.el7.x86_64

# uname -a
Linux host-172-16-120-49 3.10.0-862.2.3.el7.x86_64 #1 SMP Mon Apr 30 12:37:51 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.5 (Maipo)

Comment 9 errata-xmlrpc 2018-07-30 19:13:03 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:1816

