Bug 1573520 - Glusterfs BlockVolume dynamic provisioning error
Summary: Glusterfs BlockVolume dynamic provisioning error
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: kubernetes
Version: rhgs-3.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: CNS 3.10
Assignee: Humble Chirammal
QA Contact: Neha Berry
URL:
Whiteboard:
Depends On: 1563512
Blocks: 1568861
 
Reported: 2018-05-01 15:19 UTC by hchen
Modified: 2018-09-12 11:01 UTC
CC: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of: 1563512
Environment:
Last Closed: 2018-09-12 11:00:54 UTC
Embargoed:




Links:
Red Hat Product Errata RHBA-2018:2689 (last updated 2018-09-12 11:01:39 UTC)

Comment 16 Qin Ping 2018-07-03 03:11:58 UTC
Verified in OCP:
oc v3.10.10
openshift v3.10.10
kubernetes v1.10.0+b81c8f8

# uname -a
Linux ip-172-18-15-217.ec2.internal 3.10.0-862.6.3.el7.x86_64 #1 SMP Fri Jun 15 17:57:37 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux

# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 7.5 (Maipo)

# cat pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: glusterfs
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
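
The claim was created from this manifest before being inspected; a minimal reproduction step (the command and output below are illustrative, not copied from the original report):

# oc create -f pvc.yaml
persistentvolumeclaim "block-pvc" created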

# oc describe pvc block-pvc 
Name:          block-pvc
Namespace:     blockvolume
StorageClass:  glusterfs
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/glusterfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Block
Events:
  Type     Reason              Age              From                         Message
  ----     ------              ----             ----                         -------
  Warning  ProvisioningFailed  5s (x8 over 1m)  persistentvolume-controller  Failed to provision volume with StorageClass "glusterfs": kubernetes.io/glusterfs does not support block volume provisioning
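
This ProvisioningFailed event is the verified, expected outcome: the in-tree kubernetes.io/glusterfs provisioner only handles file volumes, so it now rejects volumeMode: Block claims with a clear error instead of failing silently. Block-mode claims on CNS are served by the external gluster-block provisioner; a minimal StorageClass sketch for that path (the resturl, secret names, and hacount below are placeholders, not values from this bug):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-block
provisioner: gluster.org/glusterblock        # external gluster-block provisioner, not kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage.example.com:8080"   # placeholder heketi endpoint
  restuser: "admin"                                   # placeholder heketi user
  restsecretnamespace: "default"                      # placeholder: namespace holding the heketi key
  restsecretname: "heketi-secret"                     # placeholder: secret containing the heketi key
  hacount: "3"                                        # number of block-hosting HA paths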

Comment 17 Humble Chirammal 2018-07-03 07:23:02 UTC
(In reply to Qin Ping from comment #16)
> Verified in OCP: [full verification output snipped; quoted verbatim from comment #16 above]

Awesome! Thanks a lot, Qin Ping, for verifying this bug!

Comment 20 errata-xmlrpc 2018-09-12 11:00:54 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:2689

