Bug 1733234 - pvc fails to bind with gluster-block storage as the backend when logging is deployed.
Keywords:
Status: CLOSED DEFERRED
Alias: None
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.11.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.11.z
Assignee: Jeremiah Stuever
QA Contact: Johnny Liu
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2019-07-25 13:35 UTC by RamaKasturi
Modified: 2020-02-18 20:43 UTC (History)

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2020-02-13 22:28:49 UTC
Target Upstream Version:


Attachments (Terms of Use)
screenshot of the parameters from ocp doc (209.67 KB, image/png)
2019-07-25 13:36 UTC, RamaKasturi

Description RamaKasturi 2019-07-25 13:35:17 UTC
Description of problem:
The PVC does not get bound when openshift-logging is deployed; the error below is seen.

[root@dhcp46-183 ~]# oc describe pvc logging-es-0
Name:          logging-es-0
Namespace:     openshift-logging
StorageClass:  
Status:        Pending
Volume:        
Labels:        logging-infra=support
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
Events:
  Type    Reason         Age                From                         Message
  ----    ------         ----               ----                         -------
  Normal  FailedBinding  1m (x991 over 4h)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set

Inventory file options used (set 1):
===================================
# logging
openshift_logging_install_logging=true
openshift_logging_es_pvc_dynamic=true
openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_pvc_size=20Gi
openshift_logging_es_pvc_storage_class_name="glusterfs-registry-block"


Inventory file options used (set 2), as per the OCP documentation (screenshot attached for reference):
==========================================================
# logging
openshift_logging_install_logging=true
openshift_logging_elasticsearch_storage_type=pvc
openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_pvc_size=20Gi
openshift_logging_es_pvc_storage_class_name="glusterfs-registry-block"



Version-Release number of selected component (if applicable):
openshift-ansible-playbooks-3.11.123-1.git.0.db681ba.el7.noarch
openshift-ansible-roles-3.11.123-1.git.0.db681ba.el7.noarch
openshift-ansible-3.11.123-1.git.0.db681ba.el7.noarch
ansible-2.6.18-1.el7ae.noarch
openshift-ansible-docs-3.11.123-1.git.0.db681ba.el7.noarch

Also tried with the latest live openshift-ansible version, 3.11.129, as well.

How reproducible:
Always

Steps to Reproduce:
1. Install latest openshift-ansible version from live
2. Install logging
3.

Actual results:
The PVC backed by gluster-block storage does not get bound, and the following error is seen.

[root@dhcp46-183 ~]# oc describe pvc logging-es-0
Name:          logging-es-0
Namespace:     openshift-logging
StorageClass:  
Status:        Pending
Volume:        
Labels:        logging-infra=support
Annotations:   <none>
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
Events:
  Type    Reason         Age                From                         Message
  ----    ------         ----               ----                         -------
  Normal  FailedBinding  1m (x44 over 11m)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set


Expected results:
The PVC should get bound.

Additional info:

Comment 1 RamaKasturi 2019-07-25 13:36:09 UTC
Created attachment 1593401 [details]
screenshot of the parameters from ocp doc

Comment 2 Jeff Cantrill 2019-07-25 18:04:12 UTC
Do you have dynamic provisioning enabled on your cluster? Do you have a storage class that matches the one you are trying to use? This is outside the scope of cluster logging.

Comment 3 daniel 2019-07-26 05:09:57 UTC
It seems I see similar behaviour as well:

# logging
openshift_logging_install_logging=true
openshift_logging_es_memory_limit=4Gi
openshift_logging_es_pvc_dynamic=true 
openshift_logging_es_pvc_size=10Gi
openshift_logging_es_cluster_size=1
openshift_logging_es_pvc_storage_class_name=glusterfs-registry-block
openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}


however it binds to my default storage class:

pvc-9714f802-aed2-11e9-8c78-001a4a990117   10Gi       RWO            Delete           Bound     openshift-logging/logging-es-0                                  glusterfs-storage                    17h


# oc get sc
NAME                          PROVISIONER                AGE
glusterfs-registry-block      gluster.org/glusterblock   17h
glusterfs-storage (default)   kubernetes.io/glusterfs    17h
glusterfs-storage-block       gluster.org/glusterblock   17h
# 

as per doc:
https://docs.openshift.com/container-platform/3.11/install_config/master_node_configuration.html#master-node-config-volume-config 

DynamicProvisioningEnabled    A boolean to enable or disable dynamic provisioning. Default is true.


openshift-ansible]# grep -ir DynamicProvisioningEnabled
roles/openshift_control_plane/templates/master.yaml.v1.j2:  dynamicProvisioningEnabled: {{ openshift_master_dynamic_provisioning_enabled }}

# grep -ir openshift_master_dynamic_provisioning_enabled
roles/openshift_control_plane/defaults/main.yml:openshift_master_dynamic_provisioning_enabled: True
roles/openshift_control_plane/templates/master.yaml.v1.j2:  dynamicProvisioningEnabled: {{ openshift_master_dynamic_provisioning_enabled }}
roles/openshift_sanitize_inventory/tasks/unsupported.yml:  - not openshift_master_dynamic_provisioning_enabled | default(false) | bool
roles/openshift_sanitize_inventory/tasks/unsupported.yml:      openshift_master_dynamic_provisioning_enabled to True and set an


and this configuration has worked for a long time...

admittedly, it looks to me more like an install issue than a logging issue...

Comment 5 Jeff Cantrill 2019-07-26 14:44:50 UTC
(In reply to daniel from comment #3)
> seems I see a similar behaviour as well:

> however it binds to my default storage class:
> 
> pvc-9714f802-aed2-11e9-8c78-001a4a990117   10Gi       RWO            Delete 
> Bound     openshift-logging/logging-es-0                                 
> glusterfs-storage                    17h
> 

How is your issue the same given the summary and the title of the bug?  This demonstrates the PVC is bound to a PV.  Are your ES pods starting?

Comment 6 Jeff Cantrill 2019-07-26 14:46:12 UTC
(In reply to RamaKasturi from comment #4)
> (In reply to Jeff Cantrill from comment #2)
> > Do you have dynamic provisioning enabled on your cluster?  

Please confirm dynamic provisioning is enabled on your cluster.

> > Do you have a storage class that matches the one you are trying to use?
> > This is outside the scope of cluster logging
> 
> Yes, i do have them.
> 
> [root@dhcp46-183 ~]# oc get sc
> NAME                       PROVISIONER                AGE
> glusterfs-block            gluster.org/glusterblock   7d
> glusterfs-file             kubernetes.io/glusterfs    7d
> glusterfs-registry         kubernetes.io/glusterfs    7d
> glusterfs-registry-block   gluster.org/glusterblock   7d
> glusterfs-storage          kubernetes.io/glusterfs    7d
> glusterfs-storage-block    gluster.org/glusterblock   7d

Comment 7 daniel 2019-07-27 11:47:07 UTC
(In reply to Jeff Cantrill from comment #5)
> (In reply to daniel from comment #3)
> > seems I see a similar behaviour as well:
> 
> > however it binds to my default storage class:
> > 
> > pvc-9714f802-aed2-11e9-8c78-001a4a990117   10Gi       RWO            Delete 
> > Bound     openshift-logging/logging-es-0                                 
> > glusterfs-storage                    17h
> > 
> 
> How is your issue the same given the summary and the title of the bug?  This
> demonstrates the PVC is bound to a PV.  Are your ES pods starting?

The ES pod is starting; however, the service is unavailable.
I think it is the same issue as in the subject, since I configured it to use `glusterfs-registry-block`:

~~~
openshift_logging_es_pvc_storage_class_name=glusterfs-registry-block
~~~

the SC is there:

~~~
# oc get sc
NAME                          PROVISIONER                AGE
glusterfs-registry-block      gluster.org/glusterblock   17h   <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
glusterfs-storage (default)   kubernetes.io/glusterfs    17h
glusterfs-storage-block       gluster.org/glusterblock   17h
# 
~~~

but the bound storage class is 'glusterfs-storage':

~~~
pvc-9714f802-aed2-11e9-8c78-001a4a990117   10Gi       RWO            Delete           Bound     openshift-logging/logging-es-0                                  glusterfs-storage                    17h
~~~

so this is obviously not the one I configured it to use.
The difference I see between Kasturi's setup and mine is that her setup has no default storage class, so the PVC cannot be satisfied and stays Pending. I do have a default storage class (glusterfs-storage), which is used as a fallback for the pending PVC, and the claim finally binds to that default SC.
So to my understanding the underlying behaviour is the same.
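The fallback described here is standard Kubernetes behaviour: the DefaultStorageClass admission plugin assigns the default StorageClass to any PVC created without a `storageClassName`. As a sketch (the class name and provisioner are taken from the `oc get sc` output above; the annotation shown is the standard upstream one, not something visible in this report), the default class would look like:

~~~
# StorageClass flagged as the cluster default. A PVC created with no
# storageClassName is automatically assigned this class by the
# DefaultStorageClass admission plugin.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
~~~

With no class set on the claim, the default class wins; with no default class at all (as in Kasturi's cluster), the claim stays Pending.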

Comment 9 Jeff Cantrill 2019-07-29 15:36:08 UTC
Moving to the storage team as this is related to provisioning and binding not specifically to logging.

Comment 10 Jan Safranek 2019-07-30 12:20:23 UTC
From the description it seems that the installer did not set a storage class on Elasticsearch's PVC even though the user set one:


> Inventory file options used (set 1):
> ===================================
> # logging
> openshift_logging_install_logging=true
> openshift_logging_es_pvc_dynamic=true
...
> openshift_logging_es_pvc_size=20Gi
> openshift_logging_es_pvc_storage_class_name="glusterfs-registry-block"



> [root@dhcp46-183 ~]# oc describe pvc logging-es-0
> Name:          logging-es-0
> Namespace:     openshift-logging
> StorageClass:  

                 ^^^^^^^^^^ This should not be empty

Therefore OpenShift does not know how to provision the volume.
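For comparison, here is a sketch of what the installer-rendered claim should contain if the inventory setting were honored. The name, namespace, size, and storage class are taken from this report; the access mode is an assumption that matches the RWO shown on the bound PV in comment 3.

~~~
# Sketch of the expected Elasticsearch PVC. The storageClassName field
# is the piece that is missing from the claim the installer created.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: logging-es-0
  namespace: openshift-logging
spec:
  storageClassName: glusterfs-registry-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
~~~

Note that `spec.storageClassName` is effectively immutable after creation, so a claim created without it would have to be deleted and recreated rather than patched.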

Comment 19 Jeremiah Stuever 2020-02-13 22:28:49 UTC
There appear to be no active cases related to this bug. As such we're closing this bug in order to focus on bugs that are still tied to active customer cases. Please re-open this bug if you feel it was closed in error or a new active case is attached.

