Bug 1529048 - dynamic storage for logging from glusterfs does not work from the installer [NEEDINFO]
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: Installer
Version: 3.7.0
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: 3.7.z
Assigned To: Jose A. Rivera
QA Contact: Wenkai Shi
Depends On:
Blocks: 1564290
Reported: 2017-12-26 01:48 EST by Jaspreet Kaur
Modified: 2018-04-15 22:00 EDT
CC List: 9 users

See Also:
Fixed In Version:
Doc Type: No Doc Update
Doc Text:
Story Points: ---
Clone Of:
Clones: 1564290
Environment:
Last Closed: 2018-04-05 05:34:33 EDT
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Flags: fshaikh: needinfo? (jrivera)




External Trackers
Red Hat Product Errata RHBA-2018:0636 (Last Updated: 2018-04-05 05:35 EDT)

Description Jaspreet Kaur 2017-12-26 01:48:07 EST
Description of problem: After a cluster 3.7 installation with glusterfs as storage, the glusterfs-storage StorageClass is not tagged as the default:

# oc describe sc glusterfs-storage
Name:           glusterfs-storage
IsDefaultClass: No
Annotations:    <none>
Provisioner:    kubernetes.io/glusterfs
Parameters:     resturl=http://heketi-storage-glusterfs.apps.example.com,restuser=admin,secretName=heketi-storage-admin-secret,secretNamespace=glusterfs
Events:         <none>
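
For context: Kubernetes treats whichever StorageClass carries the is-default-class annotation as the default, and that annotation is absent here (note "Annotations: <none>" above). As a manual workaround sketch, not taken from this report, the class could be marked default by hand; on 3.7-era (Kubernetes 1.7) clusters the beta variant storageclass.beta.kubernetes.io/is-default-class may be the name honored instead:

# oc patch storageclass glusterfs-storage -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'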

As a result, six pods (with PV claims) are not provisioned properly:
# oc get pvc --all-namespaces |grep Pending
logging           logging-es-0          Pending                                                             24m
logging           logging-es-1          Pending                                                             23m
logging           logging-es-2          Pending                                                             23m
openshift-infra   metrics-cassandra-1   Pending                                                             26m
openshift-infra   metrics-cassandra-2   Pending                                                             26m
openshift-infra   metrics-cassandra-3   Pending                                                             26m

# oc project openshift-infra && oc describe pvc metrics-cassandra-1
Now using project "openshift-infra" on server "https://console.example.com:8443".
Name:           metrics-cassandra-1
Namespace:      openshift-infra
StorageClass:
Status:         Pending
Volume:
Labels:         metrics-infra=hawkular-cassandra
Annotations:    kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"labels":{"metrics-infra":"hawkular-cassandra"},"name":"metrics-cassandr...
Capacity:
Access Modes:
Events:
  FirstSeen     LastSeen        Count   From                            SubObjectPath   Type            Reason          Message
  ---------     --------        -----   ----                            -------------   --------        ------          -------
  27m           2m              106     persistentvolume-controller                     Normal          FailedBinding   no persistent volumes available for this claim and no storage class is set
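
The event message pins down the failure mode: the metrics and logging PVC templates set no StorageClass (note the empty "StorageClass:" field above), so the claims can bind only through a cluster-wide default class. A minimal sketch of such a claim follows; the size and access mode are illustrative values, not taken from this report:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: metrics-cassandra-1
  namespace: openshift-infra
  labels:
    metrics-infra: hawkular-cassandra
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # No storageClassName (or volume.beta.kubernetes.io/storage-class
  # annotation) is set, so binding depends entirely on a default
  # StorageClass existing in the cluster.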

Version-Release number of the following components:
rpm -q openshift-ansible
rpm -q ansible
ansible --version

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results: pods remained in Pending state when using dynamic storage for logging with glusterfs.

Expected results: the cluster should be ready after installation without any additional fixes.

Additional info:
Please attach logs from ansible-playbook with the -vvv flag
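
As a sketch of how such a log is typically captured, assuming the standard 3.7 byo entry-point playbook at its RPM install path and an inventory file named "hosts" (both assumptions, not taken from this report):

# ansible-playbook -vvv -i hosts /usr/share/ansible/openshift-ansible/playbooks/byo/config.yml 2>&1 | tee ansible-install.log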
Comment 1 Jaspreet Kaur 2017-12-27 04:49:25 EST
PR : https://github.com/openshift/openshift-ansible/pull/6567
Comment 2 Scott Dodson 2018-01-02 09:15:30 EST
Product management should decide whether we want this enabled by default, possibly based on whether the user is installing Origin or OCP. Let's update the PR per Jose's comments, as he would know best what product management intends to happen here.
Comment 3 Jaspreet Kaur 2018-01-05 07:19:40 EST
I see that it is fixed in the 3.7 upstream code now. Is there any timeline for when this can be released?
Comment 4 Aaron Ship 2018-01-10 03:49:36 EST
Hello,
Can we have a response to comment #3? We are looking for an ETA if possible.
Comment 5 Jose A. Rivera 2018-01-10 08:20:31 EST
Heyo. Just got back from PTO this week. :)

There is currently no specific timeline for getting this into 3.7. That's not to say it can't or won't be done, just that there's no plan for it yet. I think it falls to Scott's team to decide whether they want to (and can) schedule such a backport for an upcoming release.
Comment 6 Jose A. Rivera 2018-01-10 08:30:34 EST
Oh, I forgot about something. We already merged this PR in master:

https://github.com/openshift/openshift-ansible/pull/6182

And it is currently being backported to 3.7:

https://github.com/openshift/openshift-ansible/pull/6677

So the timeline is now whatever it takes to get it merged and cut a release. :)
Comment 7 Scott Dodson 2018-01-19 14:17:52 EST
The referenced PR is merged, moving to ON_QA.
Comment 8 Wenkai Shi 2018-01-22 02:43:40 EST
Verified with version openshift-ansible-3.7.24-1.git.0.18a2c6a.el7: the parameter "openshift_storage_glusterfs_storageclass_default=true" has the intended effect.

# cat hosts
...
openshift_storage_glusterfs_storageclass_default=true
...

# oc get sc 
NAME                          TYPE
glusterfs-storage (default)   kubernetes.io/glusterfs
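
Not part of the original verification steps, but the "(default)" marker above corresponds to the is-default-class annotation on the StorageClass object, which can be inspected directly if needed:

# oc get sc glusterfs-storage -o yaml | grep -i is-default-class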
Comment 12 errata-xmlrpc 2018-04-05 05:34:33 EDT
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2018:0636
