Red Hat Bugzilla – Bug 1529048
dynamic storage for logging from glusterfs does not work from the installer
Last modified: 2018-04-15 22:00:49 EDT
Description of problem:
After cluster 3.7 installation (with glusterfs as storage), the glusterfs-storage storage class is not tagged as default:

# oc describe sc glusterfs-storage
Name:            glusterfs-storage
IsDefaultClass:  No
Annotations:     <none>
Provisioner:     kubernetes.io/glusterfs
Parameters:      resturl=http://heketi-storage-glusterfs.apps.example.com,restuser=admin,secretName=heketi-storage-admin-secret,secretNamespace=glusterfs
Events:          <none>

Therefore 6 pods (with PV claims) are not created properly:

# oc get pvc --all-namespaces | grep Pending
logging           logging-es-0          Pending   24m
logging           logging-es-1          Pending   23m
logging           logging-es-2          Pending   23m
openshift-infra   metrics-cassandra-1   Pending   26m
openshift-infra   metrics-cassandra-2   Pending   26m
openshift-infra   metrics-cassandra-3   Pending   26m

# oc project openshift-infra && oc describe pvc metrics-cassandra-1
Now using project "openshift-infra" on server "https://console.example.com:8443".
Name:          metrics-cassandra-1
Namespace:     openshift-infra
StorageClass:
Status:        Pending
Volume:
Labels:        metrics-infra=hawkular-cassandra
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"labels":{"metrics-infra":"hawkular-cassandra"},"name":"metrics-cassandr...
Capacity:
Access Modes:
Events:
  FirstSeen   LastSeen   Count   From                          SubObjectPath   Type     Reason          Message
  ---------   --------   -----   ----                          -------------   ----     ------          -------
  27m         2m         106     persistentvolume-controller                   Normal   FailedBinding   no persistent volumes available for this claim and no storage class is set

Version-Release number of the following components:
rpm -q openshift-ansible
rpm -q ansible
ansible --version

How reproducible:

Steps to Reproduce:
1.
2.
3.

Actual results:
Pods were stuck in Pending state when using dynamic storage for logging with glusterfs.

Expected results:
The cluster should be ready after installation without any additional fixes.

Additional info:
Please attach logs from ansible-playbook with the -vvv flag
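As a possible manual workaround (a sketch, not part of the original report; the storage class name glusterfs-storage is taken from the output above), the existing storage class could be marked as default with the standard Kubernetes annotation, after which new PVCs created without an explicit storage class should bind dynamically (already-Pending claims may need to be deleted and recreated):

# oc patch storageclass glusterfs-storage \
    -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

On 3.7-era clusters the beta annotation storageclass.beta.kubernetes.io/is-default-class may apply instead.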
PR: https://github.com/openshift/openshift-ansible/pull/6567
Product management should decide whether we want this enabled by default or not, possibly based on whether they're installing Origin or OCP. Let's update the PR per Jose's comments, as he would know best what product management intends to happen here.
I see that it is fixed in the 3.7 upstream code now. Is there any timeline for when this can be released?
Hello guys, can we have a response to comment #3? We are looking for an ETA if possible.
Heyo. Just got back from PTO this week. :) There is currently no specific timeline for getting this into 3.7. That's not to say it can't or won't be done, just that there's no plan for it. I think it would fall to Scott's team whether they want to (and can) schedule such a backport for an upcoming release.
Oh, I forgot about something. We already merged this PR in master: https://github.com/openshift/openshift-ansible/pull/6182
And it is currently being backported to 3.7: https://github.com/openshift/openshift-ansible/pull/6677
So the timeline is now whatever it takes to get it merged and cut a release. :)
The referenced PR is merged; moving to ON_QA.
Verified with version openshift-ansible-3.7.24-1.git.0.18a2c6a.el7; the parameter "openshift_storage_glusterfs_storageclass_default=true" takes effect.

# cat hosts
...
openshift_storage_glusterfs_storageclass_default=true
...

# oc get sc
NAME                          TYPE
glusterfs-storage (default)   kubernetes.io/glusterfs
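For anyone else hitting this, a minimal inventory sketch of where the parameter goes (hedged example; only openshift_storage_glusterfs_storageclass_default itself is confirmed by this bug, while the [OSEv3:vars] section, the [glusterfs] host group, and the glusterfs_devices values follow the usual openshift-ansible conventions and would need to match your environment):

# cat hosts
...
[OSEv3:vars]
...
openshift_storage_glusterfs_storageclass_default=true
...

[glusterfs]
node1.example.com glusterfs_devices='[ "/dev/sdb" ]'
node2.example.com glusterfs_devices='[ "/dev/sdb" ]'
node3.example.com glusterfs_devices='[ "/dev/sdb" ]'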
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHBA-2018:0636