Bug 1300710 - RFE: define a default service and endpoint for gluster storage
RFE: define a default service and endpoint for gluster storage
Status: CLOSED ERRATA
Product: OpenShift Container Platform
Classification: Red Hat
Component: RFE
3.1.0
All Linux
unspecified Severity medium
: ---
: 3.4.z
Assigned To: Humble Chirammal
Johnny Liu
:
Depends On:
Blocks:
Reported: 2016-01-21 09:02 EST by Christophe Augello
Modified: 2017-02-22 13:10 EST (History)
10 users

See Also:
Fixed In Version:
Doc Type: Bug Fix
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-02-22 13:10:27 EST
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---


Attachments: None
Description Christophe Augello 2016-01-21 09:02:51 EST
Description of problem:

Add the ability to define a default service and endpoint for Gluster so it can be accessed by all projects. When a project does not have the service and endpoint, a pod using a Gluster PV will fail with:
~~~
Jan 21 12:06:01 node.paas.local atomic-openshift-node[1797]: E0121 12:06:01.279933    1797 glusterfs.go:89] glusterfs: failed to get endpoints glusterfs-cluster[endpoints "glusterfs-cluster" not found]
~~~
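
For reference, the objects the error says are missing are a per-namespace Endpoints/Service pair pointing at the Gluster nodes. A minimal sketch follows; the name `glusterfs-cluster` is taken from the log line above, and the IP address is a placeholder for an actual Gluster server:

~~~
apiVersion: v1
kind: Endpoints
metadata:
  name: glusterfs-cluster      # must match the name in the error above
subsets:
  - addresses:
      - ip: 192.168.122.221    # placeholder Gluster node IP
    ports:
      - port: 1
---
apiVersion: v1
kind: Service
metadata:
  name: glusterfs-cluster      # service with the same name keeps the endpoints persistent
spec:
  ports:
    - port: 1
~~~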

Version-Release number of selected component (if applicable):
OpenShift Enterprise 3.1
Comment 2 Steve Watt 2016-02-01 11:50:26 EST
Christophe, I think what you're looking for can be done by having the OpenShift Administrator create a Persistent Volume for GlusterFS using this guide - https://docs.openshift.com/enterprise/3.1/install_config/persistent_storage/persistent_storage_glusterfs.html

That guide lays out the steps for the admin to create the glusterfs endpoints file and create the Persistent Volume. The Persistent Volume would then be available for all projects and constituents of the OpenShift cluster to claim.
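
A PV of the kind the guide describes might look like the sketch below; the PV name, endpoints name, volume path, and size are illustrative placeholders:

~~~
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-default-volume   # placeholder PV name
spec:
  capacity:
    storage: 2Gi                 # placeholder size
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: glusterfs-cluster # endpoints object created by the admin
    path: myVol1                 # placeholder Gluster volume name
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
~~~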

Does this resolve the issue?
Comment 3 Christophe Augello 2016-02-02 02:49:29 EST
Steve, if I am not mistaken, the above requires doing it in each namespace, which implies sharing the Gluster environment details with all customers. The RFE is meant to make this transparent to our customer's customers by creating a default service and endpoint.

The end goal is that our customer (hosting OSE) would only share the endpoint and the volume name/size for the PV with their customers.

Would this be possible?
Comment 4 Clayton Coleman 2016-02-07 14:43:12 EST
It needs to be possible on the PV to specify a global cluster endpoint or hostname and not require each namespace to have a gluster service.  This is a deficiency in the initial design of GlusterFS (which was our first volume provider that needed this) and is effectively a design flaw.
Comment 5 Dave McCormick 2016-04-14 13:16:04 EDT
Is there any kind of ETA for fixing this flaw/design issue?
Comment 6 Humble Chirammal 2016-12-15 02:38:52 EST
(In reply to Dave McCormick from comment #5)
> Is there any kind of ETA for fixing this flaw/design issue?

With the latest enhancements and the new dynamic provisioning feature (https://github.com/kubernetes/kubernetes/blob/34c873a748bf2e45839d1e3f178470d837f1a587/examples/experimental/persistent-volume-provisioning/README.md), this has been taken care of. Whenever a PV is provisioned, an endpoint and a service are automatically created for that PV in the PVC's namespace. The user experience is now seamless, so I believe this design flaw is fixed.
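
The dynamic provisioning described above is driven by a StorageClass using the `kubernetes.io/glusterfs` provisioner, per the linked README. A sketch is below; the class name and the `resturl` (the heketi REST endpoint) are placeholders:

~~~
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-dynamic            # placeholder class name
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://127.0.0.1:8081" # placeholder heketi endpoint
~~~

A PVC that references this class then triggers provisioning, and the matching endpoints and service are created in the PVC's namespace automatically.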
Comment 7 Steve Watt 2017-02-09 11:37:27 EST
This issue has been fixed as of OpenShift 3.4.
Comment 9 errata-xmlrpc 2017-02-22 13:10:27 EST
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHBA-2017:0289
