Bug 1491219 - Default 'gluster-s3-storageclass.yaml' fails to create
Summary: Default 'gluster-s3-storageclass.yaml' fails to create
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: cns-deploy-tool
Version: cns-3.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: CNS 3.6
Assignee: Michael Adam
QA Contact: Prasanth
URL:
Whiteboard:
Depends On: 1474317
Blocks: 1445448
 
Reported: 2017-09-13 10:52 UTC by Prasanth
Modified: 2018-12-06 19:58 UTC
CC List: 6 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-10-11 07:14:24 UTC
Embargoed:




Links
Red Hat Product Errata RHEA-2017:2881 (SHIPPED_LIVE): cns-deploy-tool bug fix and enhancement update (last updated 2017-10-11 11:11:43 UTC)

Description Prasanth 2017-09-13 10:52:18 UTC
Description of problem:

Default 'gluster-s3-storageclass.yaml' fails to create. 

# oc create -f ./gluster-s3-storageclass.yaml
The StorageClass "${STORAGE_CLASS}" is invalid: metadata.name: Invalid value: "${STORAGE_CLASS}": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

Version-Release number of selected component (if applicable):
cns-deploy-5.0.0-34.el7rhgs.x86_64

How reproducible: 100%


Steps to Reproduce:
1. cd /usr/share/heketi/templates/
2. oc create -f ./gluster-s3-storageclass.yaml

Actual results: The default gluster-s3-storageclass.yaml shipped with the cns-deploy rpm does not work


Expected results: The default gluster-s3-storageclass.yaml shipped with the cns-deploy rpm should work


Additional info:

# pwd
/usr/share/heketi/templates
[root@dhcp46-186 templates]# ls -al
total 44
drwxr-xr-x. 2 root root 4096 Sep 13 16:13 .
drwxr-xr-x. 3 root root   51 Sep 11 14:07 ..
-rw-r--r--. 1 root root 2922 Sep  8 22:01 deploy-heketi-template.yaml
-rw-r--r--. 1 root root 2238 Sep  8 22:01 glusterblock-provisioner.yaml
-rw-r--r--. 1 root root 4444 Sep  8 22:01 glusterfs-template.yaml
-rw-r--r--. 1 root root  717 Sep  8 22:01 gluster-s3-pvcs.yaml
-rw-r--r--. 1 root root  329 Sep  8 22:01 gluster-s3-storageclass.yaml
-rw-r--r--. 1 root root 3249 Sep  8 22:01 gluster-s3-template.yaml
-rw-r--r--. 1 root root 1327 Sep  8 22:01 heketi.json.template
-rw-r--r--. 1 root root  127 Aug 24 20:01 heketi-service-account.yaml
-rw-r--r--. 1 root root 2952 Sep  8 22:01 heketi-template.yaml

# cat gluster-s3-storageclass.yaml 
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: ${STORAGE_CLASS}
  labels:
    glusterfs: s3-sc
    gluster-s3: sc
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://${HEKETI_URL}"
  restuser: "admin"
  secretNamespace: "${NAMESPACE}"
  secretName: "heketi-${NAMESPACE}-admin-secret"
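
For reference, a minimal sketch of what the template resolves to once the placeholders are filled in; the storage class name, heketi route and namespace below are example values only, not shipped defaults:

---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: gluster-s3-store
  labels:
    glusterfs: s3-sc
    gluster-s3: sc
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-storage-project.cloudapps.mystorage.com"
  restuser: "admin"
  secretNamespace: "storage-project"
  secretName: "heketi-storage-project-admin-secret"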

Comment 1 Prasanth 2017-09-13 13:28:22 UTC
The creation of the default PVCs fails as well:

# oc create -f ./gluster-s3-pvcs.yaml
persistentvolumeclaim "gluster-s3-meta-claim" created
Error from server (BadRequest): PersistentVolumeClaim in version "v1" cannot be handled as a PersistentVolumeClaim: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
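
This looks like the same class of problem: the storage request in the shipped gluster-s3-pvcs.yaml is presumably the literal placeholder ${VOLUME_CAPACITY}, which does not match the Kubernetes quantity format (e.g. 2Gi). A minimal sketch of substituting the placeholders before creating, with 2Gi and gluster-s3-store as example values:

# sed -e 's/${VOLUME_CAPACITY}/2Gi/g' -e 's/${STORAGE_CLASS}/gluster-s3-store/g' /usr/share/heketi/templates/gluster-s3-pvcs.yaml | oc create -f -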

Comment 2 Humble Chirammal 2017-09-13 17:31:11 UTC
I don't think we need a separate bug for this. Anyway, we are tracking manual deployment via https://bugzilla.redhat.com/show_bug.cgi?id=1474317. Once the steps are available, if any of them fail, everything can be consolidated there. I am closing this bug; please feel free to reopen if you have a different thought.

*** This bug has been marked as a duplicate of bug 1474317 ***

Comment 3 Prasanth 2017-09-13 17:44:14 UTC
(In reply to Humble Chirammal from comment #2)
> I don't think we need a separate bug for this. Anyway, we are tracking
> manual deployment via https://bugzilla.redhat.com/show_bug.cgi?id=1474317.
> Once the steps are available, if any of them fail, everything can be
> consolidated there. I am closing this bug; please feel free to reopen if
> you have a different thought.
> 
> *** This bug has been marked as a duplicate of bug 1474317 ***

I don't think we should confuse this bug with the deployment bug. This bug is about the default yaml files that we ship in the cns-deploy package for S3 deployment not working out of the box. So I would like to keep this BZ open until we conclude that the default yaml files (StorageClass, PVC) are indeed correct and need NO further modification to work. If that's the case, we can close this BZ, but not until then.

Comment 4 Humble Chirammal 2017-09-13 18:30:22 UTC
(In reply to Prasanth from comment #3)
>  (In reply to Humble Chirammal from comment #2)
> > I don't think we need a separate bug for this. Anyway, we are tracking
> > manual deployment via https://bugzilla.redhat.com/show_bug.cgi?id=1474317.
> > Once the steps are available, if any of them fail, everything can be
> > consolidated there. I am closing this bug; please feel free to reopen if
> > you have a different thought.
> > 
> > *** This bug has been marked as a duplicate of bug 1474317 ***
> 
> I don't think we should confuse this bug with the deployment bug. This bug
> is about the default yaml files that we ship in the cns-deploy package for
> S3 deployment not working out of the box. So I would like to keep this BZ
> open until we conclude that the default yaml files (StorageClass, PVC) are
> indeed correct and need NO further modification to work. If that's the
> case, we can close this BZ, but not until then.

I don't know whether you noticed the error message: it is simply that the string "${STORAGE_CLASS}" cannot be parsed. This is an ENV variable, so it has to be filled in; it is not that "something is wrong with the YAML file" as you mentioned. From that angle, the bugzilla description below does not make much sense.

-----snip--

Actual results: The default gluster-s3-storageclass.yaml shipped with the cns-deploy rpm does not work


Expected results: The default gluster-s3-storageclass.yaml shipped with the cns-deploy rpm should work

--/snip--


However, if you want to keep it open, please feel free. But by that logic, you would have to open bugs against all the S3-related YAMLs.
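
Since the placeholders are plain shell-style variables, sed substitution is one option; a sketch of an alternative, assuming the values are exported in the environment and envsubst (from gettext) is available on the host:

# export STORAGE_CLASS=gluster-s3-store HEKETI_URL=heketi-storage-project.cloudapps.mystorage.com NAMESPACE=storage-project
# envsubst < /usr/share/heketi/templates/gluster-s3-storageclass.yaml | oc create -f -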

Comment 5 Humble Chirammal 2017-09-13 19:00:43 UTC
Based on https://bugzilla.redhat.com/show_bug.cgi?id=1474317#c20, I am moving the status of this bug to "ON_QA".

Comment 6 Humble Chirammal 2017-09-13 19:01:44 UTC
I will get PM ack on this soon.

Comment 9 Prasanth 2017-09-14 09:02:52 UTC
Test results:

# sed  -e 's/${HEKETI_URL}/heketi-storage-project.cloudapps.mystorage.com/g' -e 's/${STORAGE_CLASS}/gluster-s3-store/g' -e  's/${NAMESPACE}/storage-project/g' /usr/share/heketi/templates/gluster-s3-storageclass.yaml | oc create -f -
storageclass "gluster-s3-store" created

 
# oc get sc
NAME               TYPE
gluster-s3-store   kubernetes.io/glusterfs


# sed -e 's/${VOLUME_CAPACITY}/2Gi/g'  -e  's/${STORAGE_CLASS}/gluster-s3-store/g'  /usr/share/heketi/templates/gluster-s3-pvcs.yaml | oc create -f -
persistentvolumeclaim "gluster-s3-claim" created
persistentvolumeclaim "gluster-s3-meta-claim" created


# oc get pvc
NAME                    STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS       AGE
gluster-s3-claim        Bound     pvc-883719aa-9926-11e7-aef4-005056a50e3b   2Gi        RWX           gluster-s3-store   24s
gluster-s3-meta-claim   Bound     pvc-8838e302-9926-11e7-aef4-005056a50e3b   1Gi        RWX           gluster-s3-store   24s


# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                                   STORAGECLASS       REASON    AGE
pvc-883719aa-9926-11e7-aef4-005056a50e3b   2Gi        RWX           Delete          Bound     storage-project/gluster-s3-claim        gluster-s3-store             14s
pvc-8838e302-9926-11e7-aef4-005056a50e3b   1Gi        RWX           Delete          Bound     storage-project/gluster-s3-meta-claim   gluster-s3-store             10s
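
As an optional cross-check (a minimal sketch; gluster-s3-store is the example name used above), the substituted provisioner parameters can be inspected with:

# oc describe storageclass gluster-s3-store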

Comment 10 Prasanth 2017-09-14 09:03:47 UTC
Based on Comment 9, marking this BZ as Verified.

Comment 12 errata-xmlrpc 2017-10-11 07:14:24 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2881

Comment 13 vinutha 2018-12-06 19:58:01 UTC
Marking qe-test-coverage as "-" since the preferred mode of deployment is via ansible.

