Description of problem:

Default 'gluster-s3-storageclass.yaml' fails to create.

# oc create -f ./gluster-s3-storageclass.yaml
The StorageClass "${STORAGE_CLASS}" is invalid: metadata.name: Invalid value: "${STORAGE_CLASS}": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start and end with an alphanumeric character (e.g. 'example.com', regex used for validation is '[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*')

Version-Release number of selected component (if applicable):
cns-deploy-5.0.0-34.el7rhgs.x86_64

How reproducible:
100%

Steps to Reproduce:
1. cd /usr/share/heketi/templates/
2. oc create -f ./gluster-s3-storageclass.yaml

Actual results:
The default gluster-s3-storageclass.yaml we are shipping with the cns-deploy rpm is not working.

Expected results:
The default gluster-s3-storageclass.yaml we are shipping with the cns-deploy rpm should work.

Additional info:

# pwd
/usr/share/heketi/templates
[root@dhcp46-186 templates]# ls -al
total 44
drwxr-xr-x. 2 root root 4096 Sep 13 16:13 .
drwxr-xr-x. 3 root root   51 Sep 11 14:07 ..
-rw-r--r--. 1 root root 2922 Sep  8 22:01 deploy-heketi-template.yaml
-rw-r--r--. 1 root root 2238 Sep  8 22:01 glusterblock-provisioner.yaml
-rw-r--r--. 1 root root 4444 Sep  8 22:01 glusterfs-template.yaml
-rw-r--r--. 1 root root  717 Sep  8 22:01 gluster-s3-pvcs.yaml
-rw-r--r--. 1 root root  329 Sep  8 22:01 gluster-s3-storageclass.yaml
-rw-r--r--. 1 root root 3249 Sep  8 22:01 gluster-s3-template.yaml
-rw-r--r--. 1 root root 1327 Sep  8 22:01 heketi.json.template
-rw-r--r--. 1 root root  127 Aug 24 20:01 heketi-service-account.yaml
-rw-r--r--. 1 root root 2952 Sep  8 22:01 heketi-template.yaml

# cat gluster-s3-storageclass.yaml
---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: ${STORAGE_CLASS}
  labels:
    glusterfs: s3-sc
    gluster-s3: sc
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://${HEKETI_URL}"
  restuser: "admin"
  secretNamespace: "${NAMESPACE}"
  secretName: "heketi-${NAMESPACE}-admin-secret"
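For context, oc create does not expand shell-style ${VAR} placeholders, so the literal string "${STORAGE_CLASS}" reaches the API server and fails DNS-1123 validation of metadata.name. A minimal workaround sketch, substituting the variables before feeding the YAML to oc; the values here (my-s3-sc, heketi-example.cloudapps.example.com, my-project) are illustrative examples, not shipped defaults:

# Fill in the template variables before handing the YAML to oc.
# All three values below are placeholders for this sketch only.
sed -e 's/${STORAGE_CLASS}/my-s3-sc/g' \
    -e 's/${HEKETI_URL}/heketi-example.cloudapps.example.com/g' \
    -e 's/${NAMESPACE}/my-project/g' \
    /usr/share/heketi/templates/gluster-s3-storageclass.yaml | oc create -f -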
Also, the creation of the default PVCs fails as well:

# oc create -f ./gluster-s3-pvcs.yaml
persistentvolumeclaim "gluster-s3-meta-claim" created
Error from server (BadRequest): PersistentVolumeClaim in version "v1" cannot be handled as a PersistentVolumeClaim: quantities must match the regular expression '^([+-]?[0-9.]+)([eEinumkKMGTP]*[-+]?[0-9]*)$'
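The PVC failure looks like the same class of problem: the template's ${VOLUME_CAPACITY} placeholder is not a valid Kubernetes quantity, hence the regex error. A sketch of the equivalent substitution, assuming an example capacity and class name (2Gi and my-s3-sc are illustrative only):

# ${VOLUME_CAPACITY} must become a valid quantity such as 2Gi before
# the claim can be parsed; the values below are examples only.
sed -e 's/${VOLUME_CAPACITY}/2Gi/g' \
    -e 's/${STORAGE_CLASS}/my-s3-sc/g' \
    /usr/share/heketi/templates/gluster-s3-pvcs.yaml | oc create -f -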
I don't think we need a separate bug for this. In any case, we are tracking manual deployment via https://bugzilla.redhat.com/show_bug.cgi?id=1474317. Once the steps are available, any failure in them can be consolidated there. I am closing this bug; please feel free to reopen if you have a different thought.

*** This bug has been marked as a duplicate of bug 1474317 ***
(In reply to Humble Chirammal from comment #2)
> I don't think we need a separate bug for this. In any case, we are tracking
> manual deployment via https://bugzilla.redhat.com/show_bug.cgi?id=1474317.
> Once the steps are available, any failure in them can be consolidated there.
> I am closing this bug; please feel free to reopen if you have a different
> thought.
>
> *** This bug has been marked as a duplicate of bug 1474317 ***

I don't think we should confuse this bug with the deployment bug. This bug is about the default YAML files that we ship in the cns-deploy package for S3 deployment not working out of the box. So I would like to keep this BZ open until we conclude that the default YAML files (StorageClass, PVC) are indeed correct and need NO further modification to work. If that's the case, we can close this BZ, but not until then.
(In reply to Prasanth from comment #3)
> (In reply to Humble Chirammal from comment #2)
> > I don't think we need a separate bug for this. In any case, we are
> > tracking manual deployment via
> > https://bugzilla.redhat.com/show_bug.cgi?id=1474317. Once the steps are
> > available, any failure in them can be consolidated there. I am closing
> > this bug; please feel free to reopen if you have a different thought.
> >
> > *** This bug has been marked as a duplicate of bug 1474317 ***
>
> I don't think we should confuse this bug with the deployment bug. This bug
> is about the default YAML files that we ship in the cns-deploy package for
> S3 deployment not working out of the box. So I would like to keep this BZ
> open until we conclude that the default YAML files (StorageClass, PVC) are
> indeed correct and need NO further modification to work. If that's the
> case, we can close this BZ, but not until then.

I don't know if you noticed the error message: it simply says that the string "${STORAGE_CLASS}" cannot be parsed. This is an ENV variable, so it has to be filled in; it is not that "something is wrong with the YAML file" as you mentioned. From that angle, the bugzilla description below does not make much sense.

-----snip--
Actual results:
The default gluster-s3-storageclass.yaml we are shipping with the cns-deploy rpm is not working

Expected results:
The default gluster-s3-storageclass.yaml we are shipping with the cns-deploy rpm should work
--/snip--

However, if you want to keep it open, please feel free. But in that case you would have to open bugs against all the S3-related YAMLs.
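If the placeholders are indeed meant to be filled from environment variables, envsubst from the gettext package is one way to do that in a single pass; a sketch under that assumption (the exported values are examples only, and envsubst is not part of the shipped tooling):

# Export example values, then let envsubst rewrite every ${VAR} occurrence.
export STORAGE_CLASS=my-s3-sc
export HEKETI_URL=heketi-example.cloudapps.example.com
export NAMESPACE=my-project
envsubst < /usr/share/heketi/templates/gluster-s3-storageclass.yaml | oc create -f -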
Based on https://bugzilla.redhat.com/show_bug.cgi?id=1474317#c20, I am moving the status of this bug to "ON_QA".
I will get PM ack on this soon.
Test results:

# sed -e 's/${HEKETI_URL}/heketi-storage-project.cloudapps.mystorage.com/g' -e 's/${STORAGE_CLASS}/gluster-s3-store/g' -e 's/${NAMESPACE}/storage-project/g' /usr/share/heketi/templates/gluster-s3-storageclass.yaml | oc create -f -
storageclass "gluster-s3-store" created

# oc get sc
NAME               TYPE
gluster-s3-store   kubernetes.io/glusterfs

# sed -e 's/${VOLUME_CAPACITY}/2Gi/g' -e 's/${STORAGE_CLASS}/gluster-s3-store/g' /usr/share/heketi/templates/gluster-s3-pvcs.yaml | oc create -f -
persistentvolumeclaim "gluster-s3-claim" created
persistentvolumeclaim "gluster-s3-meta-claim" created

# oc get pvc
NAME                    STATUS    VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS       AGE
gluster-s3-claim        Bound     pvc-883719aa-9926-11e7-aef4-005056a50e3b   2Gi        RWX           gluster-s3-store   24s
gluster-s3-meta-claim   Bound     pvc-8838e302-9926-11e7-aef4-005056a50e3b   1Gi        RWX           gluster-s3-store   24s

# oc get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                                   STORAGECLASS       REASON    AGE
pvc-883719aa-9926-11e7-aef4-005056a50e3b   2Gi        RWX           Delete          Bound     storage-project/gluster-s3-claim        gluster-s3-store             14s
pvc-8838e302-9926-11e7-aef4-005056a50e3b   1Gi        RWX           Delete          Bound     storage-project/gluster-s3-meta-claim   gluster-s3-store             10s
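As an additional sanity check, not part of the transcript above, one could also confirm that the substituted values actually landed in the created StorageClass:

# Optional: inspect the created StorageClass parameters.
oc get sc gluster-s3-store -o yaml | grep -E 'resturl|secretNamespace|secretName'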
Based on Comment 9, marking this BZ as Verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:2881
Marking qe-test-coverage as "-" since the preferred mode of deployment is using Ansible.