Description of problem:
DeploymentConfig is missing in gluster-s3-template.yaml

Version-Release number of selected component (if applicable):
rhgs3/rhgs-s3-server-rhel7:3.3.0-3

How reproducible:
Always

Steps to Reproduce:
1.
2.
3.

Actual results:
DeploymentConfig is missing in gluster-s3-template.yaml

Expected results:
A DeploymentConfig / rc should be added to gluster-s3-template.yaml, similar to the one we have for heketi:

###########
- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    name: heketi
    labels:
      glusterfs: heketi-dc
    annotations:
      description: Defines how to deploy Heketi
  spec:
    replicas: 1
    selector:
      glusterfs: heketi-pod
    triggers:
    - type: ConfigChange
    strategy:
      type: Recreate
###########

Additional info:
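[Editor's note] For illustration, a minimal sketch of what the missing section could look like, modeled on the heketi snippet above. The name gluster-s3-dc matches what the fixed template eventually creates (see the oc new-app output later in this bug); the label and selector values here are assumptions, not the shipped template:

###########
- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    name: gluster-s3-dc
    labels:
      glusterfs: s3-dc            # assumed label value, for illustration only
    annotations:
      description: Defines how to deploy the gluster S3 object storage service
  spec:
    replicas: 1
    selector:
      glusterfs: s3-pod           # assumed selector value, for illustration only
    triggers:
    - type: ConfigChange
    strategy:
      type: Recreate
###########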
Upstream patch posted: https://github.com/gluster/gluster-kubernetes/pull/283
Upstream patch merged. With this patch, we can create one account (corresponding to a gluster volume) per namespace. It is not possible to create more accounts in the same namespace. This limitation will be addressed in the future.
Could you try launching the same pod like this:

oc new-app gluster-s3-template.yaml --param=S3_VOLUMES=testvolume --param=S3_USER=adminuser --param=S3_PASSWORD=itsmine --param=VOLUME_CAPACITY=2Gi

In the last patch, GLUSTER_VOLUMES was renamed to S3_VOLUMES (instead of S3_ACCOUNT), hence the issue.
(In reply to Saravanakumar from comment #10)
> Could you try launching the same pod like this:
> 
> oc new-app gluster-s3-template.yaml --param=S3_VOLUMES=testvolume
> --param=S3_USER=adminuser --param=S3_PASSWORD=itsmine
> --param=VOLUME_CAPACITY=2Gi
> 
> In the last patch, GLUSTER_VOLUMES was renamed to S3_VOLUMES (instead of
> S3_ACCOUNT), hence the issue.

As mentioned, I've tried the above and it didn't work:

#######
# oc new-app gluster-s3-template.yaml --param=S3_VOLUMES=testvolume --param=S3_USER=adminuser --param=S3_PASSWORD=itsmine --param=VOLUME_CAPACITY=2Gi
error: unexpected parameter name "S3_VOLUMES"
#######

Please let me know how to proceed further.
With Talur's help, I have built the S3 container image to reflect S3_ACCOUNT.

So, please use the latest S3 image (rhgs-s3-server-rhel7:3.3.0-6) for testing.

You can use the original deployment command as mentioned in comment #7.
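[Editor's note] To confirm that the pod is actually running the new image, something like the following can be used (the pod name here is hypothetical; substitute the one from your deployment):

# oc get pods | grep gluster-s3
# oc describe pod <gluster-s3-pod-name> | grep -i image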
(In reply to Saravanakumar from comment #12)
> With Talur's help, I have built the S3 container image to reflect
> S3_ACCOUNT.
> 
> So, please use the latest S3 image (rhgs-s3-server-rhel7:3.3.0-6) for
> testing.
> 
> You can use the original deployment command as mentioned in comment #7.

Tried it again with the latest build, and now it's failing with the following error:

#######
# oc new-app gluster-s3-template.yaml --param=S3_ACCOUNT=testvolume --param=S3_USER=adminuser --param=S3_PASSWORD=itsmine --param=VOLUME_CAPACITY=2Gi
error: unexpected parameter name "VOLUME_CAPACITY"
#######

Has something changed recently which has broken this? Moving it back to Assigned again.
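[Editor's note] To avoid guessing at parameter names while the template keeps changing, the parameters a template actually accepts can be listed directly; a sketch using standard oc functionality (not a command from this thread):

# oc process --parameters -f gluster-s3-template.yaml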
With the recent changes to the S3 template, S3 storage is now deployed using cns-deploy, hence the changes in the templates. Deployment via cns-deploy takes precedence.

Manual deployment will need corresponding changes in the documentation.
(In reply to Saravanakumar from comment #14)
> With the recent changes to the S3 template, S3 storage is now deployed
> using cns-deploy, hence the changes in the templates. Deployment via
> cns-deploy takes precedence.
> 
> Manual deployment will need corresponding changes in the documentation.

I'm confused. What does this really mean? That cns-deploy works, but manual deployment according to the documentation fails?
This bug is also strange in that it does not describe at all what goes wrong, but says that some section is missing from the gluster-s3-template.yaml file.

What are the criteria for having this verified?
(In reply to Michael Adam from comment #15)
> (In reply to Saravanakumar from comment #14)
> > With the recent changes to the S3 template, S3 storage is now deployed
> > using cns-deploy, hence the changes in the templates. Deployment via
> > cns-deploy takes precedence.
> > 
> > Manual deployment will need corresponding changes in the documentation.
> 
> I'm confused. What does this really mean? That cns-deploy works, but manual
> deployment according to the documentation fails?

Yes, cns-deploy works. Manual deployment (as per the documentation) fails because the documentation still refers to the older template. For example, VOLUME_CAPACITY has moved out of the template file into another file.
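[Editor's note] The parameter now lives in the PVC template shipped alongside (the path below appears in the deployment steps later in this bug); a quick way to confirm where it moved to:

# grep VOLUME_CAPACITY /usr/share/heketi/templates/gluster-s3-pvcs.yaml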
(In reply to Michael Adam from comment #16)
> This bug is also strange in that it does not describe at all what goes
> wrong, but says that some section is missing from the
> gluster-s3-template.yaml file.
> 
> What are the criteria for having this verified?

Deployment of gluster S3, which used to work earlier, should still succeed with the latest gluster-s3-template.yaml file that we ship with the product, when following the official guide [1].

[1] https://access.qa.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html-single/container-native_storage_for_openshift_container_platform/#S3_Object_Store
*** Bug 1491219 has been marked as a duplicate of this bug. ***
Please follow these steps for setting up S3 storage using the updated gluster-s3-template file.

1. Create the secret:

oc create secret generic heketi-${NAMESPACE}-admin-secret --from-literal=key=${ADMIN_KEY} --type=kubernetes.io/glusterfs

For example:

oc create secret generic heketi-store-project1-admin-secret --from-literal=key= --type=kubernetes.io/glusterfs

2. Label the secret:

oc label --overwrite secret heketi-${NAMESPACE}-admin-secret glusterfs=s3-heketi-${NAMESPACE}-admin-secret gluster-s3=heketi-${NAMESPACE}-admin-secret

For example:

oc label --overwrite secret heketi-store-project1-admin-secret glusterfs=s3-heketi-store-project1-admin-secret gluster-s3=heketi-store-project1-admin-secret

3. Create the storage class:

cat /usr/share/heketi/templates/gluster-s3-storageclass.yaml | sed s/'${HEKETI_URL}'/heketi-store-project1.cloudapps.mystorage.com/g | sed s/'${STORAGE_CLASS}'/gluster-s3-store/g | sed s/'${NAMESPACE}'/store-project1/g | oc create -f -

Use the HEKETI_URL and NAMESPACE from the current setup. Set a STORAGE_CLASS name.

4. Create the PVCs using the storage class:

cat /usr/share/heketi/templates/gluster-s3-pvcs.yaml | sed s/'${VOLUME_CAPACITY}'/2Gi/g | sed s/'${STORAGE_CLASS}'/gluster-s3-store/g | oc create -f -

Adjust VOLUME_CAPACITY as per your needs. Use the STORAGE_CLASS created in the previous step.

5. Launch the S3 storage service:

oc new-app /usr/share/heketi/templates/gluster-s3-template.yaml --param=S3_ACCOUNT=testvolume --param=S3_USER=adminuser --param=S3_PASSWORD=itsmine --param=PVC=gluster-s3-claim --param=META_PVC=gluster-s3-meta-claim

Set the S3_ACCOUNT name, S3_USER name, and S3_PASSWORD. PVC and META_PVC are obtained from the previous step.

6. Get the S3 URL for access:

s3_storage_url=$(oc get routes | grep "gluster.*s3" | awk '{print $2}')

Follow the same documentation for testing. (A quick sanity check is sketched below.)
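[Editor's note] As a quick sanity check after step 5, the objects the template creates can be listed (a sketch; the expected names match the oc new-app output shown in a later comment):

# oc get dc,svc,routes | grep gluster-s3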
(In reply to Saravanakumar from comment #20)
> Please follow these steps for setting up S3 storage using the updated
> gluster-s3-template file.
> [...]

Based on the above, and since the documentation team has been notified of these changes, I am moving the status of the bug to "ON_QA".
Updated commands:

Please follow these steps for setting up S3 storage using the updated gluster-s3-template file.

1. Create the secret:

oc create secret generic heketi-${NAMESPACE}-admin-secret --from-literal=key=${ADMIN_KEY} --type=kubernetes.io/glusterfs

For example:

oc create secret generic heketi-store-project1-admin-secret --from-literal=key= --type=kubernetes.io/glusterfs

2. Label the secret:

oc label --overwrite secret heketi-${NAMESPACE}-admin-secret glusterfs=s3-heketi-${NAMESPACE}-admin-secret gluster-s3=heketi-${NAMESPACE}-admin-secret

For example:

oc label --overwrite secret heketi-store-project1-admin-secret glusterfs=s3-heketi-store-project1-admin-secret gluster-s3=heketi-store-project1-admin-secret

The above two steps are optional.

3. Create the storage class.

Use the HEKETI_URL and NAMESPACE from the current setup. Set a STORAGE_CLASS name.

# sed -e 's/${HEKETI_URL}/heketi-store-project1.cloudapps.mystorage.com/g' -e 's/${STORAGE_CLASS}/gluster-s3-store/g' -e 's/${NAMESPACE}/store-project1/g' /usr/share/heketi/templates/gluster-s3-storageclass.yaml | oc create -f -
storageclass "gluster-s3-store" created

4. Create the PVCs using the storage class.

# sed -e 's/${VOLUME_CAPACITY}/2Gi/g' -e 's/${STORAGE_CLASS}/gluster-s3-store/g' /usr/share/heketi/templates/gluster-s3-pvcs.yaml | oc create -f -
persistentvolumeclaim "gluster-s3-claim" created
persistentvolumeclaim "gluster-s3-meta-claim" created

Adjust VOLUME_CAPACITY as per your needs. Use the STORAGE_CLASS created in the previous step.

Wait till the PVCs are bound:

# oc get pvc
NAME                    STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
gluster-s3-claim        Bound     pvc-0b7f75ef-9920-11e7-9309-00151e000016   2Gi        RWX           2m
gluster-s3-meta-claim   Bound     pvc-0b87a698-9920-11e7-9309-00151e000016   1Gi        RWX           2m

5. Launch the S3 storage service.

Set the S3_ACCOUNT name, S3_USER name, and S3_PASSWORD. PVC and META_PVC are obtained from the previous step.

# oc new-app /usr/share/heketi/templates/gluster-s3-template.yaml \
  --param=S3_ACCOUNT=testvolume --param=S3_USER=adminuser \
  --param=S3_PASSWORD=itsmine --param=PVC=gluster-s3-claim \
  --param=META_PVC=gluster-s3-meta-claim
--> Deploying template "store-project1/gluster-s3" for "/usr/share/heketi/templates/gluster-s3-template.yaml" to project store-project1

     gluster-s3
     ---------
     Gluster s3 service template

     * With parameters:
        * S3 Account Name=testvolume
        * S3 User=adminuser
        * S3 User Password=itsmine
        * Primary GlusterFS-backed PVC=gluster-s3-claim
        * Metadata GlusterFS-backed PVC=gluster-s3-meta-claim

--> Creating resources ...
    service "gluster-s3-service" created
    route "gluster-s3-route" created
    deploymentconfig "gluster-s3-dc" created
--> Success
    Run 'oc status' to view your app.

6. Get the S3 URL for access:

# s3_storage_url=$(oc get routes | grep "gluster.*s3" | awk '{print $2}')

Follow the same documentation for testing. (A sketch of that verification follows below.)
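[Editor's note] The documentation linked earlier exercises the deployed service with signed S3 requests via s3curl. A minimal sketch, assuming the upstream s3curl.pl helper is installed and the gluster-s3 route hostname has been added to its endpoints list; "bucket1" is an arbitrary example bucket name, not something created by the steps above.

Create a bucket with a signed PUT, using the S3_ACCOUNT:S3_USER and S3_PASSWORD from step 5:

# s3curl.pl --debug --id "testvolume:adminuser" --key "itsmine" --put /dev/null -- -k -v http://$s3_storage_url/bucket1

List the bucket back with a signed GET:

# s3curl.pl --debug --id "testvolume:adminuser" --key "itsmine" -- -k -v http://$s3_storage_url/bucket1/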
(In reply to Saravanakumar from comment #22)
> Updated commands:
> 
> Please follow these steps for setting up S3 storage using the updated
> gluster-s3-template file.
> [...]

Thanks, Saravana, for providing the working deployment steps based on all the latest changes. I'll follow the same steps and validate this bug now. However, do ensure that the old steps in our CNS 3.6 guide are removed and replaced with these new steps as well.
Based on comment #24, marking this as Verified.
Since the problem described in this bug report should be resolved in a recent advisory, it has been closed with a resolution of ERRATA. For information on the advisory, and where to find the updated files, follow the link below. If the solution does not work for you, open a new bug report. https://access.redhat.com/errata/RHEA-2017:2880