Bug 1474317 - Manual Deployment of gluster-s3 is failing with the updated gluster-s3-template.yaml file
Summary: Manual Deployment of gluster-s3 is failing with the updated gluster-s3-template.yaml file
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: rhgs-s3-server-container
Version: cns-3.6
Hardware: Unspecified
OS: Unspecified
Priority: high
Severity: high
Target Milestone: ---
Target Release: CNS 3.6
Assignee: Saravanakumar
QA Contact: Prasanth
URL:
Whiteboard:
Depends On:
Blocks: 1445448 1446216 1491219
 
Reported: 2017-07-24 12:06 UTC by Prasanth
Modified: 2019-02-14 11:43 UTC (History)
CC List: 10 users

Fixed In Version: cns-deploy-5.0.0-13
Doc Type: No Doc Update
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-10-11 07:01:45 UTC
Embargoed:


Attachments


Links
System ID Private Priority Status Summary Last Updated
Red Hat Bugzilla 1446216 0 unspecified CLOSED [Doc RFE] Document support for s3 compatible object store in RHGS via PUT/GET operations (TP) 2021-02-22 00:41:40 UTC
Red Hat Product Errata RHEA-2017:2880 0 normal SHIPPED_LIVE rhgs-s3-server-container bug fix and enhancement update 2017-10-11 11:14:53 UTC

Internal Links: 1446216

Description Prasanth 2017-07-24 12:06:47 UTC
Description of problem:

DeploymentConfig is missing in gluster-s3-template.yaml

Version-Release number of selected component (if applicable):
rhgs3/rhgs-s3-server-rhel7:3.3.0-3

How reproducible: Always


Steps to Reproduce:
1.
2.
3.

Actual results: DeploymentConfig is missing in gluster-s3-template.yaml


Expected results: A DeploymentConfig (or ReplicationController) should be added to gluster-s3-template.yaml, similar to the one we have for heketi (see the illustrative sketch after the heketi snippet below).

###########
- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    name: heketi
    labels:
      glusterfs: heketi-dc
    annotations:
      description: Defines how to deploy Heketi
  spec:
    replicas: 1
    selector:
      glusterfs: heketi-pod
    triggers:
    - type: ConfigChange
    strategy:
      type: Recreate
###########
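
For illustration only, a rough sketch of what an analogous stanza for the gluster-s3 service might look like; the name and label values used here (gluster-s3-dc, glusterfs: s3-dc, glusterfs: s3-pod) are placeholders, not taken from the shipped template.

###########
- kind: DeploymentConfig
  apiVersion: v1
  metadata:
    name: gluster-s3-dc
    labels:
      glusterfs: s3-dc          # placeholder label value
    annotations:
      description: Defines how to deploy the gluster-s3 service
  spec:
    replicas: 1
    selector:
      glusterfs: s3-pod         # placeholder selector value
    triggers:
    - type: ConfigChange
    strategy:
      type: Recreate
###########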


Additional info:

Comment 5 Saravanakumar 2017-07-27 06:18:22 UTC
Upstream patch posted:
https://github.com/gluster/gluster-kubernetes/pull/283

Comment 6 Saravanakumar 2017-08-01 05:48:16 UTC
Upstream patch merged.

With this patch, we can create one account (corresponding to a gluster volume) in a namespace. It is not possible to create more accounts in the same namespace.

This limitation will be addressed in the future.

Comment 10 Saravanakumar 2017-08-29 07:56:00 UTC
Could you try launching the same pod like this:

oc new-app gluster-s3-template.yaml  --param=S3_VOLUMES=testvolume  --param=S3_USER=adminuser --param=S3_PASSWORD=itsmine --param=VOLUME_CAPACITY=2Gi

In the last patch, GLUSTER_VOLUMES was renamed as S3_VOLUMES (instead of S3_ACCOUNT), hence the issue.

Comment 11 Prasanth 2017-08-29 08:33:37 UTC
(In reply to Saravanakumar from comment #10)
> Could you try launching the same pod like this:
> 
> oc new-app gluster-s3-template.yaml  --param=S3_VOLUMES=testvolume 
> --param=S3_USER=adminuser --param=S3_PASSWORD=itsmine
> --param=VOLUME_CAPACITY=2Gi
> 
> In the last patch, GLUSTER_VOLUMES was renamed as S3_VOLUMES (instead of
> S3_ACCOUNT), hence the issue.

As mentioned, I've tried the above and it didn't work.

#######
# oc new-app gluster-s3-template.yaml  --param=S3_VOLUMES=testvolume --param=S3_USER=adminuser --param=S3_PASSWORD=itsmine --param=VOLUME_CAPACITY=2Gi
error: unexpected parameter name "S3_VOLUMES"
#######

Please let me know how to proceed further.

Comment 12 Saravanakumar 2017-08-29 11:21:02 UTC
With Talur's help, I have built the s3 container image to reflect S3_ACCOUNT.

So, please use the latest s3 image (rhgs-s3-server-rhel7:3.3.0-6) for testing.

You can use the original deployment command as mentioned in comment#7.

Comment 13 Prasanth 2017-09-13 11:04:57 UTC
(In reply to Saravanakumar from comment #12)
> With Talur's help, I have build s3 container image to reflect S3_ACCOUNT.
> 
> So, please use latest s3 image (rhgs-s3-server-rhel7:3.3.0-6) for testing. 
> 
> You can use the original command mentioned for deployment as mentioned in
> comment#7.

Tried it again with the latest build and now it's failing with the following error:

#######
#  oc new-app gluster-s3-template.yaml  --param=S3_ACCOUNT=testvolume  --param=S3_USER=adminuser --param=S3_PASSWORD=itsmine --param=VOLUME_CAPACITY=2Gi
error: unexpected parameter name "VOLUME_CAPACITY"
#######

Has something changed recently which has broken this?

Moving it back to Assigned again.
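
As an aside, the set of parameters a given template actually accepts (and hence whether it expects S3_VOLUMES, S3_ACCOUNT, or VOLUME_CAPACITY) can be listed with oc process; the file name below matches the one used in the commands above:

#######
# oc process --parameters -f gluster-s3-template.yaml
#######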

Comment 14 Saravanakumar 2017-09-13 12:15:41 UTC
With the recent changes to the S3 template, S3 storage deployment is now carried out by cns-deploy, hence the changes in the templates. Deployment via cns-deploy takes precedence.

Manual deployment will need some changes in the documentation.

Comment 15 Michael Adam 2017-09-13 13:15:12 UTC
(In reply to Saravanakumar from comment #14)
> With recent changes in S3 template, deploying S3 storage using cns-deploy is
> carried out and hence changes in the templates. Deploying using cns-deploy
> takes more precedence.
> 
> Manual deployment will need some changes in Documentation.

I'm confused. What does this really mean? That cns-deploy works, but manual deployment according to doc fails?

Comment 16 Michael Adam 2017-09-13 13:17:41 UTC
This bug is also strange in that it does not describe at all what goes wrong, but says that some section is missing from the gluster-s3-template.yaml file.

What are the criteria for having this verified?

Comment 17 Saravanakumar 2017-09-13 13:24:47 UTC
(In reply to Michael Adam from comment #15)
> (In reply to Saravanakumar from comment #14)
> > With recent changes in S3 template, deploying S3 storage using cns-deploy is
> > carried out and hence changes in the templates. Deploying using cns-deploy
> > takes more precedence.
> > 
> > Manual deployment will need some changes in Documentation.
> 
> I'm confused. What does this really mean? That cns-deploy works, but manual
> deployment according to doc fails?

Yes, cns-deploy works.

Manual deployment (as per the documentation) fails because it still refers to the older template.

For example, VOLUME_CAPACITY has moved from the template file to another file (gluster-s3-pvcs.yaml).

Comment 18 Prasanth 2017-09-13 16:54:51 UTC
(In reply to Michael Adam from comment #16)
> This bug is also strange in that it does not describe at all what goes
> wrong, but says that some section is missing from the
> gluster-s3-template.yaml file.
> 
> What are the criteria for having this verified?

Deployment of gluster-s3, which used to work earlier, should still succeed with the latest gluster-s3-template.yaml file that we ship with the product, when following the official guide [1].

[1] https://access.qa.redhat.com/documentation/en-us/red_hat_gluster_storage/3.3/html-single/container-native_storage_for_openshift_container_platform/#S3_Object_Store

Comment 19 Humble Chirammal 2017-09-13 17:31:11 UTC
*** Bug 1491219 has been marked as a duplicate of this bug. ***

Comment 20 Saravanakumar 2017-09-13 18:20:03 UTC
Please follow these steps for setting up S3 storage using the updated gluster-s3-template file.

1. Create secret: 
oc create secret generic heketi-${NAMESPACE}-admin-secret --from-literal=key=${ADMIN_KEY} --type=kubernetes.io/glusterfs

For example:
oc create secret generic heketi-store-project1-admin-secret --from-literal=key=  --type=kubernetes.io/glusterfs

2. Label the secret: 
oc label --overwrite secret heketi-${NAMESPACE}-admin-secret glusterfs=s3-heketi-${NAMESPACE}-admin-secret gluster-s3=heketi-${NAMESPACE}-admin-secret

For example:
oc label --overwrite secret heketi-store-project1-admin-secret glusterfs=s3-heketi-store-project1-admin-secret gluster-s3=heketi-store-project1-admin-secret


3. Create storage class
cat /usr/share/heketi/templates/gluster-s3-storageclass.yaml  |    sed s/'${HEKETI_URL}'/heketi-store-project1.cloudapps.mystorage.com/g |  sed s/'${STORAGE_CLASS}'/gluster-s3-store/g | sed s/'${NAMESPACE}'/store-project1/g | oc create -f - 

Use HEKETI_URL and NAMESPACE from the current setup.
Set a STORAGE_CLASS name.

4. Create PVC using storage class
cat /usr/share/heketi/templates/gluster-s3-pvcs.yaml  |  sed s/'${VOLUME_CAPACITY}'/2Gi/g | sed s/'${STORAGE_CLASS}'/gluster-s3-store/g  | oc create -f - 

Adjust VOLUME_CAPACITY as per your needs.
Use STORAGE_CLASS created from above step.

5. Launch S3 storage service
oc new-app  /usr/share/heketi/templates/gluster-s3-template.yaml  --param=S3_ACCOUNT=testvolume  --param=S3_USER=adminuser --param=S3_PASSWORD=itsmine --param=PVC=gluster-s3-claim --param=META_PVC=gluster-s3-meta-claim

Set S3_ACCOUNT name, S3_USER name, S3_PASSWORD.
PVC and META_PVC are obtained from above step.

6. Get S3 url for access:
s3_storage_url=$(oc get routes   | grep "gluster.*s3"  | awk '{print $2}')

Follow the same documentation for testing.
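
For reference, after the substitutions in step 3 the storage class typically ends up looking roughly like the following for the kubernetes.io/glusterfs provisioner; this is only a sketch, the authoritative content comes from the shipped gluster-s3-storageclass.yaml (in particular, restuser and the apiVersion here are assumptions):

###########
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: gluster-s3-store
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi-store-project1.cloudapps.mystorage.com"
  restuser: admin                                    # assumption
  secretNamespace: store-project1
  secretName: heketi-store-project1-admin-secret
###########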

Comment 21 Humble Chirammal 2017-09-13 18:58:20 UTC
(In reply to Saravanakumar from comment #20)
> Please follow these steps for setting up S3 storage using updated
> gluster-s3-template file.
> 
> 1. Create secret: 
> oc create secret generic heketi-${NAMESPACE}-admin-secret
> --from-literal=key=${ADMIN_KEY} --type=kubernetes.io/glusterfs
> 
> For example:
> oc create secret generic heketi-store-project1-admin-secret
> --from-literal=key=  --type=kubernetes.io/glusterfs
> 
> 2. Label the secret: 
> oc label --overwrite secret heketi-${NAMESPACE}-admin-secret
> glusterfs=s3-heketi-${NAMESPACE}-admin-secret
> gluster-s3=heketi-${NAMESPACE}-admin-secret
> 
> For example:
> oc label --overwrite secret heketi-store-project1-admin-secret
> glusterfs=s3-heketi-store-project1-admin-secret
> gluster-s3=heketi-store-project1-admin-secret
> 
> 
> 3. Create storage class
> cat /usr/share/heketi/templates/gluster-s3-storageclass.yaml  |    sed
> s/'${HEKETI_URL}'/heketi-store-project1.cloudapps.mystorage.com/g |  sed
> s/'${STORAGE_CLASS}'/gluster-s3-store/g | sed
> s/'${NAMESPACE}'/store-project1/g | oc create -f - 
> 
> Use HEKETI_URL and NAMESPACE from the current setup.
> Set a STORAGE_CLASS name.
> 
> 4. Create PVC using storage class
> cat /usr/share/heketi/templates/gluster-s3-pvcs.yaml  |  sed
> s/'${VOLUME_CAPACITY}'/2Gi/g | sed s/'${STORAGE_CLASS}'/gluster-s3-store/g 
> | oc create -f - 
> 
> Adjust VOLUME_CAPACITY as per your needs.
> Use STORAGE_CLASS created from above step.
> 
> 5. Launch S3 storage service
> oc new-app  /usr/share/heketi/templates/gluster-s3-template.yaml 
> --param=S3_ACCOUNT=testvolume  --param=S3_USER=adminuser
> --param=S3_PASSWORD=itsmine --param=PVC=gluster-s3-claim
> --param=META_PVC=gluster-s3-meta-claim
> 
> Set S3_ACCOUNT name, S3_USER name, S3_PASSWORD.
> PVC and META_PVC are obtained from above step.
> 
> 6. Get S3 url for access:
> s3_storage_url=$(oc get routes   | grep "gluster.*s3"  | awk '{print $2}')
> 
> Follow the same documentation for testing.

Based on the above, and because we have notified the documentation team about these changes, I am moving the status of the bug to "ON_QA".

Comment 22 Saravanakumar 2017-09-14 07:51:15 UTC
Updated commands:

Please follow these steps for setting up S3 storage using the updated
gluster-s3-template file.

1. Create secret: 
oc create secret generic heketi-${NAMESPACE}-admin-secret
--from-literal=key=${ADMIN_KEY} --type=kubernetes.io/glusterfs

For example:
oc create secret generic heketi-store-project1-admin-secret
--from-literal=key=  --type=kubernetes.io/glusterfs

2. Label the secret: 
oc label --overwrite secret heketi-${NAMESPACE}-admin-secret
glusterfs=s3-heketi-${NAMESPACE}-admin-secret
gluster-s3=heketi-${NAMESPACE}-admin-secret

For example:
oc label --overwrite secret heketi-store-project1-admin-secret
glusterfs=s3-heketi-store-project1-admin-secret
gluster-s3=heketi-store-project1-admin-secret

The above two steps are optional.

3. Create storage class

Use HEKETI_URL and NAMESPACE from the current setup.
Set a STORAGE_CLASS name.

# sed  -e 's/${HEKETI_URL}/heketi-store-project1.cloudapps.mystorage.com/g'  -e 's/${STORAGE_CLASS}/gluster-s3-store/g'      -e  's/${NAMESPACE}/store-project1/g'   /usr/share/heketi/templates/gluster-s3-storageclass.yaml | oc create -f -  
storageclass "gluster-s3-store" created

4. Create PVC using storage class
# sed -e 's/${VOLUME_CAPACITY}/2Gi/g'  -e  's/${STORAGE_CLASS}/gluster-s3-store/g'  /usr/share/heketi/templates/gluster-s3-pvcs.yaml | oc create -f - 
persistentvolumeclaim "gluster-s3-claim" created
persistentvolumeclaim "gluster-s3-meta-claim" created

Adjust VOLUME_CAPACITY as per your needs.
Use STORAGE_CLASS created from above step.

Wait till the PVCs are bound:

# oc get pvc 
NAME                    STATUS    VOLUME                                     CAPACITY   ACCESSMODES   AGE
gluster-s3-claim        Bound     pvc-0b7f75ef-9920-11e7-9309-00151e000016   2Gi        RWX           2m
gluster-s3-meta-claim   Bound     pvc-0b87a698-9920-11e7-9309-00151e000016   1Gi        RWX           2m

5. Launch S3 storage service

Set S3_ACCOUNT name, S3_USER name, S3_PASSWORD.
PVC and META_PVC are obtained from above step.

# oc new-app  /usr/share/heketi/templates/gluster-s3-template.yaml \
--param=S3_ACCOUNT=testvolume  --param=S3_USER=adminuser \
--param=S3_PASSWORD=itsmine --param=PVC=gluster-s3-claim \
--param=META_PVC=gluster-s3-meta-claim
--> Deploying template "store-project1/gluster-s3" for "/usr/share/heketi/templates/gluster-s3-template.yaml" to project store-project1

     gluster-s3
     ---------
     Gluster s3 service template


     * With parameters:
        * S3 Account Name=testvolume
        * S3 User=adminuser
        * S3 User Password=itsmine
        * Primary GlusterFS-backed PVC=gluster-s3-claim
        * Metadata GlusterFS-backed PVC=gluster-s3-meta-claim

--> Creating resources ...
    service "gluster-s3-service" created
    route "gluster-s3-route" created
    deploymentconfig "gluster-s3-dc" created
--> Success
    Run 'oc status' to view your app.

6. Get S3 url for access:
# s3_storage_url=$(oc get routes   | grep "gluster.*s3"  | awk '{print $2}')

Follow the same documentation for testing.
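
Once the route is up, a minimal reachability check can be done against that URL with plain curl; this is an unauthenticated request that only confirms the service answers, actual bucket/object operations still need S3-signed requests as described in the documentation:

# curl -v http://${s3_storage_url}/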

Comment 23 Prasanth 2017-09-14 08:40:50 UTC
(In reply to Saravanakumar from comment #22)
> Updated commands:
> 
> Please follow these steps for setting up S3 storage using updated
> gluster-s3-template file.
> 
> 1. Create secret: 
> oc create secret generic heketi-${NAMESPACE}-admin-secret
> --from-literal=key=${ADMIN_KEY} --type=kubernetes.io/glusterfs
> 
> For example:
> oc create secret generic heketi-store-project1-admin-secret
> --from-literal=key=  --type=kubernetes.io/glusterfs
> 
> 2. Label the secret: 
> oc label --overwrite secret heketi-${NAMESPACE}-admin-secret
> glusterfs=s3-heketi-${NAMESPACE}-admin-secret
> gluster-s3=heketi-${NAMESPACE}-admin-secret
> 
> For example:
> oc label --overwrite secret heketi-store-project1-admin-secret
> glusterfs=s3-heketi-store-project1-admin-secret
> gluster-s3=heketi-store-project1-admin-secret
> 
> The above two steps are optional.
> 
> 3. Create storage class
> 
> Use HEKETI_URL and NAMESPACE from the current setup.
> Set a STORAGE_CLASS name.
> 
> # sed  -e 's/${HEKETI_URL}/heketi-store-project1.cloudapps.mystorage.com/g' 
> -e 's/${STORAGE_CLASS}/gluster-s3-store/g'      -e 
> 's/${NAMESPACE}/store-project1/g'  
> /usr/share/heketi/templates/gluster-s3-storageclass.yaml | oc create -f -  
> storageclass "gluster-s3-store" created
> 
> 4. Create PVC using storage class
> # sed -e 's/${VOLUME_CAPACITY}/2Gi/g'  -e 
> 's/${STORAGE_CLASS}/gluster-s3-store/g' 
> /usr/share/heketi/templates/gluster-s3-pvcs.yaml | oc create -f - 
> persistentvolumeclaim "gluster-s3-claim" created
> persistentvolumeclaim "gluster-s3-meta-claim" created
> 
> Adjust VOLUME_CAPACITY as per your needs.
> Use STORAGE_CLASS created from above step.
> 
> Wait till PVC is bound : 
> 
> # oc get pvc 
> NAME                    STATUS    VOLUME                                    
> CAPACITY   ACCESSMODES   AGE
> gluster-s3-claim        Bound     pvc-0b7f75ef-9920-11e7-9309-00151e000016  
> 2Gi        RWX           2m
> gluster-s3-meta-claim   Bound     pvc-0b87a698-9920-11e7-9309-00151e000016  
> 1Gi        RWX           2m
> 
> 5. Launch S3 storage service
> 
> Set S3_ACCOUNT name, S3_USER name, S3_PASSWORD.
> PVC and META_PVC are obtained from above step.
> 
> # oc new-app  /usr/share/heketi/templates/gluster-s3-template.yaml \
> --param=S3_ACCOUNT=testvolume  --param=S3_USER=adminuser \
> --param=S3_PASSWORD=itsmine --param=PVC=gluster-s3-claim \
> --param=META_PVC=gluster-s3-meta-claim
> --> Deploying template "store-project1/gluster-s3" for
> "/usr/share/heketi/templates/gluster-s3-template.yaml" to project
> store-project1
> 
>      gluster-s3
>      ---------
>      Gluster s3 service template
> 
> 
>      * With parameters:
>         * S3 Account Name=testvolume
>         * S3 User=adminuser
>         * S3 User Password=itsmine
>         * Primary GlusterFS-backed PVC=gluster-s3-claim
>         * Metadata GlusterFS-backed PVC=gluster-s3-meta-claim
> 
> --> Creating resources ...
>     service "gluster-s3-service" created
>     route "gluster-s3-route" created
>     deploymentconfig "gluster-s3-dc" created
> --> Success
>     Run 'oc status' to view your app.
> 
> 6. Get S3 url for access:
> # s3_storage_url=$(oc get routes   | grep "gluster.*s3"  | awk '{print $2}')
> 
> Follow the same documentation for testing.


Thanks, Saravana, for providing the working deployment steps based on all the latest changes. I'll follow the same steps and validate this bug now. However, do ensure that the old steps are replaced with these new steps in our CNS 3.6 guide as well.

Comment 25 Prasanth 2017-09-14 10:50:47 UTC
Based on Comment 24, marking it as Verified.

Comment 27 errata-xmlrpc 2017-10-11 07:01:45 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2880

