Bug 1480124 - [RFE] Support Block Provisioner and S3 template deployment via cns-deploy tool
Summary: [RFE] Support Block Provisioner and S3 template deployment via cns-deploy tool
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: cns-deploy-tool
Version: cns-3.6
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: low
Target Milestone: ---
Target Release: CNS 3.6
Assignee: Jose A. Rivera
QA Contact: Prasanth
URL:
Whiteboard:
Depends On: 1488122
Blocks: 1445444
 
Reported: 2017-08-10 09:00 UTC by Humble Chirammal
Modified: 2018-12-06 19:45 UTC
CC List: 10 users

Fixed In Version:
Doc Type: Enhancement
Doc Text:
With this update, cns-deploy is now able to deploy the gluster-block provisioner pod and the gluster-s3 service pod.
Clone Of:
Environment:
Last Closed: 2017-10-11 07:12:11 UTC
Embargoed:




Links
System ID | Private | Priority | Status | Summary | Last Updated
Red Hat Product Errata RHEA-2017:2881 | 0 | normal | SHIPPED_LIVE | cns-deploy-tool bug fix and enhancement update | 2017-10-11 11:11:43 UTC

Description Humble Chirammal 2017-08-10 09:00:54 UTC
Description of problem:

At present, to deploy the gluster-block provisioner an admin has to run three oc commands, and one more command is required to deploy the S3 template.

This RFE requests an enhancement to the cns-deploy tool to do this automatically, based on options or switches specified by the admin (see the sketch below).
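For illustration only, the manual flow is roughly the sketch below; every file and template name in it is a hypothetical placeholder, not the actual CNS resource name:

  # gluster-block provisioner: three manual oc commands today
  oc create -f block-provisioner-serviceaccount.yaml   # hypothetical file
  oc create -f block-provisioner-rbac.yaml             # hypothetical file
  oc create -f block-provisioner-deployment.yaml       # hypothetical file

  # gluster-s3 service: one manual command today
  oc process gluster-s3-template | oc create -f -      # hypothetical template name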


Comment 4 Humble Chirammal 2017-08-10 10:10:00 UTC
https://github.com/gluster/gluster-kubernetes/pull/285

Comment 5 Humble Chirammal 2017-08-29 11:12:45 UTC
https://github.com/gluster/gluster-kubernetes/pull/299 -> S3 deployment.

Comment 9 Prasanth 2017-09-15 17:53:21 UTC
As per the cns-deploy --help output, the following are the options newly added to the tool as part of this RFE:

*************************************
  --no-block
              Don't deploy a gluster-block container. Default is to deploy.

  --block-host SIZE
              Specify the size (in GB) of the GlusterFS volumes which will be
              automatically created to host gluster-block volumes. Default is
              to not automatically create host volumes for gluster-block.

  --no-object
              Don't deploy a gluster-s3 container. Default is to deploy.

  --object-account ACCOUNT, --object-user USER, --object-password PASSWORD
              Required credentials for deploying the gluster-s3 container. If
              any of these are missing, object container deployment will be
              skipped.

  --object-sc STORAGE_CLASS
              Specify a pre-existing StorageClass to use to create GlusterFS
              volumes to back the object store. Two volumes are created, one
              for object data and one for metadata. Default is to create a new
              StorageClass called 'glusterfs-for-s3'.

  --object-capacity CAPACITY
              The total capacity of the GlusterFS volume which will store the
              object data. Default is '2Gi'.
*************************************

We need to ensure that we are documenting this properly, with examples such as the one below, in the appropriate section of the guide.
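For instance, a combined invocation could look like the following sketch. The --block-host and --object-* options are taken from the --help output above; the -n NAMESPACE and -g options and the topology.json argument are assumed from standard cns-deploy usage, and all values are placeholders:

*************************************
# Deploy GlusterFS plus the gluster-block provisioner and the gluster-s3 service
cns-deploy -n storage-project -g topology.json \
    --block-host 100 \
    --object-account myaccount \
    --object-user myuser \
    --object-password mypassword \
    --object-capacity 10Gi
*************************************

Per the help text, this creates a 100 GB block-hosting volume for gluster-block and backs the object store with a 10Gi data volume; leaving out any of the three credentials (or passing --no-object) skips the gluster-s3 deployment.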

Comment 11 Prasanth 2017-09-18 12:09:51 UTC
Based on Comment 8, Comment 9 and Comment 10, moving it to Verified.

Comment 13 Ramakrishna Reddy Yekulla 2017-10-04 12:38:35 UTC
I have reviewed the doc text. It looks good to me.

Comment 15 errata-xmlrpc 2017-10-11 07:12:11 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:2881

Comment 16 vinutha 2018-12-06 19:45:29 UTC
Marking qe-test-coverage as '-' since the preferred mode of deployment is using Ansible.

