Bug 1402016 - [RFE] Provide documentation on cns deployment tool
Summary: [RFE] Provide documentation on cns deployment tool
Keywords:
Status: CLOSED CURRENTRELEASE
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: doc-Container_Native_Storage_with_OpenShift
Version: cns-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: CNS 3.4
Assignee: Divya
QA Contact: Prasanth
URL:
Whiteboard:
Duplicates: 1403942
Depends On:
Blocks: 1385252 1404621
 
Reported: 2016-12-06 15:09 UTC by Raghavendra Talur
Modified: 2017-01-23 07:22 UTC
CC List: 11 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-01-23 07:22:54 UTC
Embargoed:
hchiramm: needinfo-



Description Raghavendra Talur 2016-12-06 15:09:47 UTC
Description of problem:
We would like gk-deploy to be shipped as a tech preview in the first release, and the documentation should note this wherever applicable.

Comment 4 Humble Chirammal 2016-12-13 08:11:16 UTC
Divya, Jose (jrivera) will be providing the doc. However, you could kickstart a Google doc starting from here: https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md

Comment 5 Jose A. Rivera 2016-12-13 16:46:25 UTC
*** Bug 1403942 has been marked as a duplicate of this bug. ***

Comment 14 Divya 2016-12-21 09:59:00 UTC
Humble,

Could you please confirm if we support both the manual steps and the cns-tool for the CNS 3.4 release?

Thanks!

Comment 15 Humble Chirammal 2016-12-21 10:01:30 UTC
(In reply to Divya from comment #14)
> Humble,
> 
> Could you please confirm if we support both the manual steps and the
> cns-tool for the CNS 3.4 release?
> 
> Thanks!

Yes, as this is the first release, the plan is to support both manual and tool deployment. To double confirm I am placing a needinfo on Program management.

Comment 16 Humble Chirammal 2016-12-21 11:03:43 UTC
Section 4.2.3:
In the example output, add one more line:
service account "heketi-service-account" created

Section 4.2.5:

Verify that the service account called 'heketi-service-account' has been created:

# oc get serviceaccount

[root@master ~]# oc get serviceaccount
NAME                     SECRETS   AGE
builder                  2         13d
default                  2         13d
deployer                 2         13d
heketi-service-account   2         6d
router                   2         13d
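
For a narrower check, the single service account can also be queried directly (a minimal illustration; the AGE value will differ per setup):

# oc get serviceaccount heketi-service-account
NAME                     SECRETS   AGE
heketi-service-account   2         6d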


Section 4.3.2:

Change the section below:

--snip--

Deploy a Red Hat Gluster Storage container on a node by executing the following command:

# oc process glusterfs -v GLUSTERFS_NODE=<node_hostname> | oc create -f -

For example:

# oc process glusterfs -v GLUSTERFS_NODE=node1.example.com | oc create -f -

deploymentconfig "glusterfs-dc-node1.example.com" created

--/snip--

We make use of the DaemonSet feature for the Red Hat Gluster Storage containers. To configure the DaemonSet deployment, we first need to label the nodes that you selected as storage nodes for the Gluster container deployment.

To label the storage nodes, execute the following command for each storage node:

# oc label node <NODENAME> storagenode=glusterfs

Verify that the label has been set properly:

# oc get nodes
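
For illustration, the labels can also be displayed explicitly (the node name and AGE shown here are hypothetical):

# oc get nodes --show-labels
NAME                STATUS    AGE       LABELS
node1.example.com   Ready     13d       kubernetes.io/hostname=node1.example.com,storagenode=glusterfs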

Then deploy the gluster pods:


# oc process glusterfs | oc create -f - 

In the NOTE section, remove the line below:

This command deploys a single Red Hat Gluster Storage container on the node. 

Section 4.3.3:
# oc process deploy-heketi -v \
    HEKETI_KUBE_NAMESPACE=<Project name> \
    HEKETI_KUBE_APIHOST='<OpenShift master endpoint address>' \
    HEKETI_KUBE_INSECURE=y \
    HEKETI_KUBE_SECRETNAME=<heketi_secret>| oc create -f -

Where 'heketi_secret' should be obtained as follows:


[root@master ~]# oc describe sa heketi-service-account
Name:		heketi-service-account
Namespace:	aplo
Labels:		<none>

Image pull secrets:	heketi-service-account-dockercfg-hk85k

Mountable secrets: 	heketi-service-account-dockercfg-hk85k
                   	heketi-service-account-token-nljgt

Tokens:            	heketi-service-account-token-nljgt
                   	heketi-service-account-token-nnrgm

Under mountable secrets, fetch the secret of type 'service-account-token' for heketi-service-account:

[root@master ~]# oc get secret/heketi-service-account-token-nljgt -o="go-template" --template="{{.type}}"
kubernetes.io/service-account-token
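
For illustration, plugging this setup's values into the section 4.3.3 command (the master endpoint address below is hypothetical), the invocation would look like:

# oc process deploy-heketi -v \
    HEKETI_KUBE_NAMESPACE=aplo \
    HEKETI_KUBE_APIHOST='https://master.example.com:8443' \
    HEKETI_KUBE_INSECURE=y \
    HEKETI_KUBE_SECRETNAME=heketi-service-account-token-nljgt | oc create -f -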

Section 4.3.4: the output needs to be captured again.

Section 4.4.7:

# oc process heketi -v \
    HEKETI_KUBE_NAMESPACE=<Project name> \
    HEKETI_KUBE_APIHOST='<OpenShift master endpoint address>' \
    HEKETI_KUBE_INSECURE=y \
    HEKETI_KUBE_SECRETNAME=<heketi_secret>| oc create -f -

Comment 18 Rejy M Cyriac 2016-12-22 10:00:41 UTC
(In reply to Humble Chirammal from comment #15)
> (In reply to Divya from comment #14)
> > Humble,
> > 
> > Could you please confirm if we support both the manual steps and the
> > cns-tool for the CNS 3.4 release?
> > 
> > Thanks!
> 
> Yes, as this is the first release, the plan is to support both manual and
> tool deployment. To double confirm I am placing a needinfo on Program
> management.

Yes, both the existing manual and the new automated deployment models are to be supported and documented at the CNS 3.4 release

Comment 21 Humble Chirammal 2016-12-23 11:36:43 UTC
Here are a few changes which I can think of:

3.2.5. Planning Guidelines:

Change from: 
...
by default the heketidb utilises 32 GB distributed replica volume
...

To:
by default the heketidb utilises 2 GB distributed replica volume

Change 

....
Execute the following command on the client to deploy the heketi and Red Hat Gluster Storage pods:

# cns-deploy -n <name> -g topology.json
.....

to 

Execute the following command on the client to deploy the heketi and Red Hat Gluster Storage pods:

# cns-deploy -n <namespace> -g topology.json
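
For illustration (assuming the guide's project name 'storage-project' and the default topology file name), the command would be run as:

# cns-deploy -n storage-project -g topology.json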

Comment 23 Mohamed Ashiq 2016-12-26 07:16:02 UTC
Hi Divya,

As we discussed, please change the following:

The output was taken on my setup, which had the project name "aplo". As our guide follows the project name "storage-project", please change all references of "aplo" to "storage-project".

Also, the new changes in the document reference the project name as "<Project>" or "<Project name>", while all the old references use "<project_name>". Please change all references of "<Project>" or "<Project name>" to "<project_name>".

In A.2, step 4, change the command to:

 # oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account
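
For illustration, with the guide's project name 'storage-project' substituted in, this reads:

 # oc policy add-role-to-user edit system:serviceaccount:storage-project:heketi-service-account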

Thanks,
Ashiq

Comment 24 Divya 2016-12-26 07:45:00 UTC
(In reply to Mohamed Ashiq from comment #23)
> Hi Divya,
> 
> As we discussed, please change the following:
> 
> The output was taken on my setup, which had the project name "aplo". As our
> guide follows the project name "storage-project", please change all
> references of "aplo" to "storage-project".
> 
> Also, the new changes in the document reference the project name as
> "<Project>" or "<Project name>", while all the old references use
> "<project_name>". Please change all references of "<Project>" or
> "<Project name>" to "<project_name>".
> 
> In A.2, step 4, change the command to:
> 
>  # oc policy add-role-to-user edit
> system:serviceaccount:<project_name>:heketi-service-account
> 
> Thanks,
> Ashiq

Hi Ashiq,

I have made the changes suggested in comment 23.

Link to the draft doc: http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.4-Container_Native_Storage_with_OpenShift_Platform-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html

Cheers,
Divya

Comment 27 krishnaram Karthick 2017-01-09 10:18:44 UTC
Two quick comments from my side on this bug:

1) Make a note under the cns-deploy section suggesting: "The cns-deploy tool doesn't support scale-up for this release. Please use the manual steps to scale up the cluster."

We should point to section 7 for the manual scale-up steps.

2) Also, before section 7.1.2.1 and section 7.1.3.1, add a point mentioning:

"Gluster pods have to be configured before proceeding with the following steps. Please refer to Appendix A-2 to manually deploy gluster pods."

Humble, can you please check if the above changes look good?

Comment 28 Humble Chirammal 2017-01-10 12:04:05 UTC
(In reply to krishnaram Karthick from comment #27)
> Two quick comments from my side on this bug:
> 
> 1) Make a note under the cns-deploy section suggesting: "The cns-deploy tool
> doesn't support scale-up for this release. Please use the manual steps to
> scale up the cluster."
> 
> We should point to section 7 for the manual scale-up steps.
> 
> 2) Also, before section 7.1.2.1 and section 7.1.3.1, add a point mentioning:
> 
> "Gluster pods have to be configured before proceeding with the following
> steps. Please refer to Appendix A-2 to manually deploy gluster pods."
> 
> Humble, can you please check if the above changes look good?

LGTM.

Comment 29 Prasanth 2017-01-10 14:25:53 UTC
Divya, I don't see that the changes Karthick suggested in Comment 27 (and acknowledged in Comment 28) have been addressed in the latest doc. Could you please check and confirm from your end?

Comment 30 Prasanth 2017-01-10 14:37:56 UTC
Since cns-deploy is a deployment tool newly introduced in this release for CNS deployment, the doc should ideally contain a brief or detailed technical explanation of the tool along with its various features and available options. However, I don't see anything like that about this tool in the latest doc. Was that left out intentionally?

Last but not least, I believe that providing the '--help' output of the tool would also help users.

Anyway, I'll paste its output in the next comment for your reference.

Comment 31 Prasanth 2017-01-10 14:38:28 UTC
# cns-deploy --help
USAGE: cns-deploy [-ghv] [-c CLI] [-t <TEMPLATES>] [-n NAMESPACE] [-w <SECONDS>] [--load]
       [-l <LOG_FILE>] [<TOPOLOGY>]

This is a utility script for deploying heketi (and optionally GlusterFS) in a
Kubernetes environment.

Arguments:
  TOPOLOGY    Path to a JSON-formatted file containing the initial topology
              information for the storage heketi will manage.
              Default is 'topology.json'.

Options:
  -g, --deploy-gluster
              Deploy GlusterFS pods on the nodes in the topology that contain
              brick devices. If the --abort flag is also specified, this flag
              indicates that all GlusterFS pods and deployments should be
              deleted as well. Default is to not handle GlusterFS deployment
              or removal.

  -c CLI, --cli CLI
              Specify the container platform CLI (e.g. kubectl, oc) to use.
              Default behavior is to auto-detect the installed CLI.

  -t TEMPLATES, --templates_dir TEMPLATES
              Location of directory containing the heketi templates for the
              various resources. Defaults are:
                * For Kubernetes: './kube-templates'.
                * For OpenShift: '/usr/share/heketi/templates'.

  -n NAMESPACE, --namespace NAMESPACE
              The namespace to use for creating resources.
              Defaults is ''.

  -w SECONDS, --wait SECONDS
              Wait SECONDS seconds for pods to become ready. Default is '300'.

  --load      Resumes from the Topology load. Used in case of topology load
              failure on adding nodes or devices to heketi. Fix the issue on
              device or node, then restart with --load to skip gluster
              deployment and re-run topology load command.

  -y, --yes
              Skip the pre-requisites prompt.

  -l LOG_FILE, --log-file LOG_FILE
              Save all output to the specified file.

  --abort     Abort a deployment. WARNING: Deletes all related resources.

  -h, --help  Output this help message.

  -v, --verbose
              Verbose output
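
For reference, a typical invocation based on the options above would look something like this (the namespace, wait time, and log file path are illustrative):

# cns-deploy -g -n storage-project -w 600 -l /tmp/cns-deploy.log topology.json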

Comment 32 Michael Adam 2017-01-10 16:28:29 UTC
Please don't put the whole output of the help into the doc, in order to reduce maintenance overhead. Just mention to run "cns-deploy -h".

Also the original intent of the BZ is imho fixed. We can't put more and more stuff into this BZ. Further suggestions like comment #27 and comment #30 could be treated as doc RFEs later, possibly async.

Comment 33 Divya 2017-01-11 06:17:38 UTC
(In reply to Michael Adam from comment #32)
> Please don't put the whole output of the help into the doc, in order to
> reduce maintenance overhead. Just mention to run "cns-deploy -h".
> 
> Also the original intent of the BZ is imho fixed. We can't put more and more
> stuff into this BZ. Further suggestions like comment #27 and comment #30
> could be treated as doc RFEs later, possibly async.

I agree with Michael's suggestion of treating comment #27 and comment #30 as doc RFEs. 

Prasanth, Karthick, 

Could you share your thoughts?

Comment 34 Prasanth 2017-01-11 08:17:06 UTC
(In reply to Divya from comment #33)
> (In reply to Michael Adam from comment #32)
> > Please don't put the whole output of the help into the doc, in order to
> > reduce maintenance overhead. Just mention to run "cns-deploy -h".
> > 
> > Also the original intent of the BZ is imho fixed. We can't put more and more
> > stuff into this BZ. Further suggestions like comment #27 and comment #30
> > could be treated as doc RFEs later, possibly async.
> 
> I agree with Michael's suggestion of treating comment #27 and comment #30 as
> doc RFEs. 
> 
> Prasanth, Karthick, 
> 
> Could you share your thoughts?

Divya/Anjana,

I heard that a decision was already taken in yesterday's program team meeting for addressing Comment 30. In that case, you can go ahead with that decision.

Comment 40 Prasanth 2017-01-11 10:56:49 UTC
The following is now added in the doc:

######## 
For more information on the cns-deploy commands, refer to the man page of the cns-deploy.

# cns-deploy --help
########

Moving it to Verified.

