Description of problem: We would like to have gk-deploy shipped as tech preview in the first release, and the documentation should make a note of this wherever applicable.
Divya, Jose (jrivera) will be providing the doc. However, you could kickstart a Google doc starting from here: https://github.com/gluster/gluster-kubernetes/blob/master/docs/setup-guide.md
*** Bug 1403942 has been marked as a duplicate of this bug. ***
Humble, Could you please confirm if we support both the manual steps and the cns-tool for the CNS 3.4 release? Thanks!
(In reply to Divya from comment #14)
> Humble,
>
> Could you please confirm if we support both the manual steps and the
> cns-tool for the CNS 3.4 release?
>
> Thanks!

Yes, as this is the first release, the plan is to support both manual and tool deployment. To double-confirm I am placing a needinfo on Program Management.
Section 4.2.3: In the example output, add one more line:

  service account "heketi-service-account" created

Section 4.2.5: Verify that the service account called 'heketi-service-account' was created:

  # oc get serviceaccount

  [root@master ~]# oc get serviceaccount
  NAME                     SECRETS   AGE
  builder                  2         13d
  default                  2         13d
  deployer                 2         13d
  heketi-service-account   2         6d
  router                   2         13d

Section 4.3.2: Change the following section:

  --snip--
  Deploy a Red Hat Gluster Storage container on a node by executing the
  following command:

  # oc process glusterfs -v GLUSTERFS_NODE=<node_hostname> | oc create -f -

  For example:

  # oc process glusterfs -v GLUSTERFS_NODE=node1.example.com | oc create -f -
  deploymentconfig "glusterfs-dc-node1.example.com" created
  --/snip--

We make use of the DaemonSet feature for the Gluster Storage containers. To configure the DaemonSet deployment, we first need to label the nodes selected as storage nodes for the Gluster container deployment. To label the storage nodes, execute the following command for each storage node:

  # oc label node <NODENAME> storagenode=glusterfs

Verify that the label has been set properly:

  # oc get nodes

Then deploy the Gluster pods:

  # oc process glusterfs | oc create -f -

In the NOTE section, remove the following line:

  This command deploys a single Red Hat Gluster Storage container on the node.

Section 4.3.3:

  # oc process deploy-heketi -v \
    HEKETI_KUBE_NAMESPACE=<Project name> \
    HEKETI_KUBE_APIHOST='<OpenShift master endpoint address>' \
    HEKETI_KUBE_INSECURE=y \
    HEKETI_KUBE_SECRETNAME=<heketi_secret> | oc create -f -

Where 'heketi_secret' should be obtained as follows:

  [root@master ~]# oc describe sa heketi-service-account
  Name:                   heketi-service-account
  Namespace:              aplo
  Labels:                 <none>

  Image pull secrets:     heketi-service-account-dockercfg-hk85k

  Mountable secrets:      heketi-service-account-dockercfg-hk85k
                          heketi-service-account-token-nljgt

  Tokens:                 heketi-service-account-token-nljgt
                          heketi-service-account-token-nnrgm

Under mountable secrets, fetch the 'service-account-token' type secret for heketi-service-account:

  [root@master ~]# oc get secret/heketi-service-account-token-nljgt -o="go-template" --template="{{.type}}"
  kubernetes.io/service-account-token

The output in section 4.3.4 needs to be captured again.

Section 4.4.7:

  # oc process heketi -v \
    HEKETI_KUBE_NAMESPACE=<Project name> \
    HEKETI_KUBE_APIHOST='<OpenShift master endpoint address>' \
    HEKETI_KUBE_INSECURE=y \
    HEKETI_KUBE_SECRETNAME=<heketi_secret> | oc create -f -
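The labeling and secret-lookup steps above can be expressed as a small script. The following is a dry-run sketch only: the `oc` commands are echoed rather than executed, and the node names and secret list are placeholders taken from the example output, not values from a real cluster.

```shell
#!/bin/sh
# Dry-run sketch of the manual-deployment steps above. The 'oc' commands are
# echoed, not executed; NODES and SECRETS are placeholders from the examples.
set -eu

NODES="node1.example.com node2.example.com node3.example.com"

# Label each storage node so the glusterfs DaemonSet schedules a pod on it.
for node in $NODES; do
  echo "oc label node $node storagenode=glusterfs"
done

# Deploy the Gluster pods from the DaemonSet-based template.
echo "oc process glusterfs | oc create -f -"

# From the mountable secrets of heketi-service-account, pick the
# service-account-token type secret to pass as HEKETI_KUBE_SECRETNAME.
SECRETS="heketi-service-account-dockercfg-hk85k heketi-service-account-token-nljgt"
SECRETNAME=""
for s in $SECRETS; do
  case "$s" in
    *-token-*) SECRETNAME="$s" ;;
  esac
done
echo "HEKETI_KUBE_SECRETNAME=$SECRETNAME"
```

On a live cluster you would run the echoed commands directly and read the token secret name from `oc describe sa heketi-service-account` instead of a hard-coded list.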
(In reply to Humble Chirammal from comment #15)
> (In reply to Divya from comment #14)
> > Humble,
> >
> > Could you please confirm if we support both the manual steps and the
> > cns-tool for the CNS 3.4 release?
> >
> > Thanks!
>
> Yes, as this is the first release, the plan is to support both manual and
> tool deployment. To double confirm I am placing a needinfo on Program
> management.

Yes, both the existing manual and the new automated deployment models are to be supported and documented for the CNS 3.4 release.
Here are the few changes which I can think of:

Section 3.2.5, Planning Guidelines:

Change from:
  ... by default the heketidb utilises 32 GB distributed replica volume ...
To:
  ... by default the heketidb utilises 2 GB distributed replica volume ...

Change:
  Execute the following command on the client to deploy the heketi and
  Red Hat Gluster Storage pods:
  # cns-deploy -n <name> -g topology.json
to:
  Execute the following command on the client to deploy the heketi and
  Red Hat Gluster Storage pods:
  # cns-deploy -n <namespace> -g topology.json
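The `cns-deploy` invocation above takes a topology.json describing the storage nodes heketi will manage. As a hedged illustration, the sketch below writes a minimal single-node topology in the heketi topology format and echoes (rather than runs) the deploy command; the hostname, IP address, device path, and namespace are all made-up placeholders.

```shell
#!/bin/sh
# Sketch only: hostname, IP, device, and namespace below are hypothetical
# placeholders; a real topology lists every storage node and brick device.
set -eu

cat > topology.json <<'EOF'
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["node1.example.com"],
              "storage": ["192.168.121.101"]
            },
            "zone": 1
          },
          "devices": ["/dev/sdb"]
        }
      ]
    }
  ]
}
EOF

# Deploy heketi and the Gluster pods into the given namespace (echoed, not run).
echo "cns-deploy -n storage-project -g topology.json"
```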
Incorporated the suggestions listed in comments 20 and 21.

Link to the cns-deploy section:
http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.4-Container_Native_Storage_with_OpenShift_Platform-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#idm139651239231872

Link to the updated Manual Deployment section:
http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.4-Container_Native_Storage_with_OpenShift_Platform-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html#chap-Documentation-Red_Hat_Gluster_Storage_Container_Native_with_OpenShift_Platform-Manual_Deployment

Moving the bug ON_QA.
Hi Divya,

As we discussed, please change the following:

The output was taken on my setup, which used the project name "aplo". As our guide uses the project name "storage-project", please change all references to "aplo" to "storage-project".

Also, the new changes in the document reference the project name as "<Project>" or "<Project name>", while all the old references use "<project_name>". Please change all references of "<Project>" or "<Project name>" to "<project_name>".

In A.2, step 4, change the command to:

  # oc policy add-role-to-user edit system:serviceaccount:<project_name>:heketi-service-account

Thanks,
Ashiq
(In reply to Mohamed Ashiq from comment #23)
> Hi Divya,
>
> As We discussed Please change the following:
>
> The output was taken in my setup which had project name "aplo" as our guide
> follows project name "storage-project" please change all the references of
> "aplo" to "storage-project"
>
> Also new changes in document has project name reference as "<Project>" or
> "<Project name>" all the old references has "<project_name>". Please change
> all references of "<Project>" or "<Project name>" to "<project_name>".
>
> In A.2 step 4.
>
> Change command to
> # oc policy add-role-to-user edit
> system:serviceaccount:<project_name>:heketi-service-account
>
> Thanks,
> Ashiq

Hi Ashiq,

I have made the changes suggested in comment 23.

Link to the draft doc:
http://ccs-jenkins.gsslab.brq.redhat.com:8080/job/doc-Red_Hat_Gluster_Storage-3.4-Container_Native_Storage_with_OpenShift_Platform-branch-master/lastSuccessfulBuild/artifact/tmp/en-US/html-single/index.html

Cheers,
Divya
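The A.2 step 4 command discussed above embeds the project name in a fixed service-account principal format. A small sketch of how that principal is composed (the project name is a placeholder, and the `oc` command is echoed rather than executed):

```shell
#!/bin/sh
# Sketch: build the service-account principal for the add-role-to-user command.
# PROJECT is a placeholder for the actual project name; the command is echoed.
set -eu
PROJECT=storage-project
PRINCIPAL="system:serviceaccount:${PROJECT}:heketi-service-account"
echo "oc policy add-role-to-user edit $PRINCIPAL"
```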
Two quick comments from my side on this bug:

1) Add a note under the cns-deploy section stating: "The cns-deploy tool doesn't support scale-up for this release. Please use the manual steps to scale up the cluster." We should also give a pointer to the manual scale-up procedure in section 7.

2) Before sections 7.1.2.1 and 7.1.3.1, add a point mentioning: "Gluster pods have to be configured before proceeding with the following steps. Please refer to Appendix A-2 to manually deploy Gluster pods."

Humble, can you please check if the above changes look good?
(In reply to krishnaram Karthick from comment #27)
> Two quick comments from my side on this bug:
>
> 1) Make a note under cns-deploy section suggesting "cns-deploy tool doesn't
> support scale up for this release. Please use manual steps to scale-up the
> cluster"
>
> We should be giving a pointer to manually scale-up to section 7.
>
> 2) Also before section 7.1.2.1 and section 7.1.3.1, add a point mentioning,
>
> "gluster pods have to be configured before proceeding with the following
> steps. Please refer Appendix A-2 to manually deploy gluster pods"
>
> Humble, can you please check if the above changes looks good.

LGTM.
Divya, I don't see that comment 28 provided by Karthick is addressed in the latest doc. Could you please check and confirm from your end?
Since the cns-deploy tool is newly introduced in this release for CNS deployment, the doc should ideally contain a brief or detailed technical explanation of the tool, along with its various features and available options. However, I don't see anything like that about this tool in the latest doc. Was that left out intentionally? Last but not least, I believe providing the '--help' output of the tool would also help users. Anyway, I'll paste its output in the next comment for your reference.
# cns-deploy --help
USAGE: cns-deploy [-ghv] [-c CLI] [-t <TEMPLATES>] [-n NAMESPACE] [-w <SECONDS>] [--load] [-l <LOG_FILE>] [<TOPOLOGY>]

This is a utility script for deploying heketi (and optionally GlusterFS) in a
Kubernetes environment.

Arguments:
  TOPOLOGY    Path to a JSON-formatted file containing the initial topology
              information for the storage heketi will manage.
              Default is 'topology.json'.

Options:
  -g, --deploy-gluster
              Deploy GlusterFS pods on the nodes in the topology that contain
              brick devices. If the --abort flag is also specified, this flag
              indicates that all GlusterFS pods and deployments should be
              deleted as well. Default is to not handle GlusterFS deployment
              or removal.
  -c CLI, --cli CLI
              Specify the container platform CLI (e.g. kubectl, oc) to use.
              Default behavior is to auto-detect the installed CLI.
  -t TEMPLATES, --templates_dir TEMPLATES
              Location of directory containing the heketi templates for the
              various resources. Defaults are:
              * For Kubernetes: './kube-templates'.
              * For OpenShift: '/usr/share/heketi/templates'.
  -n NAMESPACE, --namespace NAMESPACE
              The namespace to use for creating resources.
              Default is ''.
  -w SECONDS, --wait SECONDS
              Wait SECONDS seconds for pods to become ready.
              Default is '300'.
  --load      Resumes from the Topology load. Used in case of topology load
              failure on adding nodes or devices to heketi. Fix the issue on
              device or node, then restart with --load to skip gluster
              deployment and re-run the topology load command.
  -y, --yes   Skip the pre-requisites prompt.
  -l LOG_FILE, --log-file LOG_FILE
              Save all output to the specified file.
  --abort     Abort a deployment. WARNING: Deletes all related resources.
  -h, --help  Output this help message.
  -v, --verbose
              Verbose output
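For reference, a few common invocations can be assembled from the flags documented in that help text. This is a dry-run sketch: the commands are echoed rather than executed, only flags shown in the --help output above are used, and 'storage-project' is a placeholder namespace.

```shell
#!/bin/sh
# Dry-run sketch of common cns-deploy invocations, built only from flags
# documented in the --help text above; 'storage-project' is a placeholder.
set -eu

# Initial deployment of heketi plus GlusterFS pods, skipping the prompt:
DEPLOY="cns-deploy -n storage-project -g -y topology.json"

# Resume after a failed topology load (per the --load description):
RESUME="cns-deploy -n storage-project --load topology.json"

# Tear everything down, including GlusterFS pods (per --abort with -g):
ABORT="cns-deploy -n storage-project -g --abort topology.json"

echo "$DEPLOY"
echo "$RESUME"
echo "$ABORT"
```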
Please don't put the whole output of the help into the doc, in order to reduce maintenance overhead. Just mention to run "cns-deploy -h". Also the original intent of the BZ is imho fixed. We can't put more and more stuff into this BZ. Further suggestions like comment #27 and comment #30 could be treated as doc RFEs later, possibly async.
(In reply to Michael Adam from comment #32)
> Please don't put the whole output of the help into the doc, in order to
> reduce maintenance overhead. Just mention to run "cns-deploy -h".
>
> Also the original intent of the BZ is imho fixed. We can't put more and more
> stuff into this BZ. Further suggestions like comment #27 and comment #30
> could be treated as doc RFEs later, possibly async.

I agree with Michael's suggestion of treating comment #27 and comment #30 as doc RFEs.

Prasanth, Karthick,

Could you share your thoughts?
(In reply to Divya from comment #33)
> (In reply to Michael Adam from comment #32)
> > Please don't put the whole output of the help into the doc, in order to
> > reduce maintenance overhead. Just mention to run "cns-deploy -h".
> >
> > Also the original intent of the BZ is imho fixed. We can't put more and
> > more stuff into this BZ. Further suggestions like comment #27 and comment
> > #30 could be treated as doc RFEs later, possibly async.
>
> I agree with Michael's suggestion of treating comment #27 and comment #30 as
> doc RFEs.
>
> Prasanth, Karthick,
>
> Could you share your thoughts?

Divya/Anjana, I heard that a decision had already been taken in yesterday's program team meeting for addressing comment 30. In that case, you can go ahead with that decision.
The following is now added in the doc:

########
For more information on the cns-deploy commands, refer to the man page of cns-deploy.

# cns-deploy --help
########

Moving it to Verified.