Bug 1436136 - cns-deploy tool is broken
Summary: cns-deploy tool is broken
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: cns-deploy-tool
Version: cns-3.5
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: CNS 3.5
Assignee: Mohamed Ashiq
QA Contact: krishnaram Karthick
Depends On:
Blocks: 1415600
Reported: 2017-03-27 09:28 UTC by krishnaram Karthick
Modified: 2018-12-06 19:33 UTC
CC: 6 users

Fixed In Version: cns-deploy-4.0.0-9.el7rhgs
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2017-04-20 18:28:56 UTC
Target Upstream Version:

Attachments
cnsdeploy-log (20.86 KB, text/plain)
2017-03-27 09:28 UTC, krishnaram Karthick

System ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:1112 normal SHIPPED_LIVE cns-deploy-tool bug fix and enhancement update 2017-04-20 22:25:47 UTC

Description krishnaram Karthick 2017-03-27 09:28:27 UTC
Created attachment 1266587 (cnsdeploy-log)

Description of problem:

cns-deploy tool is broken in the build - cns-deploy-4.0.0-8.el7rhgs.x86_64

The console output is pasted below.

[root@dhcp46-202 ~]# cns-deploy -n storage-project -g topology-sample.json --verbose -l /var/log/cns-deploy.log
Welcome to the deployment tool for GlusterFS on Kubernetes and OpenShift.

Before getting started, this script has some requirements of the execution
environment and of the container platform that you should verify.

The client machine that will run this script must have:
 * Administrative access to an existing Kubernetes or OpenShift cluster
 * Access to a python interpreter 'python'
 * Access to the heketi client 'heketi-cli'

Each of the nodes that will host GlusterFS must also have appropriate firewall
rules for the required GlusterFS ports:
 * 2222  - sshd (if running GlusterFS in a pod)
 * 24007 - GlusterFS Daemon
 * 24008 - GlusterFS Management
 * 49152 to 49251 - Each brick for every volume on the host requires its own
   port. For every new brick, one new port will be used starting at 49152. We
   recommend a default range of 49152-49251 on each host, though you can adjust
   this to fit your needs.

In addition, for an OpenShift deployment you must:
 * Have 'cluster_admin' role on the administrative account doing the deployment
 * Add the 'default' and 'router' Service Accounts to the 'privileged' SCC
 * Have a router deployed that is configured to allow apps to access services
   running in the cluster

Do you wish to proceed with deployment?

[Y]es, [N]o? [Default: Y]: y
Using OpenShift CLI.
NAME              STATUS    AGE
storage-project   Active    24m
Using namespace "storage-project".
Checking that heketi pod is not running ... 
Checking status of pods matching 'glusterfs=heketi-pod':
No resources found.
Timed out waiting for pods matching 'glusterfs=heketi-pod'.
template "deploy-heketi" created
serviceaccount "heketi-service-account" created
template "heketi" created
template "glusterfs" created
role "edit" added: "system:serviceaccount:storage-project:heketi-service-account"
Marking 'dhcp46-165.lab.eng.blr.redhat.com' as a GlusterFS node.
node "dhcp46-165.lab.eng.blr.redhat.com" labeled
Marking 'dhcp47-21.lab.eng.blr.redhat.com' as a GlusterFS node.
node "dhcp47-21.lab.eng.blr.redhat.com" labeled
Marking 'dhcp47-51.lab.eng.blr.redhat.com' as a GlusterFS node.
node "dhcp47-51.lab.eng.blr.redhat.com" labeled
Deploying GlusterFS pods.
daemonset "glusterfs" created
Waiting for GlusterFS pods to start ... 
Checking status of pods matching 'glusterfs-node=pod':
glusterfs-1zs3l   1/1       Running   0         3m
glusterfs-4jjz5   1/1       Running   0         3m
glusterfs-w812j   1/1       Running   0         3m
Flag --value has been deprecated, Use -p, --param instead.
Flag --value has been deprecated, Use -p, --param instead.
service "deploy-heketi" created
route "deploy-heketi" created
deploymentconfig "deploy-heketi" created
Waiting for deploy-heketi pod to start ... 
Checking status of pods matching 'glusterfs=heketi-pod':
deploy-heketi-1-8gk48   1/1       Running   0         1m
Determining heketi service URL ... OK
Error: unknown command "load" for "heketi-cli"
Run 'heketi-cli --help' for usage.
heketi topology loaded.
heketi-cli [flags]
heketi-cli [command]

$ export HEKETI_CLI_SERVER=http://localhost:8080
$ heketi-cli volume list

Available Commands:
cluster Heketi cluster management
device Heketi device management
node Heketi Node Management
setup-openshift-heketi-storage Setup OpenShift/Kubernetes persistent storage for Heketi
topology Heketi Topology Management
volume Heketi Volume Management

Print response as JSON
--secret string
Secret key for specified user. Can also be
set using the environment variable HEKETI_CLI_KEY
-s, --server string
Heketi server. Can also be set using the
environment variable HEKETI_CLI_SERVER
--user string
Heketi user. Can also be set using the
environment variable HEKETI_CLI_USER
-v, --version
Print version

Use "heketi-cli [command] --help" for more information about a command.
heketi-storage.json file not found
[root@dhcp46-202 ~]#

oc get pods
NAME                             READY     STATUS    RESTARTS   AGE
deploy-heketi-1-8gk48            1/1       Running   0          14m
glusterfs-1zs3l                  1/1       Running   0          17m
glusterfs-4jjz5                  1/1       Running   0          17m
glusterfs-w812j                  1/1       Running   0          17m
storage-project-router-1-l68vj   1/1       Running   0          42m

Version-Release number of selected component (if applicable):
cns-deploy-4.0.0-8.el7rhgs.x86_64
How reproducible:

Steps to Reproduce:
1. Configure an OpenShift cluster and run cns-deploy.

Actual results:
cns-deploy fails after setting up the deploy-heketi pod.

Expected results:
cns-deploy should set up CNS successfully.

Additional info:

Comment 2 Mohamed Ashiq 2017-03-27 10:16:57 UTC
Patch merged upstream:


RCA:

# heketi-cli --user admin --secret "" topology load --json=topology-sample.json

is the correct command. The tool instead ran:

# heketi-cli --user admin --secret topology load --json=topology-sample.json

The missing "" after --secret causes "topology" to be consumed as the value of --secret, so heketi-cli sees "load" as an unknown top-level command (hence the 'unknown command "load"' error and the help text dumped in the console output above).
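The failure mode above can be illustrated with a minimal shell sketch. The parse function below is a hypothetical stand-in for any CLI whose options take values; it is not heketi-cli's actual code.

```shell
# Hypothetical demo of how a value-taking option with no argument
# swallows the next word on the command line.
parse() {
  while [ $# -gt 0 ]; do
    case "$1" in
      --user)   user="$2";   shift 2 ;;
      --secret) secret="$2"; shift 2 ;;
      *)        break ;;
    esac
  done
  echo "secret=<$secret> command=<$*>"
}

# Broken: --secret has no value, so it eats "topology" and the
# tool is left with only "load", an unknown command.
parse --user admin --secret topology load
# prints: secret=<topology> command=<load>

# Fixed: the explicit "" keeps the argument list aligned.
parse --user admin --secret "" topology load
# prints: secret=<> command=<topology load>
```

This is why the fix in cns-deploy-4.0.0-9 is simply to quote the empty secret explicitly.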

Comment 6 krishnaram Karthick 2017-03-28 04:46:43 UTC
Verified the bug in build cns-deploy-4.0.0-9.el7rhgs.x86_64

The issue reported in this bug is no longer seen.

Comment 7 errata-xmlrpc 2017-04-20 18:28:56 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.


Comment 8 vinutha 2018-12-06 19:33:24 UTC
Marking qe-test-coverage as '-' since the preferred mode of deployment is now Ansible-based.
