Bug 1434426 - cns-deploy should not abort if setting up of heketi pod fails
Summary: cns-deploy should not abort if setting up of heketi pod fails
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: CNS-deployment
Version: cns-3.5
Hardware: Unspecified
OS: Unspecified
Target Milestone: ---
Target Release: CNS 3.5
Assignee: Mohamed Ashiq
QA Contact: Tejas Chaphekar
Depends On:
Blocks: 1415600
Reported: 2017-03-21 13:39 UTC by krishnaram Karthick
Modified: 2018-12-14 11:19 UTC
CC List: 12 users

Fixed In Version: cns-deploy-4.0.0-6.el7rhgs
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Last Closed: 2017-04-20 18:28:25 UTC
Target Upstream Version:

Attachments

System ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:1112 normal SHIPPED_LIVE cns-deploy-tool bug fix and enhancement update 2017-04-20 22:25:47 UTC

Description krishnaram Karthick 2017-03-21 13:39:18 UTC
Description of problem:
Today, when heketi setup fails in cns-deploy even though deploy-heketi succeeded, the tool aborts the deployment and cleans up the CNS-related pods. Since the heketidb volume is already set up and only heketi itself remains to be configured, it would be better to leave the deployment in place so the admin can complete the heketi configuration manually using the existing heketidb volume.
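The requested change amounts to a control-flow fix in cns-deploy: warn and stop on heketi failure instead of tearing everything down. A minimal sketch of that behavior, with a stub standing in for the real heketi health check (the function names here are hypothetical, not cns-deploy's actual ones):

```shell
#!/bin/sh
# Stub standing in for the real check (e.g. a request against the heketi
# route); forced to fail here to exercise the failure path.
heketi_ready() {
    return 1
}

if heketi_ready; then
    echo "heketi is up"
else
    # Old behavior: abort and clean up all CNS pods, losing the heketidb setup.
    # Requested behavior: leave the GlusterFS pods and heketidb volume intact
    # and tell the admin how to finish heketi configuration manually.
    echo "WARNING: heketi pod setup failed; leaving deployment in place." >&2
    echo "Complete heketi configuration manually using the existing heketidb volume." >&2
fi
```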

Version-Release number of selected component (if applicable):

How reproducible:

Steps to Reproduce:

Actual results:

Expected results:

Additional info:

Comment 2 Mohamed Ashiq 2017-03-21 13:40:46 UTC
Patch posted upstream.


Comment 7 krishnaram Karthick 2017-04-12 06:51:59 UTC
Verified the bug in cns-deploy-4.0.0-13.el7rhgs.x86_64; the issue is fixed.

To verify, I deleted the heketi template towards the end of the run so that heketi could not be set up. cns-deploy did not abort, as expected.

A message asking the admin to set up heketi manually would have been good here; I will file an enhancement bug separately.

Do you wish to proceed with deployment?

[Y]es, [N]o? [Default: Y]: y
Using OpenShift CLI.
NAME              STATUS    AGE
storage-project   Active    8d
Using namespace "storage-project".
Checking that heketi pod is not running ... OK
template "deploy-heketi" created
serviceaccount "heketi-service-account" created
template "heketi" created
template "glusterfs" created
role "edit" added: "system:serviceaccount:storage-project:heketi-service-account"
node "dhcp46-221.lab.eng.blr.redhat.com" labeled
node "dhcp46-222.lab.eng.blr.redhat.com" labeled
node "dhcp46-91.lab.eng.blr.redhat.com" labeled
daemonset "glusterfs" created
Waiting for GlusterFS pods to start ... OK
service "deploy-heketi" created
route "deploy-heketi" created
deploymentconfig "deploy-heketi" created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: 63a23110b91c538ddaf53992af5977f9
Creating node dhcp46-221.lab.eng.blr.redhat.com ... ID: 44470d27b706d89df4d50cdadc1f5783
Adding device /dev/sdf ... OK
Creating node dhcp46-222.lab.eng.blr.redhat.com ... ID: 2689ec4d55a2fe725c82952baa5de6bc
Adding device /dev/sdd ... OK
Creating node dhcp46-91.lab.eng.blr.redhat.com ... ID: 76332f61d060cc73ad2d192634b63390
Adding device /dev/sdd ... OK
heketi topology loaded.
Saving heketi-storage.json
secret "heketi-storage-secret" created
endpoints "heketi-storage-endpoints" created
service "heketi-storage-endpoints" created
job "heketi-storage-copy-job" created
deploymentconfig "deploy-heketi" deleted
route "deploy-heketi" deleted
service "deploy-heketi" deleted
job "heketi-storage-copy-job" deleted
pod "deploy-heketi-1-rf7sz" deleted
secret "heketi-storage-secret" deleted
service "heketi" created
route "heketi" created
deploymentconfig "heketi" created
Waiting for heketi pod to start ... OK
Failed to communicate with heketi service.
Please verify that a router has been properly configured.
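The final failure above is a connectivity check against the heketi route, so a first diagnostic step is to confirm that the route exists and answers. A hedged sketch, using the namespace and route name shown in the log above (heketi exposes a /hello health endpoint; the exact greeting it returns may vary by version), which skips gracefully when the OpenShift CLI is not available:

```shell
#!/bin/sh
# Diagnostic sketch for "Failed to communicate with heketi service".
if command -v oc >/dev/null 2>&1; then
    # Show the route object and probe heketi's /hello health endpoint.
    oc get route heketi -n storage-project
    HEKETI_HOST=$(oc get route heketi -n storage-project -o jsonpath='{.spec.host}')
    curl -s "http://${HEKETI_HOST}/hello"
else
    echo "oc not found; run this from a host with OpenShift CLI access"
fi
```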

Comment 8 errata-xmlrpc 2017-04-20 18:28:25 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

