Bug 1593640 - After import job fails, cluster is marked as managed and ready to use
Summary: After import job fails, cluster is marked as managed and ready to use
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: web-admin-tendrl-node-agent
Version: rhgs-3.4
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: RHGS 3.4.0
Assignee: gowtham
QA Contact: Filip Balák
URL:
Whiteboard:
Depends On:
Blocks: 1503137
 
Reported: 2018-06-21 09:48 UTC by gowtham
Modified: 2018-09-04 07:08 UTC

Fixed In Version: tendrl-commons-1.6.3-8.el7rhgs
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-09-04 07:07:57 UTC
Embargoed:


Attachments
import_job_failed (100.38 KB, image/png)
2018-06-21 09:48 UTC, gowtham
cluster list page (22.56 KB, image/png)
2018-06-21 09:49 UTC, gowtham


Links
Github Tendrl commons issue 1000 (closed): Cluster is marked as managed after import cluster job fails (last updated 2020-06-19 08:09:51 UTC)
Red Hat Product Errata RHSA-2018:2616 (last updated 2018-09-04 07:08:46 UTC)

Description gowtham 2018-06-21 09:48:18 UTC
Created attachment 1453392 [details]
import_job_failed

Description of problem:

During the import cluster flow I stopped the tendrl-monitoring-integration service. After a few minutes the import job failed with the error "Setting up cluster alias not yet complete. Timing out", but in the UI the cluster is ready to use. The problem is that even when the import job fails, we still update the cluster's is_managed flag to "yes".
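
For clarity, here is a minimal, self-contained Python sketch of the reported flaw. The Cluster class, the run_import_atoms helper, and the flow below are illustrative stand-ins, not the actual Tendrl code:

class Cluster:
    # Illustrative stand-in for the Tendrl cluster object.
    def __init__(self, name):
        self.name = name
        self.is_managed = "no"

def run_import_atoms(cluster):
    # Stand-in for the import atoms; simulates the "setup alias" atom
    # timing out because tendrl-monitoring-integration is down.
    raise TimeoutError("Setting up cluster alias not yet complete. Timing out")

def import_cluster(cluster):
    try:
        run_import_atoms(cluster)
        status = "finished"
    except TimeoutError:
        status = "failed"
    # BUG: the flag is updated regardless of the job outcome, so a
    # failed import still shows the cluster as managed in the UI.
    cluster.is_managed = "yes"
    return status

c = Cluster("gluster-cluster-1")
print(import_cluster(c), c.is_managed)  # prints: failed yes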

Version-Release number of selected component (if applicable):
tendrl-ansible-1.6.3-5.el7rhgs.noarch
tendrl-commons-1.6.3-7.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-5.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-node-agent-1.6.3-7.el7rhgs.noarch
tendrl-ui-1.6.3-4.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-5.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch

How reproducible:
100% reproducible using the steps below

Steps to Reproduce:
1. After the UI shows the cluster ready to import with all the storage nodes, stop the tendrl-monitoring-integration service:

service tendrl-monitoring-integration stop

2. Start the import cluster flow from tendrl-ui.
3. After a few minutes the import cluster flow fails.
4. Check the All_cluster list page; the cluster is marked as managed and ready to use.

Actual results:
The cluster is marked as managed and ready to use even after the import flow fails.

Expected results:
The cluster should be in an unmanaged state after the import job fails.

Additional info:

Comment 2 gowtham 2018-06-21 09:49:06 UTC
Created attachment 1453393 [details]
cluster list page

Comment 3 gowtham 2018-06-21 15:39:20 UTC
PR is under review: https://github.com/Tendrl/commons/pull/1001

Comment 4 Ju Lim 2018-07-02 18:59:32 UTC
I just redeployed with:

$ rpm -qa | grep tendrl | sort
tendrl-api-1.6.3-20180626T110501.5a1c79e.noarch
tendrl-api-httpd-1.6.3-20180626T110501.5a1c79e.noarch
tendrl-commons-1.6.3-20180628T114340.d094568.noarch
tendrl-grafana-plugins-1.6.3-20180622T070617.1f84bc8.noarch
tendrl-grafana-selinux-1.5.4-20180227T085901.984600c.noarch
tendrl-monitoring-integration-1.6.3-20180622T070617.1f84bc8.noarch
tendrl-node-agent-1.6.3-20180618T083110.ba580e6.noarch
tendrl-notifier-1.6.3-20180618T083117.fd7bddb.noarch
tendrl-selinux-1.5.4-20180227T085901.984600c.noarch
tendrl-ui-1.6.3-20180625T085228.23f862a.noarch

Import is failing at the moment (as the monitoring-integration agent does not stay up), but it now says "Import failed" (vs. "Ready to use"), so it appears that the fix is in there.

I've also updated https://github.com/Tendrl/ui/issues/995.

Comment 5 Ju Lim 2018-07-03 13:23:47 UTC
It appears that this bug has been fixed per https://bugzilla.redhat.com/show_bug.cgi?id=1593640#c4.

Comment 6 gowtham 2018-07-04 08:09:35 UTC
Creating an alias using the cluster name in the carbon-cache directory is one of the atoms in the import cluster flow. The problem Ju Lim faced here is that on her Vagrant machine the Grafana password was not set properly (why it was not set properly is a separate issue, most likely a Vagrant setup problem, because we tested with the macOS Vagrant package). So in this case monitoring-integration is down, and when we try to import a cluster the import fails when it tries to run the "setup alias" atom, because that atom is run only by monitoring-integration. The job therefore failed, but the cluster was still marked as managed and ready to use.

In the PR for this bug, I have fixed the problem so that when a job fails the cluster remains in an unmanaged state.
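
As a rough sketch of that fix, reusing the illustrative Cluster and run_import_atoms names from the sketch in the description (this is not the actual PR code): the is_managed flag is set to "yes" only when every atom succeeds, and a failed job leaves the cluster unmanaged.

def import_cluster_fixed(cluster):
    try:
        run_import_atoms(cluster)
    except TimeoutError:
        # Failed import: the cluster stays (or is reset to) unmanaged.
        cluster.is_managed = "no"
        return "failed"
    # Only a fully finished import marks the cluster as managed.
    cluster.is_managed = "yes"
    return "finished"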

Comment 7 Martin Bukatovic 2018-07-04 17:12:49 UTC
Reproducer in the description is clear enough.

Comment 11 Filip Balák 2018-07-23 14:23:44 UTC
It looks OK.

The import finishes and the cluster is available as managed with these versions:
tendrl-ansible-1.5.4-7.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-api-httpd-1.5.4-4.el7rhgs.noarch
tendrl-monitoring-integration-1.5.4-14.el7rhgs.noarch
tendrl-commons-1.5.4-9.el7rhgs.noarch
tendrl-node-agent-1.5.4-16.el7rhgs.noarch
tendrl-ui-1.5.4-6.el7rhgs.noarch
tendrl-grafana-plugins-1.5.4-14.el7rhgs.noarch
tendrl-notifier-1.5.4-6.el7rhgs.noarch
tendrl-api-1.5.4-4.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch

but the import correctly fails and the cluster is unavailable with:
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-api-httpd-1.6.3-4.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-7.el7rhgs.noarch
tendrl-ansible-1.6.3-5.el7rhgs.noarch
tendrl-commons-1.6.3-9.el7rhgs.noarch
tendrl-node-agent-1.6.3-9.el7rhgs.noarch
tendrl-ui-1.6.3-8.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-7.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-api-1.6.3-4.el7rhgs.noarch
--> VERIFIED

Comment 13 errata-xmlrpc 2018-09-04 07:07:57 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHSA-2018:2616

