Bug 1514423

Summary: Cluster import fails, yet 'Clusters' tab in webadmin server shows info of all hosts in green
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Bala Konda Reddy M <bmekala>
Component: web-admin-tendrl-ui
Assignee: Neha Gupta <negupta>
Status: CLOSED ERRATA
QA Contact: Rochelle <rallan>
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.3
CC: nthomas, ppenicka, rallan, rhinduja, rhs-bugs, rkanade, sankarshan, ssaha
Target Milestone: ---
Keywords: ZStream
Target Release: ---
Hardware: Unspecified
OS: Unspecified
Whiteboard:
Fixed In Version: tendrl-ui-1.5.4-3.el7rhgs.noarch
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2017-12-18 04:37:04 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
The cluster tab rightly depicts the status of the hosts and the cluster even when import fails

Description Bala Konda Reddy M 2017-11-17 11:24:22 UTC
Description of problem:
=======================

On a 4-node cluster running glusterfs-3.8.4-52, with a tendrl server carrying the packages listed below, if the gluster import fails for any reason, the Tasks tab correctly displays the failure and its message. The 'status' field in Tasks shows 'failed', as expected.
The Clusters tab, however, shows all the hosts as successfully imported; in other words, it shows all green, giving no indication that the import has failed.

Version-Release number of selected component (if applicable):
tendrl-monitoring-integration-1.5.4-3.el7rhgs.noarch
tendrl-commons-1.5.4-2.el7rhgs.noarch
tendrl-api-httpd-1.5.4-2.el7rhgs.noarch
tendrl-ansible-1.5.4-1.el7rhgs.noarch
tendrl-selinux-1.5.3-2.el7rhgs.noarch
tendrl-grafana-plugins-1.5.4-3.el7rhgs.noarch
tendrl-node-agent-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.5.4-2.el7rhgs.noarch
tendrl-grafana-selinux-1.5.3-2.el7rhgs.noarch
tendrl-notifier-1.5.4-2.el7rhgs.noarch
tendrl-api-1.5.4-2.el7rhgs.noarch

How reproducible:
================
2:2 (reproduced on both attempts)


Steps to Reproduce:
==================
1. Create a 4-node storage cluster with the latest gluster 3.3.1 bits
2. Run the ansible playbook to set up the webadmin server and storage nodes
3. Enable the webadmin repos on all storage nodes except one
4. Import the cluster from webadmin

Actual results:
==============
The import fails as expected, and the Tasks tab correctly reports it. The Clusters tab does not.

Expected results:
=================
Information should be consistent across all tabs of webadmin. Showing all green on the Clusters tab is misleading to the end user.

Additional info:
================
Update from Nishanth when this was shown to him: "The field 'managed' in the Clusters tab is getting flipped to 'Yes', even when the import has failed."
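The flipped flag can be sketched as a small derivation function. This is an illustrative sketch only, not the actual tendrl-ui code; the job object shape and the status values 'finished'/'failed' are assumptions based on this report:

```javascript
// Illustrative sketch only -- not the actual tendrl-ui implementation.
// Derives the 'managed' flag shown on the Clusters tab from the status
// of the import job, so a failed import is never displayed as 'Yes'.
// Field names and status values are assumptions.
function managedState(importJob) {
  if (!importJob) {
    return 'No'; // no import has been attempted yet
  }
  return importJob.status === 'finished' ? 'Yes' : 'No';
}

// A failed import should leave the cluster unmanaged.
console.log(managedState({ status: 'failed' }));   // → No
console.log(managedState({ status: 'finished' })); // → Yes
```

The point is simply that the displayed flag should be derived from the import job's final status rather than set optimistically when the import starts.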

Comment 2 Rohan Kanade 2017-11-17 14:42:36 UTC
Whether the hosts of the cluster are shown as green depends on the state of those hosts as reported by the tendrl-node-agents running on each of them.

What needs to be checked is whether the Cluster is marked with errors during the failed import correctly.

Comment 3 Nishanth Thomas 2017-11-17 19:13:52 UTC
As mentioned in https://bugzilla.redhat.com/show_bug.cgi?id=1514423#c2, the icon will still be green, but appropriate error messages are shown in the UI. You will also see the correct menus and buttons based on the status of the cluster.

Comment 5 Rochelle 2017-11-22 06:51:50 UTC
Created attachment 1357257 [details]
The cluster tab rightly depicts the status of the hosts and the cluster even when import fails

Under the Clusters tab, when the cluster import fails, the hosts are marked green but the cluster is shown as down, which is correctly depicted.

Adding the attachment.

Moving this bug to verified.

Comment 8 errata-xmlrpc 2017-12-18 04:37:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3478