Bug 1595360

Summary: WA UI does not reflect the node down status when I reboot a node in the cluster
Product: [Red Hat Storage] Red Hat Gluster Storage
Reporter: Anand Paladugu <apaladug>
Component: web-admin-tendrl-ui
Assignee: Neha Gupta <negupta>
Status: CLOSED WONTFIX
QA Contact: sds-qe-bugs
Severity: high
Docs Contact:
Priority: unspecified
Version: rhgs-3.4
CC: mbukatov, nthomas, rhs-bugs, sankarshan
Target Milestone: ---
Target Release: ---
Hardware: All
OS: All
Whiteboard:
Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Story Points: ---
Clone Of:
Environment:
Last Closed: 2018-07-03 10:02:53 UTC
Type: Bug
Regression: ---
Mount Type: ---
Documentation: ---
CRM:
Verified Versions:
Category: ---
oVirt Team: ---
RHEL 7.3 requirements from Atomic Host:
Cloudforms Team: ---
Target Upstream Version:
Embargoed:
Attachments:
Screen shot 1 (flags: none)
Screen shot 2 (flags: none)

Description Anand Paladugu 2018-06-26 17:58:29 UTC
Created attachment 1454779 [details]
Screen shot 1

Description of problem: WA UI does not reflect the node down status when I reboot a node in the cluster.

Alerts looked good: they indicated that a node went down, that the volume was degraded, and so on.

Some UI issues that I noticed:

1. The UI shows the cluster status as unhealthy only after a noticeable delay (about 1.5 minutes), even though I have set the polling frequency to 5 seconds. In some cases, by the time the UI shows the unhealthy status, the node reboot has almost completed. (A timing sketch follows this list.)

2. I repeated the node reboot three times. In all instances the UI showed the cluster status as unhealthy (after a delay); in two cases it showed the corresponding volumes and bricks as degraded/unavailable; in one case it showed only the volume as degraded; and in none of the cases did it show the host as down. I have attached all three screenshots.
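
For illustration only (this is not Tendrl code): a minimal timing sketch of why a 5-second UI poll can still lag this badly. The UI can never be fresher than the backend data it reads; the backend sync period below is a hypothetical value, and the 150-second node TTL is the figure later given in comment 3.

```
# Minimal timing sketch, not Tendrl code.
UI_POLL_INTERVAL = 5         # seconds, the polling frequency set by the reporter
BACKEND_SYNC_INTERVAL = 60   # seconds, hypothetical backend sync period
NODE_TTL = 150               # seconds, node liveness TTL per comment 3

# Worst case: the node goes down just after refreshing its TTL key, the TTL
# must fully expire, the next backend sync must then run, and the UI must
# poll once more before the change becomes visible.
worst_case = NODE_TTL + BACKEND_SYNC_INTERVAL + UI_POLL_INTERVAL
print(f"worst-case delay before the UI shows the node as down: {worst_case} s")
```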


Version-Release number of selected component (if applicable): 3.4 (sandbox environment)


How reproducible: Always


Steps to Reproduce:
1. Reboot a node in the cluster and watch the cluster, host, volume, and brick status in the cluster-view dashboard.

Actual results:
The UI shows the cluster as unhealthy only after a delay of roughly 1.5 minutes; volume and brick status is inconsistent across runs; and the rebooted host is never shown as down.

Expected results:
The UI reflects the node down status (host down, volumes and bricks degraded) shortly after the node goes down.


Additional info:

Comment 2 Anand Paladugu 2018-06-26 17:59:15 UTC
Created attachment 1454780 [details]
Screen shot 2

Comment 3 Nishanth Thomas 2018-06-27 11:01:19 UTC
A reboot will not be reflected in the UI because the node comes back up quickly: the TTL for nodes is 150 seconds, and the status is only reflected in the sync that runs after the TTL expires. If you shut the node down instead, you will see that reflected in the UI, as expected, once the TTL expires.
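
For illustration only (this is not the Tendrl implementation), a minimal sketch of the TTL-based liveness described above: a node counts as down only once its heartbeat is older than the TTL, so a reboot that completes within the 150-second window is never observed as down by the sync.

```
import time

# Minimal sketch of TTL-based node liveness; not the Tendrl implementation.
NODE_TTL = 150    # seconds, per this comment

heartbeats = {}   # node name -> monotonic time of the last heartbeat

def refresh(node):
    """Called periodically by the node agent while the node is up."""
    heartbeats[node] = time.monotonic()

def is_down(node):
    """A node counts as down only after its TTL has fully expired."""
    last = heartbeats.get(node)
    return last is None or time.monotonic() - last > NODE_TTL

refresh("node1")
# A reboot that completes in well under 150 s refreshes the heartbeat again
# before the TTL expires, so a sync never observes the node as down.
print(is_down("node1"))  # False
```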

As per the discussion in the triage meeting, QE needs to verify the behaviour and post the results here.

Comment 4 Martin Bukatovic 2018-06-27 18:22:20 UTC
Assuming this has been reported based on the sandbox-usm1 instance, the full package version list follows:

```
[root@sandbox-usm1-server ~]# rpm -qa | grep tendrl | sort
tendrl-ansible-1.6.3-5.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-7.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-5.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-5.el7rhgs.noarch
tendrl-node-agent-1.6.3-7.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-4.el7rhgs.noarch
```

Comment 5 Nishanth Thomas 2018-07-03 10:02:53 UTC
Based on comment 3 (https://bugzilla.redhat.com/show_bug.cgi?id=1595360#c3), I am closing this bug. Please re-open if you feel otherwise.

Comment 6 Martin Bukatovic 2019-09-24 08:22:58 UTC
I'm no longer testing RHGS WA project.