Bug 1595360 - WA UI does not reflect the node down status when I reboot a node in the cluster
Summary: WA UI does not reflect the node down status when I reboot a node in the cluster
Keywords:
Status: CLOSED WONTFIX
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat Storage
Component: web-admin-tendrl-ui
Version: rhgs-3.4
Hardware: All
OS: All
Priority: unspecified
Severity: high
Target Milestone: ---
Target Release: ---
Assignee: Neha Gupta
QA Contact: sds-qe-bugs
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2018-06-26 17:58 UTC by Anand Paladugu
Modified: 2019-09-24 08:22 UTC
CC List: 4 users

Fixed In Version:
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2018-07-03 10:02:53 UTC
Embargoed:


Attachments (Terms of Use)
Screen shot 1 (443.01 KB, image/png), uploaded 2018-06-26 17:58 UTC by Anand Paladugu
Screen shot 2 (421.94 KB, image/png), uploaded 2018-06-26 17:59 UTC by Anand Paladugu


Links
Red Hat Bugzilla 1590391 (unspecified, CLOSED): Describe refresh intervals, default values and visible consequences for the user. Last updated: 2021-02-22 00:41:40 UTC

Internal Links: 1590391

Description Anand Paladugu 2018-06-26 17:58:29 UTC
Created attachment 1454779
Screen shot 1

Description of problem: WA UI does not reflect the node down status when I reboot a node in the cluster.

Alerts looked good: they indicated that a node went down, that the volume was degraded, and so on.

Some UI issues that I noticed:

1. The UI shows the cluster status as unhealthy only after a noticeable delay (about a minute and a half), even though I have set the polling frequency to 5 seconds; a quick way to measure the backend delay independently of the UI is sketched after this list. In some cases, by the time the UI shows the unhealthy status, the node reboot has almost completed.

2. I have repeated the node reboot case three times. In all instances the UI showed the cluster status as unhealthy (after a delay). In two cases it also showed the corresponding volumes and bricks as degraded/unavailable; in one case it showed only the volume as degraded; and in none of the cases did it show the host as down. I have attached all three screen shots.
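
A minimal sketch of such a measurement, assuming standard Gluster CLI access on one of the surviving nodes (`<rebooted-node>` is a placeholder for the actual hostname):

```
# Sample the peer state every 5 seconds while the node reboots, and
# compare these timestamps against when the WA UI flips to unhealthy.
while true; do
    date -u '+%FT%TZ'
    gluster peer status | grep -A 2 'Hostname: <rebooted-node>'
    sleep 5
done
```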


Version-Release number of selected component (if applicable): 3.4 (sandbox environment)


How reproducible: Always


Steps to Reproduce:
1. Reboot a node in the cluster and watch the cluster, host, volume, and brick status in the cluster-view dashboard.

Actual results:


Expected results:


Additional info:

Comment 2 Anand Paladugu 2018-06-26 17:59:15 UTC
Created attachment 1454780
Screen shot 2

Comment 3 Nishanth Thomas 2018-06-27 11:01:19 UTC
A reboot will not be reflected in the UI because the node comes back up quickly. The TTL for nodes is 150 seconds, and status changes are picked up on the next sync. So if you shut down the node instead of rebooting it, you will see that reflected in the UI, as expected, after the TTL expires.
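
To make the timing concrete, here is a minimal sketch of that interaction; the 150-second TTL comes from this comment, while the 120-second reboot duration is an assumed illustrative value:

```
# Illustrative only: the TTL value is from this comment, the reboot duration is assumed.
ttl=150          # seconds before a node's "up" status expires
reboot_time=120  # assumed duration of a typical node reboot

if [ "$reboot_time" -lt "$ttl" ]; then
    echo "Node re-registers before the TTL expires, so the UI never shows it as down."
else
    echo "TTL expires during the outage, so the UI shows the node as down on the next sync."
fi
```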

As per the discussion in the triage meeting, QE needs to verify this behaviour and post the results here.

Comment 4 Martin Bukatovic 2018-06-27 18:22:20 UTC
Assuming this has been reported based on the sandbox-usm1 instance, the full package version list follows:

```
[root@sandbox-usm1-server ~]# rpm -qa | grep tendrl | sort
tendrl-ansible-1.6.3-5.el7rhgs.noarch
tendrl-api-1.6.3-3.el7rhgs.noarch
tendrl-api-httpd-1.6.3-3.el7rhgs.noarch
tendrl-commons-1.6.3-7.el7rhgs.noarch
tendrl-grafana-plugins-1.6.3-5.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-2.el7rhgs.noarch
tendrl-monitoring-integration-1.6.3-5.el7rhgs.noarch
tendrl-node-agent-1.6.3-7.el7rhgs.noarch
tendrl-notifier-1.6.3-4.el7rhgs.noarch
tendrl-selinux-1.5.4-2.el7rhgs.noarch
tendrl-ui-1.6.3-4.el7rhgs.noarch
```

Comment 5 Nishanth Thomas 2018-07-03 10:02:53 UTC
Based on comment 3 (https://bugzilla.redhat.com/show_bug.cgi?id=1595360#c3), I am closing this bug. Please re-open if you feel otherwise.

Comment 6 Martin Bukatovic 2019-09-24 08:22:58 UTC
I'm no longer testing RHGS WA project.

