Bug 1514784 - After storage node reboot, the host is shown as down on the Clusters page
Summary: After storage node reboot, the host is shown as down on the Clusters page
Keywords:
Status: CLOSED ERRATA
Alias: None
Product: Red Hat Gluster Storage
Classification: Red Hat
Component: web-admin-tendrl-ui
Version: rhgs-3.3
Hardware: Unspecified
OS: Unspecified
Priority: unspecified
Severity: unspecified
Target Milestone: ---
Target Release: ---
Assignee: Rohan Kanade
QA Contact: Martin Kudlej
URL:
Whiteboard:
Depends On:
Blocks:
 
Reported: 2017-11-18 14:59 UTC by Bala Konda Reddy M
Modified: 2017-12-18 04:37 UTC
CC List: 6 users

Fixed In Version: tendrl-node-agent-1.5.4-8.el7rhgs.noarch
Doc Type: If docs needed, set a value
Doc Text:
Clone Of:
Environment:
Last Closed: 2017-12-18 04:37:04 UTC
Target Upstream Version:


Attachments
The host is shown as down, but it is up and running (99.66 KB, image/png)
2017-11-18 14:59 UTC, Bala Konda Reddy M


Links
System ID Priority Status Summary Last Updated
Red Hat Product Errata RHEA-2017:3478 normal SHIPPED_LIVE RHGS Web Administration packages 2017-12-18 09:34:49 UTC
Github Tendrl node-agent issues 680 None None None 2017-11-23 12:20:36 UTC

Description Bala Konda Reddy M 2017-11-18 14:59:33 UTC
Created attachment 1354748 [details]
The host is shown as down, but it is up and running

Description of problem:
Created a storage cluster and imported it into the web admin successfully. Rebooted one of the storage nodes; the host came back up and is running. In the Clusters tab the host is shown as down (see the attachment), while in the Hosts tab it is shown as green (up).



Version-Release number of selected component (if applicable):
tendrl-ui-1.5.4-2

How reproducible:
1:1

Steps to Reproduce:
1. Created a 3-node cluster and imported it into the web admin
2. Import is successful
3. Rebooted one of the storage nodes; the node came back up and is running (see the check sketch after these steps)
4. Check the Clusters tab for that host; it is shown as down
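
For reference, a rough way to confirm on the rebooted node that the relevant services really are back up before checking the UI. This is only a sketch, not part of the product; the unit names are taken from the packages listed in this bug:

# Rough post-reboot check: confirm the node-agent and glusterd are active and
# the rebooted peer is still connected, before looking at the web admin UI.
import subprocess

def service_active(unit):
    # `systemctl is-active --quiet` exits 0 only when the unit is active.
    return subprocess.call(["systemctl", "is-active", "--quiet", unit]) == 0

print("tendrl-node-agent active:", service_active("tendrl-node-agent"))
print("glusterd active:", service_active("glusterd"))
print(subprocess.check_output(["gluster", "peer", "status"]).decode())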

Actual results:
The host is shown as down

Expected results:
The host should be shown as "Up".


Additional info:

Comment 2 Rohan Kanade 2017-11-23 10:19:47 UTC
Questions:
1) After reboot of the storage node, was the tendrl-node-agent service running?
2) Assuming you didn't change the tendrl-node-agent "sync_interval" config, did you wait 180 seconds for the new data to show up in the UI?


By design, on each restart of the node-agent the node status is set to "UP", and this status is used by the monitoring stack as well.
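
For illustration only, a minimal sketch of the status handling described above, assuming the agent keeps the node status under an etcd key such as /nodes/<node_id>/NodeContext/status and refreshes it once per sync_interval; the key path, TTL margin and node id handling here are assumptions, not the actual tendrl-node-agent code:

# Hypothetical sketch of the behaviour described in this comment; not the
# actual tendrl-node-agent implementation.
import time
import etcd  # python-etcd, as shipped with RHGS Web Administration

SYNC_INTERVAL = 180          # seconds, default "sync_interval" per this comment
NODE_ID = "example-node-id"  # assumption: the real agent derives this itself

client = etcd.Client(host="127.0.0.1", port=2379)

def publish_status():
    # On service start and on every sync cycle, mark the node as "UP".
    # A TTL slightly larger than the sync interval means the key expires,
    # and the node shows as down, only when no agent is refreshing it
    # (i.e. the node or the service really is down).
    client.write("/nodes/%s/NodeContext/status" % NODE_ID, "UP",
                 ttl=SYNC_INTERVAL + 30)

while True:
    publish_status()
    time.sleep(SYNC_INTERVAL)

With a scheme like this, the Clusters page would only show the host as down while nothing is refreshing the key, which is the behaviour this bug expects once the node comes back up.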

Comment 3 Rohan Kanade 2017-11-23 12:20:36 UTC
Fixed: https://github.com/Tendrl/node-agent/issues/680

Comment 5 Martin Kudlej 2017-11-30 11:50:31 UTC
Tested with
etcd-3.2.7-1.el7.x86_64
glusterfs-3.8.4-52.el7_4.x86_64
glusterfs-3.8.4-52.el7rhgs.x86_64
glusterfs-api-3.8.4-52.el7rhgs.x86_64
glusterfs-cli-3.8.4-52.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-52.el7_4.x86_64
glusterfs-client-xlators-3.8.4-52.el7rhgs.x86_64
glusterfs-events-3.8.4-52.el7rhgs.x86_64
glusterfs-fuse-3.8.4-52.el7_4.x86_64
glusterfs-fuse-3.8.4-52.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-52.el7rhgs.x86_64
glusterfs-libs-3.8.4-52.el7_4.x86_64
glusterfs-libs-3.8.4-52.el7rhgs.x86_64
glusterfs-rdma-3.8.4-52.el7rhgs.x86_64
glusterfs-server-3.8.4-52.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.3.x86_64
python-etcd-0.4.5-1.el7rhgs.noarch
python-gluster-3.8.4-52.el7rhgs.noarch
rubygem-etcd-0.3.0-1.el7rhgs.noarch
tendrl-ansible-1.5.4-2.el7rhgs.noarch
tendrl-api-1.5.4-3.el7rhgs.noarch
tendrl-api-httpd-1.5.4-3.el7rhgs.noarch
tendrl-collectd-selinux-1.5.4-1.el7rhgs.noarch
tendrl-commons-1.5.4-5.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-6.el7rhgs.noarch
tendrl-grafana-plugins-1.5.4-8.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-1.el7rhgs.noarch
tendrl-monitoring-integration-1.5.4-8.el7rhgs.noarch
tendrl-node-agent-1.5.4-8.el7rhgs.noarch
tendrl-notifier-1.5.4-5.el7rhgs.noarch
tendrl-selinux-1.5.4-1.el7rhgs.noarch
tendrl-ui-1.5.4-4.el7rhgs.noarch
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch

and it works. --> VERIFIED

Comment 7 errata-xmlrpc 2017-12-18 04:37:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3478

