Bug 1514784

Summary: After storage node reboot, the host is shown as down on the Clusters page
Product: [Red Hat Storage] Red Hat Gluster Storage
Component: web-admin-tendrl-ui
Version: rhgs-3.3
Status: CLOSED ERRATA
Severity: unspecified
Priority: unspecified
Keywords: ZStream
Hardware: Unspecified
OS: Unspecified
Reporter: Bala Konda Reddy M <bmekala>
Assignee: Rohan Kanade <rkanade>
QA Contact: Martin Kudlej <mkudlej>
CC: mkudlej, nthomas, rhinduja, rhs-bugs, rkanade, sankarshan
Fixed In Version: tendrl-node-agent-1.5.4-8.el7rhgs.noarch
Type: Bug
Last Closed: 2017-12-18 04:37:04 UTC
Attachments: The host is shown as down, but it is up and running

Description Bala Konda Reddy M 2017-11-18 14:59:33 UTC
Created attachment 1354748 [details]
The host is shown as down, but it is up and running

Description of problem:
Created a storage cluster and imported it into the web admin successfully. After rebooting one of the storage nodes, the host came back up and is running, but the Clusters tab shows it as down (see the attachment) while the Hosts tab shows it as green.



Version-Release number of selected component (if applicable):
tendrl-ui-1.5.4-2

How reproducible:
1:1

Steps to Reproduce:
1. Create a 3-node cluster and import it into the web admin
2. Confirm the import is successful
3. Reboot one of the storage nodes and wait until it is back up and running
4. Check the Clusters tab for that host; it is shown as down even though the node is up (a CLI cross-check sketch follows below)
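
To ground step 4, here is a minimal cross-check sketch, assuming it is run from another node in the same trusted pool; the hostname is a hypothetical placeholder. It confirms the rebooted peer is genuinely up at the gluster level while the Clusters tab still shows it as down.

# Sketch: confirm the rebooted node is actually up while the UI shows it down.
# Run from another storage node in the trusted pool; HOST is a placeholder.
import subprocess

HOST = "storage-node-2.example.com"  # hypothetical hostname of the rebooted node

# "gluster peer status" lists each peer in the pool with its connection state.
peers = subprocess.check_output(["gluster", "peer", "status"]).decode()
for block in peers.split("\n\n"):
    if HOST in block:
        # Expect "State: Peer in Cluster (Connected)" if the node is really up.
        print(block)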

Actual results:
The host is shown as down

Expected results:
The host should be shown as "Up".


Additional info:

Comment 2 Rohan Kanade 2017-11-23 10:19:47 UTC
Questions:
1) After the reboot of the storage node, was the tendrl-node-agent service running?
2) Assuming you didn't change the tendrl-node-agent "sync_interval" config, did you wait 180 seconds for new data to show up in the UI? (A verification sketch follows below.)
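
For reference, a minimal sketch that checks both points on the rebooted node; the config path and key name are assumptions for illustration, not taken from this report.

# Sketch: verify the two questions above on the rebooted node.
import subprocess
import yaml

# 1) Is the tendrl-node-agent service running after the reboot?
rc = subprocess.call(["systemctl", "is-active", "--quiet", "tendrl-node-agent"])
print("tendrl-node-agent active:", rc == 0)

# 2) Which sync_interval is in effect? New data reaches the UI only after
#    the next sync cycle completes.
with open("/etc/tendrl/node-agent/node-agent.conf.yaml") as f:  # assumed path
    conf = yaml.safe_load(f)
print("sync_interval:", conf.get("sync_interval", "not set (default)"))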


By design, at each restart of tendrl-node-agent the node status is set to UP, and this status is consumed by the monitoring stack as well.
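
Roughly, the pattern is the sketch below, using the python-etcd client already present in this stack; the key path, TTL, and interval are illustrative assumptions rather than the actual node-agent code. The idea is that the agent asserts UP on start and keeps the key alive each sync cycle, so the status goes stale only if the agent itself stops.

# Sketch of the status-on-restart pattern (illustrative, not the tendrl source).
import time
import etcd

client = etcd.Client(host="127.0.0.1", port=2379)
key = "/nodes/<node-id>/NodeContext/status"  # hypothetical key path

# On agent start: assert UP with a TTL longer than one sync cycle.
client.write(key, "UP", ttl=240)

# Each sync cycle: refresh the TTL so the key never expires while the agent runs.
while True:
    time.sleep(180)
    client.refresh(key, ttl=240)  # agent death -> key expires -> status DOWN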

Comment 3 Rohan Kanade 2017-11-23 12:20:36 UTC
Fixed: https://github.com/Tendrl/node-agent/issues/680

Comment 5 Martin Kudlej 2017-11-30 11:50:31 UTC
Tested with
etcd-3.2.7-1.el7.x86_64
glusterfs-3.8.4-52.el7_4.x86_64
glusterfs-3.8.4-52.el7rhgs.x86_64
glusterfs-api-3.8.4-52.el7rhgs.x86_64
glusterfs-cli-3.8.4-52.el7rhgs.x86_64
glusterfs-client-xlators-3.8.4-52.el7_4.x86_64
glusterfs-client-xlators-3.8.4-52.el7rhgs.x86_64
glusterfs-events-3.8.4-52.el7rhgs.x86_64
glusterfs-fuse-3.8.4-52.el7_4.x86_64
glusterfs-fuse-3.8.4-52.el7rhgs.x86_64
glusterfs-geo-replication-3.8.4-52.el7rhgs.x86_64
glusterfs-libs-3.8.4-52.el7_4.x86_64
glusterfs-libs-3.8.4-52.el7rhgs.x86_64
glusterfs-rdma-3.8.4-52.el7rhgs.x86_64
glusterfs-server-3.8.4-52.el7rhgs.x86_64
gluster-nagios-addons-0.2.10-2.el7rhgs.x86_64
gluster-nagios-common-0.2.4-1.el7rhgs.noarch
libvirt-daemon-driver-storage-gluster-3.2.0-14.el7_4.3.x86_64
python-etcd-0.4.5-1.el7rhgs.noarch
python-gluster-3.8.4-52.el7rhgs.noarch
rubygem-etcd-0.3.0-1.el7rhgs.noarch
tendrl-ansible-1.5.4-2.el7rhgs.noarch
tendrl-api-1.5.4-3.el7rhgs.noarch
tendrl-api-httpd-1.5.4-3.el7rhgs.noarch
tendrl-collectd-selinux-1.5.4-1.el7rhgs.noarch
tendrl-commons-1.5.4-5.el7rhgs.noarch
tendrl-gluster-integration-1.5.4-6.el7rhgs.noarch
tendrl-grafana-plugins-1.5.4-8.el7rhgs.noarch
tendrl-grafana-selinux-1.5.4-1.el7rhgs.noarch
tendrl-monitoring-integration-1.5.4-8.el7rhgs.noarch
tendrl-node-agent-1.5.4-8.el7rhgs.noarch
tendrl-notifier-1.5.4-5.el7rhgs.noarch
tendrl-selinux-1.5.4-1.el7rhgs.noarch
tendrl-ui-1.5.4-4.el7rhgs.noarch
vdsm-gluster-4.17.33-1.2.el7rhgs.noarch

and it works. --> VERIFIED

Comment 7 errata-xmlrpc 2017-12-18 04:37:04 UTC
Since the problem described in this bug report should be
resolved in a recent advisory, it has been closed with a
resolution of ERRATA.

For information on the advisory, and where to find the updated
files, follow the link below.

If the solution does not work for you, open a new bug report.

https://access.redhat.com/errata/RHEA-2017:3478